
InferX AI Function Platform (Lambda Function for Inference)

    --   Serve tens of models in one box with ultra-fast (<2 sec) cold start (contact: support@inferx.net)



Pods

| tenant | namespace | pod name | state | node name | req GPU count | req GPU vRAM | type | standby GPU | standby pageable | standby pinned | allocated GPU vRAM (MB) | allocated GPU slots |
|--------|-----------|----------|-------|-----------|---------------|--------------|------|-------------|------------------|----------------|-------------------------|---------------------|
| public | bigcode | public/bigcode/starcoder2-3b/284/942 | Standby | node3 | 1 | 13800 MB | Restore | Blob: 12444 MB | Blob: 1278 MB | Blob: 7680 MB | 0 | {} |
| public | bigcode | public/bigcode/starcoder2-7b/359/943 | Standby | node3 | 2 | 13800 MB | Restore | Blob: 24450 MB | Blob: 1634 MB | Blob: 8192 MB | 0 | {} |
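To make the standby columns concrete, here is a minimal sketch of how the rows above could be modeled and summed in Python. The `Pod` record type, its field names, and the `standby_footprint_mb` helper are illustrative assumptions, not part of the InferX API; the numbers are taken directly from the table.

```python
from dataclasses import dataclass

# Hypothetical record mirroring one row of the Pods table above;
# field names are illustrative, not an InferX API.
@dataclass
class Pod:
    tenant: str
    namespace: str
    name: str
    state: str
    node: str
    req_gpu_count: int
    req_gpu_vram_mb: int
    standby_gpu_mb: int       # "standby GPU" blob size
    standby_pageable_mb: int  # "standby pageable" blob size
    standby_pinned_mb: int    # "standby pinned" blob size

# Values copied from the two table rows.
pods = [
    Pod("public", "bigcode", "public/bigcode/starcoder2-3b/284/942",
        "Standby", "node3", 1, 13800, 12444, 1278, 7680),
    Pod("public", "bigcode", "public/bigcode/starcoder2-7b/359/943",
        "Standby", "node3", 2, 13800, 24450, 1634, 8192),
]

def standby_footprint_mb(p: Pod) -> int:
    """Total snapshot size a standby pod would restore from:
    GPU blob + pageable host blob + pinned host blob."""
    return p.standby_gpu_mb + p.standby_pageable_mb + p.standby_pinned_mb

for p in pods:
    print(p.name, standby_footprint_mb(p), "MB")
```

For the rows shown, the sketch reports 21402 MB for the 3B pod and 34276 MB for the 7B pod, consistent with the blob sizes listed per column.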