Pods
| tenant | namespace | pod name | state | Node name | Req GPU Count | Req GPU vRAM | Type | Standby | Allocated GPU vRAM, gpu (MB) | Allocated GPU vRAM, pageable (MB) | Allocated GPU vRAM, pinned (MB) | Allocated GPU Slots |
|---|---|---|---|---|---|---|---|---|---|---|---|---|
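For readers who consume this listing programmatically, the sketch below models one row of the Pods table above as a plain record. It is a minimal illustration only: the field names mirror the column headers, while the types, the `PodState` value set, and the names `PodRow`/`PodState` are assumptions for illustration and are not taken from the InferX API.

```python
from dataclasses import dataclass
from enum import Enum


class PodState(str, Enum):
    """Illustrative pod states; the actual state set used by InferX is assumed."""
    PENDING = "Pending"
    RUNNING = "Running"
    STANDBY = "Standby"
    FAILED = "Failed"


@dataclass
class PodRow:
    """One row of the Pods table; field names follow the column headers."""
    tenant: str
    namespace: str
    pod_name: str
    state: PodState
    node_name: str
    req_gpu_count: int
    req_gpu_vram_mb: int              # requested GPU vRAM, in MB
    pod_type: str                     # "Type" column; value set is assumed
    standby: bool
    alloc_gpu_vram_gpu_mb: int        # allocated vRAM resident on the GPU, MB
    alloc_gpu_vram_pageable_mb: int   # allocated vRAM backed by pageable host memory, MB
    alloc_gpu_vram_pinned_mb: int     # allocated vRAM backed by pinned host memory, MB
    alloc_gpu_slots: int
```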
InferX AI Function Platform (Lambda Function for Inference) -- serves tens of models in one box with ultra-fast (<2 sec) cold start (contact: support@inferx.net)