
InferX AI Function Platform (Lambda-style Functions for Inference)

    --   Serve tens of models on a single box with ultra-fast (<2 s) cold starts (contact: support@inferx.net)


Pods

| Tenant | Namespace | Pod Name | State | Node | Req GPU Count | Req GPU vRAM | Type | Standby GPU | Standby Pageable | Standby Pinned | Allocated GPU vRAM (MB) | Allocated GPU Slots |
|--------|-----------|----------|-------|------|---------------|--------------|------|-------------|------------------|----------------|-------------------------|---------------------|
| public | openbmb | public/openbmb/MiniCPM-2B-dpo-bf16/208/1191 | Standby | node2 | 1 | 13800 MB | Restore | Blob: 12704 MB | Blob: 1308 MB | Blob: 5120 MB | 0 | {} |
| public | openbmb | public/openbmb/MiniCPM-2B-sft-bf16/210/1205 | Standby | node2 | 1 | 9000 MB | Restore | Blob: 8544 MB | Blob: 1308 MB | Blob: 5120 MB | 0 | {} |
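The Standby columns above report each function's warm-state snapshot split across three memory tiers: a GPU blob, a pageable host-memory blob, and a pinned host-memory blob. A minimal sketch of how those per-tier sizes add up to a function's total standby footprint, using the values from the table (the dictionary and helper below are illustrative, not InferX APIs):

```python
# Standby snapshot blob sizes per memory tier, in MB, copied from the
# pod table above. Keys and the helper below are hypothetical names
# chosen for this sketch; they are not part of the InferX platform.
standby_blobs_mb = {
    "MiniCPM-2B-dpo-bf16": {"gpu": 12704, "pageable": 1308, "pinned": 5120},
    "MiniCPM-2B-sft-bf16": {"gpu": 8544, "pageable": 1308, "pinned": 5120},
}

def total_standby_mb(blobs: dict) -> int:
    """Total standby snapshot size across all memory tiers, in MB."""
    return sum(blobs.values())

for model, blobs in standby_blobs_mb.items():
    print(f"{model}: {total_standby_mb(blobs)} MB total standby")
```

Note that while a function is in the Standby state, its allocated GPU vRAM is 0: the snapshot lives in host memory (pageable and pinned blobs) plus a serialized GPU blob, and GPU resources are only claimed on restore.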