
InferX AI Function Platform (Lambda Functions for Inference)

    --   Serve tens of models on one box with ultra-fast (<2 sec) cold start (contact: support@inferx.net)

Pods

| Tenant | Namespace | Pod name | State | Node | Req GPU count | Req GPU vRAM | Type | Standby gpu | Standby pageable | Standby pinned | Allocated GPU vRAM (MB) | Allocated GPU slots |
|--------|-----------|----------|-------|------|---------------|--------------|------|-------------|------------------|-----------------|--------------------------|---------------------|
| public | allenai | public/allenai/OLMo-1B-hf/219/1170 | Standby | node2 | 1 | 14600 MB | Restore | Blob: 13682 MB | Blob: 1274 MB | Blob: 4096 MB | 0 | {} |
| public | allenai | public/allenai/OLMo-1B-hf_2gpu/221/1226 | Standby | node2 | 2 | 14600 MB | Restore | Blob: 26538 MB | Blob: 1634 MB | Blob: 8192 MB | 0 | {} |
| public | allenai | public/allenai/OLMo-7B-hf/223/1172 | Standby | node2 | 2 | 13800 MB | Restore | Blob: 25248 MB | Blob: 1790 MB | Blob: 8192 MB | 0 | {} |
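To illustrate how the table data above might be consumed programmatically, here is a minimal sketch that parses the pod rows into records and totals the standby GPU vRAM per node. The `Pod` dataclass and `parse_mb` helper are hypothetical names introduced for this example, not part of any InferX API; the numbers are copied directly from the table.

```python
import re
from dataclasses import dataclass

@dataclass
class Pod:
    tenant: str
    namespace: str
    name: str
    state: str
    node: str
    gpu_count: int
    req_vram_mb: int
    standby_gpu_mb: int
    standby_pageable_mb: int
    standby_pinned_mb: int

def parse_mb(field: str) -> int:
    """Extract the integer MB value from a field like 'Blob: 13682 MB'."""
    m = re.search(r"(\d+)\s*MB", field)
    if not m:
        raise ValueError(f"no MB value in {field!r}")
    return int(m.group(1))

# Rows copied from the dashboard table above.
rows = [
    ("public", "allenai", "public/allenai/OLMo-1B-hf/219/1170", "Standby", "node2",
     1, "14600 MB", "Blob: 13682 MB", "Blob: 1274 MB", "Blob: 4096 MB"),
    ("public", "allenai", "public/allenai/OLMo-1B-hf_2gpu/221/1226", "Standby", "node2",
     2, "14600 MB", "Blob: 26538 MB", "Blob: 1634 MB", "Blob: 8192 MB"),
    ("public", "allenai", "public/allenai/OLMo-7B-hf/223/1172", "Standby", "node2",
     2, "13800 MB", "Blob: 25248 MB", "Blob: 1790 MB", "Blob: 8192 MB"),
]

pods = [
    Pod(t, ns, n, st, node, g,
        parse_mb(req), parse_mb(sg), parse_mb(sp), parse_mb(spin))
    for (t, ns, n, st, node, g, req, sg, sp, spin) in rows
]

# Total GPU vRAM held by standby snapshots on node2.
total_standby = sum(p.standby_gpu_mb for p in pods if p.node == "node2")
print(total_standby)  # 13682 + 26538 + 25248 = 65468
```

Keeping the standby snapshot sizes per node makes it easy to check whether another model's snapshot would fit before scheduling it.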