
InferX AI Function Platform (Lambda Function for Inference)

    --   Serve tens of models in one box with ultra-fast (<2 s) cold starts (contact: support@inferx.net)



Pods

| Tenant | Namespace | Pod Name | State | Node Name | Req GPU Count | Req GPU vRAM | Type | Standby (GPU) | Standby (Pageable) | Standby (Pinned) | Allocated GPU vRAM (MB) | Allocated GPU Slots |
|---|---|---|---|---|---|---|---|---|---|---|---|---|
| public | llava-hf | public/llava-hf/llava-1.5-7b-hf/281/952 | Standby | node3 | 1 | 14000 MB | Restore | Blob: 13946 MB | Blob: 584 MB | Blob: 0 MB | 0 | {} |