
InferX AI Function Platform (Lambda Function for Inference)

    --   Serve tens of models in one box with ultra-fast (<2 sec) cold start (contact: support@inferx.net)



Pods

| Tenant | Namespace | Pod Name | State | Node Name | Req GPU Count | Req GPU vRAM | Type | Standby (GPU) | Standby (Pageable) | Standby (Pinned) | Allocated GPU vRAM (MB) | Allocated GPU Slots |
|---|---|---|---|---|---|---|---|---|---|---|---|---|
| public | deepseek-ai | public/deepseek-ai/DeepSeek-R1-Distill-Llama-8B/262/945 | Standby | node3 | 2 | 13800 MB | Restore | Blob: 23700 MB | Blob: 1644 MB | Blob: 8192 MB | 0 | {} |
| public | deepseek-ai | public/deepseek-ai/DeepSeek-R1-Distill-Qwen-1.5B/264/946 | Standby | node3 | 1 | 13000 MB | Restore | Blob: 11954 MB | Blob: 1354 MB | Blob: 7168 MB | 0 | {} |
| public | deepseek-ai | public/deepseek-ai/DeepSeek-R1-Distill-Qwen-7B/266/947 | Standby | node3 | 2 | 13800 MB | Restore | Blob: 23490 MB | Blob: 1638 MB | Blob: 14336 MB | 0 | {} |
| public | deepseek-ai | public/deepseek-ai/deepseek-llm-7b-chat/268/948 | Standby | node3 | 2 | 13800 MB | Restore | Blob: 23928 MB | Blob: 1612 MB | Blob: 15360 MB | 0 | {} |
| public | deepseek-ai | public/deepseek-ai/deepseek-math-7b-instruct/271/949 | Standby | node3 | 2 | 13800 MB | Restore | Blob: 23928 MB | Blob: 1816 MB | Blob: 15360 MB | 0 | {} |
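Each row pairs a pod's requested GPU resources with a standby snapshot split across three blobs (GPU memory, pageable host memory, pinned host memory), while the pod itself holds zero allocated GPU vRAM. As a minimal sketch of how one row could be modeled, here is a hypothetical Python representation; the `PodEntry` class, its field names, and the `parse_blob_mb` helper are illustrative assumptions, not part of the InferX API.

```python
from dataclasses import dataclass

def parse_blob_mb(cell: str) -> int:
    """Parse a dashboard cell like 'Blob: 23700 MB' into an integer MB value.
    (Hypothetical helper; the cell format is copied from the table above.)"""
    return int(cell.replace("Blob:", "").replace("MB", "").strip())

@dataclass
class PodEntry:
    """Illustrative model of one row of the Pods table (not the InferX API)."""
    tenant: str
    namespace: str
    pod_name: str             # e.g. "public/deepseek-ai/DeepSeek-R1-Distill-Llama-8B/262/945"
    state: str                # e.g. "Standby"
    node_name: str
    req_gpu_count: int
    req_gpu_vram_mb: int
    pod_type: str             # e.g. "Restore"
    standby_gpu_mb: int       # snapshot blob held in GPU memory
    standby_pageable_mb: int  # snapshot blob in pageable host memory
    standby_pinned_mb: int    # snapshot blob in pinned host memory
    allocated_gpu_vram_mb: int
    allocated_gpu_slots: dict

# First row of the table above, expressed with the sketch:
pod = PodEntry(
    tenant="public",
    namespace="deepseek-ai",
    pod_name="public/deepseek-ai/DeepSeek-R1-Distill-Llama-8B/262/945",
    state="Standby",
    node_name="node3",
    req_gpu_count=2,
    req_gpu_vram_mb=13800,
    pod_type="Restore",
    standby_gpu_mb=parse_blob_mb("Blob: 23700 MB"),
    standby_pageable_mb=parse_blob_mb("Blob: 1644 MB"),
    standby_pinned_mb=parse_blob_mb("Blob: 8192 MB"),
    allocated_gpu_vram_mb=0,
    allocated_gpu_slots={},
)
```

Under this reading, a Standby pod keeps its snapshot staged across the three memory tiers without occupying allocated GPU vRAM, which would be consistent with the Restore type and the fast cold start advertised above.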