
InferX AI Function Platform (Lambda Function for Inference)

    --   Serve tens of models in one box with ultra-fast (<2 sec) cold starts (contact: support@inferx.net)

Pods

| Tenant | Namespace | Pod Name | State | Node | Req GPU Count | Req GPU vRAM | Type | Standby: GPU | Standby: Pageable | Standby: Pinned | Allocated GPU vRAM (MB) | Allocated GPU Slots |
|---|---|---|---|---|---|---|---|---|---|---|---|---|
| public | baichuan-inc | public/baichuan-inc/Baichuan-7B/175/1211 | Standby | node2 | 2 | 13800 MB | Restore | Blob: 25084 MB | Blob: 1590 MB | Blob: 8192 MB | 0 | {} |
| public | baichuan-inc | public/baichuan-inc/Baichuan2-7B-Chat/177/1174 | Standby | node2 | 2 | 13800 MB | Restore | Blob: 24418 MB | Blob: 1870 MB | Blob: 8192 MB | 0 | {} |
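The Standby columns split each pod's restore snapshot into three memory tiers (GPU blob, pageable host blob, pinned host blob); summing them gives the per-pod standby footprint that a node must hold to support the fast cold start. A minimal sketch, using the figures from the table above (the dictionary layout and function name are illustrative, not part of the InferX API):

```python
# Standby snapshot blob sizes (MB) per pod, copied from the pods table above.
standby_blobs = {
    "public/baichuan-inc/Baichuan-7B/175/1211": {
        "gpu": 25084, "pageable": 1590, "pinned": 8192,
    },
    "public/baichuan-inc/Baichuan2-7B-Chat/177/1174": {
        "gpu": 24418, "pageable": 1870, "pinned": 8192,
    },
}

def standby_footprint_mb(blobs: dict) -> int:
    """Total standby memory for one pod: GPU + pageable + pinned blobs."""
    return sum(blobs.values())

for pod, blobs in standby_blobs.items():
    print(f"{pod}: {standby_footprint_mb(blobs)} MB standby")
```

For the first pod this works out to 25084 + 1590 + 8192 = 34866 MB, well above the 13800 MB of requested GPU vRAM, since the snapshot also stages weights in host memory.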