InferX AI Function Platform (Lambda Function for Inference)

    --   Serve tens of models on one box with an ultra-fast (<2 s) cold start (contact: support@inferx.net)

Pods

| tenant | namespace | pod name | state | node name | req GPU count | req GPU vRAM | type | standby (GPU) | standby (pageable) | standby (pinned) | allocated GPU vRAM (MB) | allocated GPU slots |
|--------|-----------|----------|-------|-----------|---------------|--------------|------|---------------|--------------------|------------------|-------------------------|---------------------|
| public | facebook | public/facebook/opt-iml-max-1.3b/127/1184 | Standby | node2 | 1 | 3800 MB | Restore | Mem: 3634 MB | File: 1254 MB | Mem: 6144 MB | 0 | {} |