
InferX AI Function Platform (Lambda Function for Inference)

    --   Serve tens of models in one box with ultra-fast (<2 sec) cold start (contact: support@inferx.net)



Models

tenant | namespace    | model name        | gpu count | vram (GB) | cpu  | memory (GB) | standby snapshot (gpu/pageable/pinned) | state  | nodes     | revision
public | baichuan-inc | Baichuan-7B       | 2         | 13.8      | 20.0 | 60.0        | Blob/Blob/Blob                         | Normal | ['node3'] | 158
public | baichuan-inc | Baichuan2-7B-Chat | 2         | 13.8      | 20.0 | 60.0        | Blob/Blob/Blob                         | Normal | ['node3'] | 160

Summary

Model Count: 2
GPU Count: 4
VRAM: 55.2 GB
CPU Cores: 40.0
Memory: 120.0 GB
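The summary totals follow directly from the per-model table: each model reserves 2 GPUs, and the VRAM figure (13.8 GB) is per GPU, so 13.8 × 4 GPUs = 55.2 GB. A minimal sketch of that aggregation, with illustrative field names (not the platform's actual API or schema):

```python
# Sketch: recompute the Summary totals from the model table above.
# Records mirror the two listed rows; keys are illustrative, not InferX's schema.
models = [
    {"name": "Baichuan-7B", "gpus": 2, "vram_gb_per_gpu": 13.8, "cpu": 20.0, "mem_gb": 60.0},
    {"name": "Baichuan2-7B-Chat", "gpus": 2, "vram_gb_per_gpu": 13.8, "cpu": 20.0, "mem_gb": 60.0},
]

summary = {
    "model_count": len(models),
    "gpu_count": sum(m["gpus"] for m in models),
    # vram column is per GPU, so multiply by the GPU count per model
    "vram_gb": sum(m["gpus"] * m["vram_gb_per_gpu"] for m in models),
    "cpu_cores": sum(m["cpu"] for m in models),
    "memory_gb": sum(m["mem_gb"] for m in models),
}
print(summary)
```

Running this reproduces the Summary figures: 2 models, 4 GPUs, 55.2 GB VRAM, 40.0 CPU cores, 120.0 GB memory.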