
InferX AI Function Platform (Lambda Function for Inference)

    --   Serve tens of models on one box with ultra-fast (<2 s) cold starts (contact: support@inferx.net)


Models

| tenant | namespace | model name | gpu count | vram (GB) | cpu | memory (GB) | standby (gpu) | standby (pageable) | standby (pinned) | state | snapshot | nodes | revision |
|--------|-----------|------------|-----------|-----------|-----|-------------|---------------|--------------------|------------------|-------|----------|-------|----------|
| public | facebook | opt-iml-max-1.3b | 1 | 3.8 | 12.0 | 15.0 | Mem | File | Mem | Normal | | ['node2'] | 127 |

Summary

- Model Count: 1
- Required GPU Count: 1
- Required VRAM: 3.8 GB
- Required CPU Cores: 12.0
- Required Memory: 15.0 GB
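The summary figures are simple aggregates of the per-model rows above. A minimal sketch of that aggregation, assuming hypothetical field names (this is an illustration, not InferX's actual data model or API):

```python
# Hypothetical per-model records mirroring the dashboard table above.
models = [
    {
        "tenant": "public",
        "namespace": "facebook",
        "name": "opt-iml-max-1.3b",
        "gpu_count": 1,
        "vram_gb": 3.8,
        "cpu_cores": 12.0,
        "memory_gb": 15.0,
    },
]

# Each summary line is a sum (or count) over the model rows.
summary = {
    "model_count": len(models),
    "required_gpu_count": sum(m["gpu_count"] for m in models),
    "required_vram_gb": sum(m["vram_gb"] for m in models),
    "required_cpu_cores": sum(m["cpu_cores"] for m in models),
    "required_memory_gb": sum(m["memory_gb"] for m in models),
}
print(summary)
```

With the single model listed, this reproduces the summary shown: 1 model, 1 GPU, 3.8 GB VRAM, 12.0 CPU cores, 15.0 GB memory.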