Model opt-iml-max-1.3b
| namespace | model name | standby gpu | standby pageable | standby pinned memory | gpu count | vRAM (MB) | CPU (cores) | memory (MB) | state | revision |
|---|---|---|---|---|---|---|---|---|---|---|
| public | opt-iml-max-1.3b | Mem | File | Mem | 1 | 3800 | 12.0 | 15000 | Normal | 113 |
Image

vllm/vllm-openai:v0.6.2
Prompt

What is the capital of USA?
Sample REST Call
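The `sample_query` block in the Func spec below describes an OpenAI-compatible completions request against `v1/completions`. A minimal sketch of that call in Python, assuming the service is reachable at `localhost:8000` (the port declared in the spec's `endpoint` block; the host is a placeholder). The spec stores the body values as strings; typed JSON is used here:

```python
import json

import requests  # third-party: pip install requests

# Placeholder base URL: port 8000 comes from the spec, the host depends
# on how the pod is exposed in your deployment.
BASE_URL = "http://localhost:8000"

# Request body taken from the spec's sample_query.
body = {
    "model": "facebook/opt-iml-max-1.3b",
    "prompt": "What is the capital of USA?",
    "max_tokens": 100,
    "temperature": 0,
    "stream": True,
}

# With stream=True the server answers with Server-Sent Events, so we
# iterate over lines instead of parsing a single JSON document.
with requests.post(f"{BASE_URL}/v1/completions", json=body, stream=True) as resp:
    resp.raise_for_status()
    for line in resp.iter_lines():
        if not line:
            continue
        payload = line.removeprefix(b"data: ")
        if payload == b"[DONE]":
            break
        chunk = json.loads(payload)
        print(chunk["choices"][0]["text"], end="", flush=True)
```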
Pods
| tenant | namespace | pod name | state | required resources | allocated resources |
|---|---|---|---|---|---|
| public | public | facebook/opt-iml-max-1.3b/113/950 | Standby | `{'CPU': 12000, 'Mem': 15000, 'GPU': {'Type': 'Any', 'Count': 1, 'vRam': 3800}}` | `{'nodename': 'node3', 'CPU': 12000, 'Mem': 15000, 'GPUType': 'A4000', 'GPUs': {'vRam': 0, 'map': {}, 'slotSize': 0, 'totalSlotCnt': 0}, 'MaxContextPerGPU': 2}` |
Func
{ "image": "vllm/vllm-openai:v0.6.2", "commands": [ "--model", "facebook/opt-iml-max-1.3b", "--enforce-eager", "--max-model-len", "200" ], "envs": [ [ "LD_LIBRARY_PATH", "/usr/local/lib/python3.12/dist-packages/nvidia/cuda_nvrtc/lib/:$LD_LIBRARY_PATH" ] ], "mounts": [ { "hostpath": "/home/brad/cache", "mountpath": "/root/.cache/huggingface" } ], "endpoint": { "port": 8000, "schema": "Http", "probe": "/health" }, "version": 113, "entrypoint": [], "resources": { "CPU": 12000, "Mem": 15000, "GPU": { "Type": "Any", "Count": 1, "vRam": 3800 } }, "standby": { "gpu": "Mem", "pageable": "File", "pinned": "Mem" }, "probe": { "port": 80, "schema": "Http", "probe": "/health" }, "sample_query": { "apiType": "openai", "path": "v1/completions", "prompt": "What is the capital of USA?", "body": { "max_tokens": "100", "model": "facebook/opt-iml-max-1.3b", "stream": "true", "temperature": "0" } } } |