Model Qwen2.5-1.5B
| namespace | model name | standby gpu | standby pageable | standby pinned memory | gpu count | vRam (MB) | cpu (cores) | memory (MB) | state | revision |
|---|---|---|---|---|---|---|---|---|---|---|
| Qwen | Qwen2.5-1.5B | Blob | Blob | Blob | 1 | 8000 | 12.0 | 18000 | Normal | 128 |
Image
vllm/vllm-openai:v0.6.2
Prompt
登鹳雀楼->王之涣
夜雨寄北->
(The sample prompt pairs a classical Chinese poem title with its poet and asks the model to complete the poet for the second title.)
Sample REST Call
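A minimal sketch of the sample query defined in the Func spec below, written with Python requests. The base URL is a placeholder assumption (the spec only declares container port 8000 with an HTTP /health probe), and the body values are normalized to numeric/boolean types, whereas the spec stores them as strings.

```python
import requests

BASE_URL = "http://localhost:8000"  # assumption: replace with the actual service address

payload = {
    "model": "Qwen/Qwen2.5-1.5B",
    "prompt": "登鹳雀楼->王之涣\n夜雨寄北->",
    "max_tokens": 1000,
    "temperature": 0,
    "stream": True,
}

# The spec's sample_query uses apiType "openai" with path "v1/completions".
with requests.post(f"{BASE_URL}/v1/completions", json=payload, stream=True) as resp:
    resp.raise_for_status()
    for line in resp.iter_lines(decode_unicode=True):
        if line:
            print(line)  # each non-empty line is a streamed "data: {...}" chunk
```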
Pods
| tenant | namespace | pod name | state | required resources | allocated resources |
|---|---|---|---|---|---|
| public | Qwen | public/Qwen/Qwen2.5-1.5B/128/920 | Ready | {'CPU': 12000, 'Mem': 18000, 'GPU': {'Type': 'Any', 'Count': 1, 'vRam': 8000}} | {'nodename': 'node3', 'CPU': 12000, 'Mem': 35500, 'GPUType': 'A4000', 'GPUs': {'vRam': 8192, 'map': {'1': {'contextCnt': 1, 'slotCnt': 32}}, 'slotSize': 268435456, 'totalSlotCnt': 0}, 'MaxContextPerGPU': 2} |
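The slot figures in the allocated resources appear consistent with the GPU's memory being divided into fixed 256 MiB slots. The quick check below assumes vRam is reported in MiB and slotSize in bytes, which is an inference from the numbers rather than documented behavior.

```python
# Rough check of the slot accounting in the allocated-resources column.
VRAM_MIB = 8192              # A4000 vRam from the pod's allocated resources
SLOT_SIZE_BYTES = 268435456  # slotSize = 256 MiB

slots = (VRAM_MIB * 1024 * 1024) // SLOT_SIZE_BYTES
print(slots)  # 32, matching slotCnt in the GPU map
```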
Func
{ "image": "vllm/vllm-openai:v0.6.2", "commands": [ "--model", "Qwen/Qwen2.5-1.5B", "--disable-custom-all-reduce", "--max-model-len", "2000" ], "envs": [ [ "LD_LIBRARY_PATH", "/usr/local/lib/python3.12/dist-packages/nvidia/cuda_nvrtc/lib/:$LD_LIBRARY_PATH" ] ], "mounts": [ { "hostpath": "/home/brad/cache", "mountpath": "/root/.cache/huggingface" } ], "endpoint": { "port": 8000, "schema": "Http", "probe": "/health" }, "version": 128, "entrypoint": [], "resources": { "CPU": 12000, "Mem": 18000, "GPU": { "Type": "Any", "Count": 1, "vRam": 8000 } }, "standby": { "gpu": "Blob", "pageable": "Blob", "pinned": "Blob" }, "probe": { "port": 80, "schema": "Http", "probe": "/health" }, "sample_query": { "apiType": "openai", "path": "v1/completions", "prompt": "\u767b\u9e73\u96c0\u697c->\u738b\u4e4b\u6da3\n\u591c\u96e8\u5bc4\u5317->", "body": { "max_tokens": "1000", "model": "Qwen/Qwen2.5-1.5B", "stream": "true", "temperature": "0" } } } |