KaiWuBOSS won't run
Local LLM deployer v0.3.1 · llama.cpp b8864
by llmbbs.ai · Local AI Tech Community
[1/6] Probing hardware...
GPU: NVIDIA GeForce RTX 5060 (SM120, 8151 MB VRAM, 448 GB/s)
RAM: 31 GB DDR4
OS: windows amd64
⚠️ CUDA 13.2 detected — known bug with low-bit quantization
If you see garbled output, downgrade driver to CUDA 13.1
Warning: RTX 50 series with CUDA 13.2 detected
Kaiwu will use CUDA 12.4 binary for stability.
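A note on the two warnings above: the deployer evidently keys its binary choice off the CUDA level reported by the installed driver. A minimal sketch of how such a probe could work, assuming nvidia-smi is on PATH; the driver-branch-to-CUDA mapping checked in main is an assumption for illustration, not Kaiwu's actual table:

```go
package main

import (
	"fmt"
	"os/exec"
	"strings"
)

// driverVersion reads the installed NVIDIA driver version via nvidia-smi,
// e.g. "580.88". Which CUDA level a driver supports is then looked up
// from a version table.
func driverVersion() (string, error) {
	out, err := exec.Command("nvidia-smi",
		"--query-gpu=driver_version", "--format=csv,noheader").Output()
	if err != nil {
		return "", err
	}
	return strings.TrimSpace(string(out)), nil
}

func main() {
	v, err := driverVersion()
	if err != nil {
		fmt.Println("no NVIDIA driver found:", err)
		return
	}
	// Hypothetical policy mirroring the log: on a driver branch that
	// ships the buggy CUDA 13.2, fall back to the bundled CUDA 12.4 build.
	if strings.HasPrefix(v, "590.") { // assumption: 590.x ~ CUDA 13.2
		fmt.Println("driver", v, "-> using CUDA 12.4 llama-server binary")
	} else {
		fmt.Println("driver", v, "-> using default binary")
	}
}
```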
[2/6] Selecting configuration...
Model: Qwen3.6 35B A3B Claude 4.7 Opus Reasoning Distilled (MoE, 35B total / 3B active)
Quant: IQ3_XS (13.5 GB)
Mode: moe_partial
Accel: Flash Attention + SWA-Full (hybrid arch)
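For readers unfamiliar with moe_partial: a MoE model activates only a few experts per token, so the usual trick is to keep attention and shared weights in VRAM and pin the expert FFN tensors to system RAM. If Kaiwu follows the common llama.cpp recipe, the underlying invocation would look roughly like this; the --override-tensor regex is the widely used community pattern, not confirmed from Kaiwu's source, and flag spellings vary across llama.cpp builds:

```
llama-server-cuda.exe -m Qwen3.6-35B-A3B-Claude-4.7-Opus-Reasoning-Distilled.i1-IQ3_XS.gguf ^
    --port 11434 -c 4096 -ngl 99 --flash-attn ^
    --override-tensor ".ffn_.*_exps.=CPU"
```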
[3/6] Checking files...
Using bundled CUDA binary: llama-server-cuda.exe
Binary: llama-server-cuda.exe [cached]
Model: Qwen3.6-35B-A3B-Claude-4.7-Opus-Reasoning-Distilled.i1-IQ3_XS.gguf [cached]
[4/6] Preflight check...
✓ VRAM sufficient
[5/6] Warmup benchmark...
First run on an RTX 50 series GPU; compiling CUDA kernels (about 60 s, one time only)...
✓ CUDA kernel compilation finished; subsequent launches will start near-instantly
⚠ JIT warmup failed: exit status 0xc0000135
Probe 1: ctx=128K ... OOM
Probe 2: ctx=64K ... OOM
Probe 3: ctx=32K ... OOM
Probe 4: ctx=16K ... OOM
Probe 5: ctx=8K ... OOM
⚠️ Warmup failed: all ctx probes failed (tried down to 4K)
Using default parameters
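Two notes on this step. Exit status 0xc0000135 is Windows' STATUS_DLL_NOT_FOUND, so the warmup child process almost certainly died unable to load a DLL (a CUDA runtime DLL next to llama-server-cuda.exe is the usual suspect) rather than from a genuine OOM, which would explain every probe "failing" identically. (The log lists probes only down to 8K while the summary says 4K; presumably the 4K probe line was lost in the paste.) The probe sequence itself is a plain halving search over context sizes; a minimal sketch, with tryStart as a hypothetical stand-in for actually launching the server:

```go
package main

import "fmt"

// tryStart stands in for launching llama-server with the given context
// size and checking whether it comes up; always failing reproduces the
// log above. (Hypothetical helper, not Kaiwu's API.)
func tryStart(ctx int) bool { return false }

func main() {
	// Halve the context from 128K down to a 4K floor, as in the probes.
	for ctx := 128 * 1024; ctx >= 4*1024; ctx /= 2 {
		fmt.Printf("Probe: ctx=%dK ... ", ctx/1024)
		if tryStart(ctx) {
			fmt.Println("ok")
			return
		}
		fmt.Println("OOM")
	}
	fmt.Println("all ctx probes failed (tried down to 4K)")
}
```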
[6/6] Starting server...
Waiting for llama-server to be ready (port 11434)...
⚠️ Insufficient VRAM; retrying with context reduced to 4K...
Waiting for llama-server to be ready (port 11434)...
Error: failed to start llama-server: 2 consecutive launch failures; the model cannot run even at the minimum context size (4K)
NVIDIA GeForce RTX 5060: 8151 MB VRAM
Model Qwen3.6 35B A3B Claude 4.7 Opus Reasoning Distilled: ~13813 MB
KV cache (4K, q4_0): ~80 MB
Estimated total required: ~14917 MB
Shortfall: 6766 MB
Suggestions:
1. Choose a smaller quantization (Q4_K_M or Q2_K)
2. Choose a smaller model
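The breakdown doesn't itemize its own total: 13813 + 80 = 13893 MB, so the estimator silently adds another 1024 MB, presumably a fixed compute-buffer allowance (an assumption; the term isn't printed). With it the numbers close: 13893 + 1024 = 14917, and 14917 - 8151 = 6766 matches the reported shortfall. It also suggests the earlier "✓ VRAM sufficient" preflight omits at least one of these terms. The estimate, reconstructed:

```go
package main

import "fmt"

func main() {
	const (
		vramMB     = 8151  // RTX 5060, as probed
		weightsMB  = 13813 // IQ3_XS model weights
		kvCacheMB  = 80    // 4K context, q4_0 KV cache
		overheadMB = 1024  // assumption: fixed compute-buffer allowance
	)
	need := weightsMB + kvCacheMB + overheadMB
	fmt.Printf("estimated total: ~%d MB\n", need) // 14917
	fmt.Printf("shortfall: %d MB\n", need-vramMB) // 6766
}
```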
Usage:
kaiwu run <model> [flags]
Flags:
--bench Run benchmark after starting
--ctx-size int         Manually set the context size (0 = auto)
--fast Skip warmup, use cached profile
-h, --help help for run
--host string          Listen address; use 0.0.0.0 to expose on the LAN (default "127.0.0.1")
--llama-server string  Use a custom llama-server binary (full path)
--mode string          Mode: speed/balanced/context (defaults to last used)
--reset                Clear the cache and re-run warmup to probe optimal parameters
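For reference, a manual retry using only the flags documented above would pin the context size (bypassing the failed auto-probe) and, if the bundled binary's DLLs are the problem, point at a known-good llama-server build; the path here is a placeholder:

```
kaiwu run <model> --ctx-size 4096 --llama-server "C:\llama.cpp\llama-server.exe"
```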