V2EX  ›  ideniece  ›  All replies, page 1 of 1
Total replies: 4
Never mind, it's resolved.
@ideniece #56
I'm running a local model, so why is it prompting me to download one?

kaiwu.exe run .\Qwen3.5-35B-A3B-Q4_K_M.gguf
[2/6] Selecting configuration...
Model: Qwen3.5-35B-A3B (moe, 37B total / 1B active)
Quant: Q4_K_M (20.5 GB)
Mode: moe_offload (experts on CPU)
Accel: Flash Attention + SWA-Full (hybrid arch)

[3/6] Checking files...
Using bundled iso3 binary: llama-server-cuda.exe
Binary: llama-server-cuda.exe [cached]
Downloading model: Qwen3.5-35B-A3B-Q4_K_M.gguf
From: https://hf-mirror.com//resolve/main/Qwen3.5-35B-A3B-Q4_K_M.gguf
Error: failed to ensure model file: failed to download model: failed to download: Get "https://hf-mirror.com//resolve/main/Qwen3.5-35B-A3B-Q4_K_M.gguf": EOF
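Judging from the doubled slash in the URL (an empty repo segment before `/resolve/main/`), one plausible cause is that the runner never checked whether the argument was an existing local file and fell back to treating it as a remote model ID. A minimal sketch of the missing check, with hypothetical names (`resolve_model`, `HF_MIRROR` are illustrations, not the tool's actual API):

```python
import os

HF_MIRROR = "https://hf-mirror.com"  # mirror base seen in the log

def resolve_model(arg: str) -> str:
    """Return a local path if the argument is an existing file,
    otherwise build a download URL from an '<org>/<file>' model ID.

    Hypothetical helper showing the check the runner appears to skip:
    a relative path like '.\\Qwen3.5-...gguf' should be tested on
    disk before any remote lookup is attempted."""
    if os.path.isfile(arg):
        return os.path.abspath(arg)  # run the local file directly
    repo, _, filename = arg.rpartition("/")
    if not repo:
        # No repo segment: formatting a URL anyway is what produces
        # the 'hf-mirror.com//resolve/...' double slash in the log.
        raise ValueError(f"not a local file and no repo in model ID: {arg!r}")
    return f"{HF_MIRROR}/{repo}/resolve/main/{filename}"
```

With a check like this, a bare filename that does not exist locally fails fast with a clear message instead of issuing a malformed download request.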
Supporting the OP.