Qwen Code no longer working with local models?
A few weeks ago I set up Qwen Code and the Qwen Telegram bot with Qwen 3.6 26b running locally on llama.cpp. Everything worked great. Fast-forward a few weeks, and now both throw API errors and neither is able to communicate with the local LLM. Is this a known issue, or a configuration issue? I'm going crazy trying and failing to find the cause in my config.
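In case it helps narrow this down, here's the sanity check I've been running: hit the llama.cpp server's OpenAI-compatible endpoint directly with curl to rule out the server side before blaming the client config. This is just a sketch — the port (8080 is the llama-server default), the model name, and the exact env var values are placeholders for my setup, not something from anyone's docs:

```shell
# 1. Confirm llama-server itself still answers OpenAI-style requests.
#    (8080 is the llama-server default port; adjust to your setup.)
curl -s http://localhost:8080/v1/chat/completions \
  -H "Content-Type: application/json" \
  -d '{
        "model": "qwen3",
        "messages": [{"role": "user", "content": "ping"}],
        "max_tokens": 8
      }'

# 2. If the server responds but Qwen Code still errors, re-check the
#    environment variables Qwen Code reads for its endpoint:
export OPENAI_BASE_URL="http://localhost:8080/v1"
export OPENAI_API_KEY="none"   # llama-server accepts any key unless started with --api-key
export OPENAI_MODEL="qwen3"    # placeholder; must match the model the server is serving
```

If step 1 fails too, the problem is on the llama.cpp side rather than Qwen Code.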
Edit: Claude Code works FWIW, but Qwen Code felt faster and more optimized when using Qwen 3.6.