TL;DR: On Qwen 3.6, using qwen3.5-enhanced.jinja with preserve_thinking=true tends to stack broken think markup in the prompt: the model sometimes emits <tool_call> without a closing </think>, the 3.5 template does not repair that, and the 3.6 assistant branch can double-wrap turns. Symptoms: ignored tool calls, reasoning leaking into tool turns, and preserve_thinking=false as the only workaround (stripping earlier think blocks from history). I ship qwen3.6-enhanced.jinja with a small self-healing step before the reasoning split that inserts the missing </think> ahead of <tool_call>, which makes preserve_thinking usable again on 3.6. Proof repo: qwen36_27B_36jinja_project; the templates live beside qwen3.5-enhanced in the same GitHub repo. The launch script in the post is what I run on vLLM v0.19.0 (qwen3_coder parser, preserve_thinking: true, qwen3.6-enhanced.jinja).
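The self-healing step boils down to one string repair on each historical assistant turn. A minimal Python sketch of that logic (the real fix is Jinja inside qwen3.6-enhanced.jinja, applied before the reasoning split; the function name here is mine):

```python
def heal_think_markup(content: str) -> str:
    """If a turn opens <think> but reaches <tool_call> without closing it,
    insert the missing </think> so the re-rendered history stays well-formed."""
    tool_pos = content.find("<tool_call>")
    if tool_pos == -1:
        return content  # no tool call in this turn, nothing to heal
    head = content[:tool_pos]
    # Heal only when a <think> was opened but never closed before the tool call.
    if "<think>" in head and "</think>" not in head:
        return head + "</think>\n" + content[tool_pos:]
    return content
```

Running this before splitting reasoning from content is what lets preserve_thinking=true re-inject earlier turns without the 3.6 assistant branch double-wrapping them.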
Full write-up (RCA, Jinja snippet, env + vllm serve flags, version note):
https://allanchan339.github.io/bug-fixes/2026/05/02/Qwen36-27B-updated-jinja.html
Previous write-up: https://www.reddit.com/r/LocalLLM/comments/1sv6cqk/follow_up_tested_tool_calling_fixes_for_qwen/