u/I-cant_even

I run GLM5.1 as my primary local coding LLM, but when my big server is busy I spin up Qwen3.6-27B for smaller projects.

I wish the Qwen team would apply whatever magic they did here to a larger model; this one is far more capable for its size than any of the competitors.

u/I-cant_even — 8 days ago