
▲ 6 r/LocalLLaMA
I gave some math problems to Qwen 3.5 27B and Qwen 3.6 27B and they got all of them right, so pretty smart models I would say, but they're very slow and power-hungry: they took around 5 minutes per problem with my GPU drawing 120 W.
The MoE models answer quite fast, but their answers feel generic. I wouldn't use them for problem solving, but for studying or learning something new they can work like an offline Wikipedia when I'm without Internet.
Of those, the one I've used the most is Qwen3-Coder-30B. I really like that one, but it's an old model.
At the beginning of the year I also used GPT-OSS 20B a lot.
u/Badhunter31415 — 15 days ago