I’ve set up a full Ollama stack with a WebUI and downloaded several different models on my machine (64GB DDR5 RAM and an RTX 3090 with 24GB VRAM):
- qwen3-coder:30b (18 GB)
- llama3.2-vision:11b (7.8 GB)
- qwen3.5:27b (17 GB)
- qwen3.6:35b (23 GB)
My main use case is scripting.
I don’t know if anyone here works with 3D, but I tried to create a simple button in Blender that performs a library override on a selected linked object — basically a function Blender already has, but I wanted it as a one-click button instead of going through the menu every time. From there, I planned to expand and customize it further.
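For scale, here’s roughly what I had in mind. This is just a minimal sketch for context, assuming Blender 3.x, where bpy.ops.object.make_override_library() is (as far as I know) the operator behind Object > Library Override > Make; putting the button in the Object menu is an arbitrary choice on my part:

```python
import bpy

class OBJECT_OT_quick_override(bpy.types.Operator):
    """Create a library override for the active linked object"""
    bl_idname = "object.quick_override"
    bl_label = "Quick Library Override"
    bl_options = {'REGISTER', 'UNDO'}

    @classmethod
    def poll(cls, context):
        # Only enable the button when the active object comes from a linked library
        obj = context.active_object
        return obj is not None and obj.library is not None

    def execute(self, context):
        # Same operator Blender runs from Object > Library Override > Make
        bpy.ops.object.make_override_library()
        return {'FINISHED'}

def draw_button(self, context):
    self.layout.operator(OBJECT_OT_quick_override.bl_idname)

def register():
    bpy.utils.register_class(OBJECT_OT_quick_override)
    bpy.types.VIEW3D_MT_object.append(draw_button)

def unregister():
    bpy.types.VIEW3D_MT_object.remove(draw_button)
    bpy.utils.unregister_class(OBJECT_OT_quick_override)

if __name__ == "__main__":
    register()
```

Run that from Blender’s Text Editor and a “Quick Library Override” entry should appear at the bottom of the Object menu. That’s the entire scope of the task.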
The problem is that I never managed to get a working version out of my local setup. With the free version of ChatGPT, I was able to build the same tool in about 70 lines of code. With Ollama, despite trying different models and many attempts over two days, I couldn’t get a single working result. Even when I asked the models to explain their code in detail, the scripts still didn’t run as expected.
Why is that?
Do local AI setups actually work for this kind of task, or do we basically have to pay for tools like Claude Code or Codex?
I understand that ChatGPT, even on the free tier, likely runs on much larger models served from hardware with massive amounts of VRAM (100GB+). But if that’s the case, are local models essentially useless if they can’t generate even a simple ~100-line tool?
Thanks