Gemma 4 E4B (4-bit) executes Bash code and tool calls locally on 6GB RAM.
Hey guys, just wanted to share another cool use case of Gemma 4 E4B (4-bit GGUF) to showcase how powerful it is.
It completed a full repo audit by executing Bash code and tool calls locally, and it runs on just 6GB RAM. It inspected files and git history, cross-checked metrics, and surfaced evidence-backed candidates.
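The audit loop described above (the model emits a Bash tool call, the harness executes it locally and feeds the output back) can be sketched roughly like this. This is a minimal illustration, not Unsloth Studio's actual implementation: `fake_model` stands in for the local Gemma endpoint, and the JSON tool-call format is an assumption, not the model's real schema.

```python
import json
import subprocess

def fake_model(messages):
    # Stand-in for the local Gemma 4 E4B endpoint (hypothetical:
    # assumes the model returns a JSON tool call in this shape).
    if not any(m["role"] == "tool" for m in messages):
        return json.dumps({"tool": "bash", "command": "echo repo-audit"})
    return "Audit complete."

def run_tool_call(raw):
    """Parse a JSON tool call and execute the Bash command locally."""
    call = json.loads(raw)
    result = subprocess.run(
        ["bash", "-c", call["command"]],
        capture_output=True, text=True, timeout=30,
    )
    return result.stdout.strip() or result.stderr.strip()

def agent_loop(prompt, max_turns=5):
    messages = [{"role": "user", "content": prompt}]
    for _ in range(max_turns):
        reply = fake_model(messages)
        try:
            json.loads(reply)  # valid JSON => treat as a tool call
        except json.JSONDecodeError:
            return reply      # plain text => final answer
        # Execute the tool call and feed the result back to the model.
        messages.append({"role": "tool", "content": run_tool_call(reply)})
    return "Max turns reached."

print(agent_loop("Audit this repo."))
```

In a real setup the model call would go to a llama.cpp server or similar local endpoint, and the harness would sandbox the Bash execution rather than running commands directly.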
Try it via Unsloth Studio for self-healing tool calling: https://github.com/unslothai/unsloth
Gemma 4 guide: https://unsloth.ai/docs/models/gemma-4
Let us know if you have any issues with the model, btw. I know some of you had tokenizer issues, which were fixed in llama.cpp, so we're reuploading. Some also experienced gibberish output, but we're unsure where that's stemming from.
