Hi everyone,
I’d really appreciate advice from people running serious self-hosted AI setups.
I’m a university student working in software engineering, ML, and increasingly agentic workflows. My main Windows laptop (i9, 32GB, RTX 4060) is going in for repair, so I need a secondary machine anyway. While I’m at it, I’m considering turning this into a long-term AI experimentation workstation.
Current setup:
- Windows laptop (i9 / 32GB / RTX 4060)
- External RTX 4090 (eGPU) for inference, rendering, and experimentation
Goals:
- Persistent agent sessions (Claude, browser agents, coding agents)
- Local LLMs (Ollama, Open WebUI, RAG, MCP, automation)
- Testing AI apps, workflows, fine-tuning, prototypes
- ML coursework + software development
I’m stuck between going small and going big:
Option 1 (~€1k):
- Mac mini
- GMKtec / Ryzen mini PC
Pros: cheap, usable right away, leaves budget for APIs/cloud
Option 2 (€4k+):
- Used/new Mac Studio (M1/M2 Ultra, 64–128GB)
- High-end Windows/Linux workstation
- Maybe even DGX Spark
Pros: long-term local experimentation, large memory, always-on services
My main questions:
- Can a machine like Mac Studio meaningfully reduce API usage for real agent/LLM experimentation, or do you still rely heavily on OpenAI/Anthropic APIs?
- In 2026, for local AI work: Mac vs Windows/Linux?
  - Mac: unified memory, efficiency, polished apps
  - Windows/Linux: CUDA, open-source support, works with my RTX 4090
- If you had ~€4k today, would you:
A) Buy a serious workstation
B) Buy a smaller machine + spend on APIs/cloud GPUs
C) Build around the existing 4090 setup
- Anyone running an M1/M2 Ultra Mac Studio for local LLMs: is it still worth it in your experience?
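For context on the memory question, this is the back-of-envelope math I’ve been using to compare the 4090’s 24GB VRAM against 64–128GB of unified memory. The bytes-per-parameter and overhead figures are my own rough assumptions (actual usage depends on the quant format, context length, and runtime), not benchmarks:

```python
# Rough memory estimate for running a quantized LLM locally.
# Assumption (mine): bits/8 bytes per parameter, plus ~20% overhead
# for KV cache and runtime buffers. Real numbers vary by runtime
# (llama.cpp, MLX, etc.) and context length.

def est_gb(params_b: float, bits: int = 4, overhead: float = 1.2) -> float:
    """Estimated resident memory (GB) for a params_b-billion-parameter model."""
    return params_b * (bits / 8) * overhead

for label, size in [("8B", 8), ("70B", 70), ("120B", 120)]:
    print(f"{label} @ 4-bit ≈ {est_gb(size):.0f} GB")
```

By this estimate a 4-bit 70B model needs roughly 42GB, which fits in a 64GB Mac Studio but not in the 4090’s 24GB, while 8B-class models fit comfortably on the GPU. That gap is basically the whole Option 1 vs Option 2 trade-off for me.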
Would really appreciate honest “I overspent,” “should’ve gone bigger,” or “cloud was smarter” experiences.
Thanks!