u/Affectionate_Buy3197

Long-time OpenAI Pro subscriber here. Last week I got permanently banned and my appeal was denied. Apparently I was guilty of "cyber abuse." What did I do? I built a web scraper for a client whose app scans product labels. That's it, no nuance, just banned. I'm done.

Spent the last few days testing Chinese models and honestly? I'm sold. Extremely competent, fast improving, and I don't have to worry about a TOS team pulling the rug out from under a paying client project. Going full local.

I want to run:

Qwen3 30B A3B (MoE)
GLM-4
MiniMax
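For a ballpark on whether two 24–32 GB cards cover these, a rough quant-size estimate helps. This is just a sketch: 4.5 and 8.5 bits per weight are approximate averages for Q4_K_M and Q8_0 GGUFs (real files mix quant types per layer), the GLM-4 entry assumes the dense 32B variant, and note that with an MoE like Qwen3 30B A3B all experts sit in VRAM, so the total parameter count is what matters, not the 3B active:

```python
# Back-of-envelope GGUF size estimate: params (billions) * bits-per-weight / 8
# gives file size in GB, since 1B params at 8 bpw is ~1 GB.
def gguf_size_gb(params_b: float, bits_per_weight: float) -> float:
    """Approximate quantized model size in GB."""
    return params_b * bits_per_weight / 8

# Assumed parameter counts -- total params, since MoE experts all load into VRAM.
models = {
    "Qwen3 30B A3B": 30.5,
    "GLM-4 (dense 32B, assumed)": 32.0,
}

for name, params in models.items():
    q4 = gguf_size_gb(params, 4.5)  # ~Q4_K_M average bpw
    q8 = gguf_size_gb(params, 8.5)  # ~Q8_0 average bpw
    print(f"{name}: ~{q4:.0f} GB at Q4_K_M, ~{q8:.0f} GB at Q8_0")
```

On these numbers a Q4 quant of either model fits on one 24 GB card with room for KV cache, and Q8 wants two cards; the larger MiniMax models are a different story and would need much more VRAM.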

The three cards I'm considering:

AMD Radeon AI PRO R9700

Intel Arc Pro B70. I genuinely don't know how well supported it is in llama.cpp

Used RTX 3090. I have 3 local listings near me right now and I can get one for slightly less than a new R9700

I'm planning to start with two cards from day one and eventually scale further. Tracking down multiple 3090s could prove difficult, I think, and I have no idea how they play together; I've never owned or used an NVIDIA card in my life.

Which of these three would you actually choose?

Is multi-3090 actually viable?
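For what it's worth, llama.cpp can split a model across multiple GPUs out of the box, no NVLink required. A sketch of what a dual-3090 launch might look like (the model path is a placeholder; the flags are standard llama.cpp options):

```shell
# Two 3090s: llama.cpp splits layers across all visible GPUs.
# -ngl 99 offloads every layer to GPU; --split-mode layer puts whole
# layers on each card; --tensor-split 1,1 divides them roughly 50/50.
CUDA_VISIBLE_DEVICES=0,1 ./llama-server \
  -m ./models/qwen3-30b-a3b-q4_k_m.gguf \
  -ngl 99 --split-mode layer --tensor-split 1,1
```

With layer splitting, only activations cross the PCIe bus between cards, so bandwidth between the GPUs matters far less than total VRAM.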

Appreciate any input. Looking forward to being free of the API subscription treadmill.
