▲ 2 r/LocalLLM
I'm having trouble choosing an LLM to run. Can anyone help?
I don't have a server to run the LLM, but I'm planning on buying a 7500 paired with either an A770 (16GB) or a B580 (12GB), 32GB of RAM, and a 1TB SSD.
Which model should I use?
I'm thinking about gpt-oss-20b w/ Ollama or Gemma 4 26B-A4B.
I'm going to use it for light coding and document work.
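For context, this is roughly how I'd call the model for the coding use case once it's pulled — just a minimal sketch assuming the official ollama Python client and the gpt-oss:20b tag, with a placeholder prompt:

```python
# pip install ollama -- assumes the Ollama server is running locally
# and the model was pulled with: ollama pull gpt-oss:20b
import ollama

# Placeholder prompt standing in for a "light coding" request.
response = ollama.chat(
    model="gpt-oss:20b",
    messages=[
        {
            "role": "user",
            "content": "Write a Python function that deduplicates a list while preserving order.",
        }
    ],
)

# Print the model's reply text.
print(response["message"]["content"])
```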
u/7800X_3D — 4 days ago