r/IntelArc
Intel B50 GPU for LLM
I have been using IPEX-LLM to run models and it works pretty well (rough sketch of my setup below). Sadly the repository is no longer maintained by Intel, so I have been looking for alternatives and doing some testing:
Ollama with Vulkan - pretty slow
llama.cpp for Intel - not reliable
vLLM - could not get it working (snippet of what I tried below)
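For reference, this is roughly the IPEX-LLM pattern that has been working for me; the model name and prompt are just placeholders, not my exact setup:

```python
# Rough sketch of my IPEX-LLM setup on the B50 (xpu device).
# Model name and prompt are placeholders.
import torch
from transformers import AutoTokenizer
from ipex_llm.transformers import AutoModelForCausalLM

model_id = "meta-llama/Llama-3.1-8B-Instruct"  # placeholder

# load_in_4bit applies the INT4 weight quantization that keeps the model in VRAM
model = AutoModelForCausalLM.from_pretrained(
    model_id,
    load_in_4bit=True,
    trust_remote_code=True,
).to("xpu")

tokenizer = AutoTokenizer.from_pretrained(model_id, trust_remote_code=True)
input_ids = tokenizer.encode(
    "Why are Arc GPUs good for inference?", return_tensors="pt"
).to("xpu")

with torch.inference_mode():
    output = model.generate(input_ids, max_new_tokens=64)

print(tokenizer.decode(output[0], skip_special_tokens=True))
```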
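And this is the kind of vLLM call I could not get to run. As far as I understand, vLLM has to be built with its Intel XPU backend for this to work on Arc; the snippet itself is just the standard vLLM Python API with a placeholder model name:

```python
# Standard vLLM offline-inference call; needs a vLLM build with the
# Intel XPU backend to run on an Arc GPU (this is the part that failed for me).
from vllm import LLM, SamplingParams

llm = LLM(model="meta-llama/Llama-3.1-8B-Instruct", dtype="float16")  # placeholder model
params = SamplingParams(temperature=0.7, max_tokens=64)

outputs = llm.generate(["Why are Arc GPUs good for inference?"], params)
print(outputs[0].outputs[0].text)
```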
Is anyone here using an Intel B50 GPU for LLMs? What are you using?
u/matheus2308 — 24 hours ago