
Built an iOS app that scans your local network for LLM endpoints. It's called LLM Scanner.
Been running a few different model servers across my homelab and kept needing to look up which machine had what running on it. Built this to solve that problem. Open the app, tap Scan, and it probes your subnet and shows you everything it finds.
It detects Ollama and OpenAI-compatible servers (LM Studio, LocalAI, vLLM, etc.) and shows which models are actually loaded, response times, and whether each endpoint is confirmed or just likely. You can copy endpoint URLs directly to paste into whatever client you are using, share results, or add endpoints manually if the scan missed something you know is there.
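For the curious, the detection logic can be sketched roughly like this: Ollama answers `GET /api/tags` with a `models` list, while OpenAI-compatible servers answer `GET /v1/models` with `{"object": "list", "data": [...]}`. This is a minimal Python sketch of that idea, not the app's actual code; the probe order and helper names are my own.

```python
import json
from urllib.request import urlopen
from urllib.error import URLError

def default_fetch(url, timeout=2.0):
    """GET a URL, returning (status, body) or None on any failure."""
    try:
        with urlopen(url, timeout=timeout) as resp:
            return resp.status, resp.read().decode("utf-8", "replace")
    except (URLError, OSError, ValueError):
        return None

def identify_endpoint(base_url, fetch=default_fetch):
    """Classify a service as 'ollama', 'openai-compatible', or unknown.

    Returns (kind, model_names). The paths match the public Ollama and
    OpenAI APIs; everything else here is illustrative.
    """
    base = base_url.rstrip("/")

    # Ollama: GET /api/tags -> {"models": [{"name": ...}, ...]}
    result = fetch(base + "/api/tags")
    if result and result[0] == 200:
        try:
            models = json.loads(result[1]).get("models")
            if isinstance(models, list):
                return "ollama", [m.get("name") for m in models]
        except json.JSONDecodeError:
            pass

    # OpenAI-compatible: GET /v1/models -> {"object": "list", "data": [...]}
    result = fetch(base + "/v1/models")
    if result and result[0] == 200:
        try:
            data = json.loads(result[1]).get("data")
            if isinstance(data, list):
                return "openai-compatible", [m.get("id") for m in data]
        except json.JSONDecodeError:
            pass

    return None, []
```

A server that answers one of these probes with well-formed JSON would count as "confirmed"; an open port that does not match either shape would only be "likely".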
It does real TCP probing across your subnet, so it finds servers even on non-standard ports rather than just checking a fixed list of known addresses.
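The probing step is conceptually just a parallel TCP connect scan. Here is a small Python sketch of the idea; the port list and concurrency settings are my guesses, not what the app actually uses.

```python
import ipaddress
import socket
from concurrent.futures import ThreadPoolExecutor

# Ports where local model servers commonly live (illustrative list):
# 11434 = Ollama, 1234 = LM Studio, plus a few generic HTTP ports.
CANDIDATE_PORTS = [11434, 1234, 8080, 8000, 5000]

def is_open(host, port, timeout=0.3):
    """True if a TCP connect to host:port succeeds within the timeout."""
    try:
        with socket.create_connection((host, port), timeout=timeout):
            return True
    except OSError:
        return False

def scan_subnet(cidr, ports=CANDIDATE_PORTS, workers=64):
    """Return (host, port) pairs on the subnet that accepted a connection."""
    hosts = [str(h) for h in ipaddress.ip_network(cidr, strict=False).hosts()]
    targets = [(h, p) for h in hosts for p in ports]
    with ThreadPoolExecutor(max_workers=workers) as pool:
        flags = pool.map(lambda t: is_open(*t), targets)
    return [t for t, open_ in zip(targets, flags) if open_]
```

Each open port found this way would then get the HTTP identification probes to confirm what is actually running there.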
$4.99 on the App Store. Would love feedback, especially if there are endpoint types it is missing.