u/Alternative_thunder — 12 days ago


Set up a local RAG application with Weaviate/Verba and Ollama, running everything locally.
https://github.com/weaviate/verba
It was pretty straightforward.

Use cases:

  1. Search inside my CV -> could be used to shortlist relevant candidates for a specific role
  2. Insurance policy documents -> answer questions about my coverage

Local setup:
MacBook Pro with M4 Max chip / 64 GB RAM
Docker Desktop
Ollama

Embedding model: qwen3-embedding:8b
Answer generation model: deepseek-r1:8b
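For anyone curious what Verba is doing under the hood with those two models, here's a minimal sketch of the RAG flow: embed the document chunks, retrieve the most similar ones for a query, then stuff them into the prompt for the generation model. The `embed` function below is a toy bag-of-words stand-in for real calls to Ollama's embeddings endpoint with qwen3-embedding:8b (names and chunks are illustrative, not from my actual setup), so the sketch runs without a server.

```python
import math
from collections import Counter

def embed(text: str) -> Counter:
    # Stand-in embedding: bag-of-words token counts.
    # A real setup would call Ollama's embeddings API and get a dense vector.
    return Counter(text.lower().split())

def cosine(a: Counter, b: Counter) -> float:
    # Cosine similarity between two sparse count vectors.
    dot = sum(a[t] * b[t] for t in a)
    na = math.sqrt(sum(v * v for v in a.values()))
    nb = math.sqrt(sum(v * v for v in b.values()))
    return dot / (na * nb) if na and nb else 0.0

def retrieve(query: str, chunks: list[str], k: int = 2) -> list[str]:
    # Rank chunks by similarity to the query and keep the top k.
    qv = embed(query)
    return sorted(chunks, key=lambda c: cosine(qv, embed(c)), reverse=True)[:k]

# Illustrative policy-document chunks, like the insurance use case above.
chunks = [
    "Policy covers water damage up to 10,000 EUR per incident.",
    "Fire damage is covered with no upper limit.",
    "The deductible for water damage claims is 500 EUR.",
]

top = retrieve("what is my water damage coverage", chunks)
# The retrieved chunks would then go into the prompt for deepseek-r1:8b.
print(top[0])
```

In the real setup, Weaviate handles the vector storage and similarity search, and Verba wires the retrieval results into the prompt automatically.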
