u/AdFinancial1822

Hey folks 👋

I’ve been working on an AI agent platform called Noevex, focused on real production use—not just demos.

In practice, AI systems struggle with:

  • multi-step orchestration
  • connecting multiple data sources
  • controlling agent actions
  • debugging & trust

🚀 What is Noevex?

A full-stack platform to build, run, and control AI agents in production.

Includes:

  • Genesis → LLM foundation (hybrid models)
  • Helion → orchestration (planning, memory, execution)
  • Prism → multi-source retrieval
  • Iris → governance (access + policy control)
  • Argus → observability (tracing/debugging)
  • Visor → UI

🧠 Prism (beyond basic RAG)

Instead of:

query → docs → answer

We do:

query → plan → retrieve (SQL + logs + metrics + vector) → correlate → rerank → suggest action
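The plan → retrieve → correlate → rerank flow above can be sketched in a few lines. This is a toy illustration, not the actual Prism API — every name here (`Evidence`, `plan`, `retrieve`, etc.) is made up for the example, and the "retrievers" are stubs:

```python
from dataclasses import dataclass

@dataclass
class Evidence:
    source: str          # e.g. "sql", "logs", "metrics", "vector"
    text: str
    score: float = 0.0

def plan(query: str) -> list[str]:
    # A real planner would use an LLM to pick sources; here we fan out to all.
    return ["sql", "logs", "metrics", "vector"]

def retrieve(source: str, query: str) -> list[Evidence]:
    # Stub standing in for a real connector (DB query, log search, etc.).
    return [Evidence(source, f"{source} result for {query!r}")]

def correlate(items: list[Evidence]) -> list[Evidence]:
    # Toy correlation: a real system would link evidence across sources
    # (e.g. same timestamp window, same service name).
    for e in items:
        e.score = 1.0
    return items

def rerank(items: list[Evidence], top_k: int = 3) -> list[Evidence]:
    return sorted(items, key=lambda e: e.score, reverse=True)[:top_k]

def answer(query: str) -> list[Evidence]:
    evidence = []
    for source in plan(query):
        evidence.extend(retrieve(source, query))
    return rerank(correlate(evidence))

hits = answer("Users can't access websites")
print([e.source for e in hits])
```

The point of the shape: retrieval fans out per the plan, correlation happens *across* sources before reranking, so the final ranking can reward evidence that agrees.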

Example:

“Users can’t access websites”

  • check metrics
  • analyze logs
  • find config change
  • match past incidents
  • retrieve runbook
  • suggest fix

🔐 Iris (critical layer)

Agents don’t just answer—they act:

  • restart services
  • push configs
  • query DBs

Most systems log after execution.

👉 Real need: control before execution

Iris provides:

  • agent → tool → env permission control
  • approval flows (HITL)
  • audit + replay
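The "control before execution" idea can be shown as a default-deny policy check that gates every tool call. This is a minimal sketch under my own assumptions — the policy table, decision names, and function signatures are invented for illustration, not the real Iris API:

```python
# (agent, tool, env) -> "allow" | "approve" | "deny"; anything unlisted is denied.
POLICY = {
    ("ops-agent", "restart_service", "staging"): "allow",
    ("ops-agent", "restart_service", "prod"): "approve",  # needs a human
}

audit_log: list[dict] = []

def check(agent: str, tool: str, env: str) -> str:
    decision = POLICY.get((agent, tool, env), "deny")  # default deny
    # Audit *before* execution, so even blocked attempts are recorded.
    audit_log.append({"agent": agent, "tool": tool, "env": env,
                      "decision": decision})
    return decision

def execute(agent, tool, env, action, approved=False):
    decision = check(agent, tool, env)
    if decision == "deny":
        raise PermissionError(f"{agent} may not run {tool} in {env}")
    if decision == "approve" and not approved:
        return "pending human approval"  # HITL gate: hold, don't run
    return action()

result = execute("ops-agent", "restart_service", "prod",
                 lambda: "restarted")
```

Note the ordering: the check and audit entry happen before the action runs, which is exactly the difference from logging after execution.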

⚙️ Flow

Prism → insight
Helion → orchestration
Iris → validation
Human → approval
Helion → execution
Argus → tracing
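Wiring the whole flow together might look like the sketch below. Again hypothetical: the stage functions are placeholders labeled with the component each would belong to, not Noevex code:

```python
traces: list[str] = []

def run_incident(query: str) -> str:
    insight = f"insight({query})"        # Prism: retrieval + correlation
    step = f"plan({insight})"            # Helion: orchestration
    if not policy_ok(step):              # Iris: validation
        return "blocked by policy"
    if not human_approves(step):         # Human: approval gate
        return "awaiting approval"
    result = f"executed({step})"         # Helion: execution
    trace(result)                        # Argus: tracing
    return result

# Placeholder implementations so the flow runs end to end.
def policy_ok(step: str) -> bool: return True
def human_approves(step: str) -> bool: return True
def trace(event: str) -> None: traces.append(event)

print(run_incident("Users can't access websites"))
```

The key property is that validation and approval sit between planning and execution, so a "no" at either gate short-circuits before anything runs.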

🤔 Why this?

  • RAG = document retrieval
  • Real systems = multi-source + actions + risk

Missing pieces:

  • cross-system retrieval
  • orchestration
  • governance

❓ Curious:

  • Are you going beyond RAG?
  • How are you doing multi-source retrieval?
  • Do you control agent execution or just observe it?

Would love feedback 🙌
