u/Aggressive-Low3345

▲ 9 r/Agent_AI+1 crossposts

https://i.redd.it/o060omd85rzg1.gif

One aspect I found interesting in Spring Agent Flow is the attempt to model multi-agent orchestration using familiar Spring patterns instead of building everything around prompt chaining abstractions.

The project exposes practical concerns that usually get ignored in demos:

- agent coordination

- workflow state management

- task routing

- shared memory/context

- human approval flows

- observability in long-running workflows
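To make "task routing" concrete: none of this is Spring Agent Flow's actual API (I haven't dug into the repo's code), but the pattern the list points at can be sketched as a plain-Java router that hands a task to a specialist agent by keyword and falls back to a generalist, with the route names (`billing-agent` etc.) being made-up placeholders:

```java
import java.util.List;
import java.util.Map;

// Toy routing sketch (NOT the project's real API): pick a specialist
// agent for a task by keyword match, falling back to a generalist.
public class TaskRouter {
    private static final Map<String, List<String>> ROUTES = Map.of(
        "billing-agent", List.of("refund", "invoice", "charge"),
        "tech-agent",    List.of("error", "crash", "bug")
    );

    public String route(String task) {
        String lower = task.toLowerCase();
        for (var entry : ROUTES.entrySet()) {
            for (String keyword : entry.getValue()) {
                if (lower.contains(keyword)) {
                    return entry.getKey(); // first specialist whose keyword matches
                }
            }
        }
        return "general-agent"; // no specialist matched
    }
}
```

In a real system the routing decision would itself come from an LLM or a classifier rather than keywords, but the shape (task in, agent id out) stays the same.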

Live demo:

https://huggingface.co/spaces/datallmhub/multi-agent-customer-ops

Repo:

https://github.com/datallmhub/spring-agent-flow

I’m curious how other teams handle orchestration complexity once they move beyond single-agent systems.

u/Aggressive-Low3345 — 6 days ago
▲ 2 r/GithubCopilot+1 crossposts

I came across this interesting idea and thought it was worth sharing.

We all use LLMs for things like code review, security analysis, or general reasoning.

Most of the time, the answers look correct — well written, confident, structured.

But on more complex tasks, they can contain subtle issues:

- missing evidence

- vague reasoning

- even internal contradictions

The tricky part is that nothing in the loop really checks that.

This tool takes a different approach: instead of asking the model to “do better”, it runs the output through a second model that acts as a critic.

Simple example:

Claude: "This system is secure"

Critic: LOW (3/10)

- missing evidence

- contradiction detected

→ immediate signal that something is off

It doesn’t try to replace the original model — it just audits the output and highlights weak spots.

What I found interesting is that it returns a structured verdict:

- confidence score

- validated vs challenged findings

- explicit issues (missing evidence, vague claims, contradictions)

So instead of re-running everything, you can quickly see what part of the reasoning doesn’t hold up.
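To show what a structured verdict like that could look like as data: this is a hypothetical sketch, not llm-critic-mcp's real output schema. In the actual tool a second LLM produces the verdict; here a naive rule-based stand-in flags unsupported or absolute claims, just to make the shape (score plus explicit issues) concrete:

```java
import java.util.ArrayList;
import java.util.List;

// Hypothetical verdict shape (NOT the tool's real schema). The rule-based
// checks below stand in for the second "critic" model.
public class Critic {
    public record Verdict(int confidence, List<String> issues) {
        public boolean challenged() { return confidence < 5; }
    }

    public Verdict review(String claim) {
        List<String> issues = new ArrayList<>();
        int confidence = 8;
        String lower = claim.toLowerCase();
        // A bare assertion with no supporting detail, e.g. "this is secure".
        if (!lower.contains("because") && !lower.contains("evidence")) {
            issues.add("missing evidence");
            confidence -= 4;
        }
        // Absolute wording is a common sign of a vague claim.
        if (lower.contains("always") || lower.contains("never")) {
            issues.add("vague/absolute claim");
            confidence -= 2;
        }
        return new Verdict(confidence, issues);
    }
}
```

The point of the structured form is exactly what the post says: a caller can branch on `challenged()` or inspect `issues` instead of re-reading the whole answer.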

It’s built as an MCP tool, so it plugs into Claude Desktop / Claude Code workflows.

Repo:

https://github.com/datallmhub/llm-critic-mcp

Curious if anyone here is using a similar pattern: one LLM critiquing another?

u/Aggressive-Low3345 — 13 days ago
▲ 9 r/Agent_AI+2 crossposts

Hi,

I came across this project recently:
https://github.com/datallmhub/spring-ai-agents

It looks like a stateful multi-agent orchestration framework built on top of Spring AI, which I don’t see very often on the Java side.

From what I understand, it provides:

  • Graph-based execution (subgraphs, parallel fan-out)
  • Stateful agents with checkpointing (resume after restart)
  • Multi-agent coordination (routing strategies)
  • Built-in resilience (retry / circuit breaker)
  • Tool call recording for audit/debug
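For readers who haven't met "stateful execution with checkpointing" before, here is a minimal plain-Java sketch of the idea (again, not spring-ai-agents' actual API): each node transforms a shared state map, and the flow records the last completed node so a restarted run skips work it already did instead of starting over:

```java
import java.util.LinkedHashMap;
import java.util.Map;
import java.util.function.UnaryOperator;

// Minimal checkpointing sketch (NOT the framework's real API): nodes run
// in insertion order; the last completed node name is the "checkpoint",
// so a re-run resumes after it rather than re-executing finished steps.
public class CheckpointedFlow {
    private final LinkedHashMap<String, UnaryOperator<Map<String, String>>> nodes =
        new LinkedHashMap<>();
    private String lastCompleted = null; // in a real system this would be persisted

    public CheckpointedFlow node(String name, UnaryOperator<Map<String, String>> step) {
        nodes.put(name, step);
        return this;
    }

    public Map<String, String> run(Map<String, String> state) {
        boolean resumePointPassed = (lastCompleted == null);
        for (var e : nodes.entrySet()) {
            if (!resumePointPassed) {
                // Already completed in a previous run; skip until the checkpoint.
                if (e.getKey().equals(lastCompleted)) resumePointPassed = true;
                continue;
            }
            state = e.getValue().apply(state);
            lastCompleted = e.getKey();
        }
        return state;
    }
}
```

A production framework would persist the checkpoint and the state map (that is what makes "resume after restart" possible), but the control flow is essentially this.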

What caught my attention is that it seems to go beyond typical LLM wrappers and actually provides a runtime for executing agent workflows, rather than just relying on prompt-driven orchestration.

Spring AI already documents agentic patterns (routing, sub-agents, etc.), but this project seems to focus more on execution control (state, graph, resilience).

I’m curious:

  • How does this compare to what people are building with Spring AI today?
  • Is this level of orchestration actually useful in production, or overkill?
  • Are there other similar approaches in the Java ecosystem?

Thanks

u/Aggressive-Low3345 — 12 days ago