r/GoogleGemini

Maestro v1.5.0 — multi-agent orchestration now runs on both Claude Code and Gemini CLI
▲ 4 r/GoogleGeminiAI+3 crossposts

Maestro is an open-source multi-agent orchestration platform that coordinates 22 specialized AI subagents through structured workflows — design dialogue, implementation planning, parallel execution, and quality gates.

It started as a Gemini CLI extension, and with v1.5.0 it now runs on Claude Code as a native plugin too.

Install:

# Gemini CLI
gemini extensions install https://github.com/josstei/maestro-orchestrate

# Claude Code
claude plugin marketplace add josstei/maestro-orchestrate
claude plugin install maestro@maestro-orchestrator --scope user

What's new in v1.5.0:

Claude Code support. The entire platform — all 22 agents, 12 commands, methodology skills, lifecycle hooks, MCP state management — now works as a Claude Code plugin. Agents show up with a maestro: prefix and all slash commands (/orchestrate, /review, /debug, /security-audit, etc.) work out of the box.

Deeper design and planning. The design dialogue now scales rigor by depth level. Standard mode adds inline rationale annotations on every key design decision. Deep mode adds per-decision alternatives, trade-off narration, and full requirement traceability (Traces To: REQ-N linking requirements to design decisions bidirectionally). Design sections now scale by task complexity — simple tasks get 3 concise sections, medium tasks get 5, complex tasks get all 7 with 200-300 words each. A formal revision protocol ensures revised sections are re-presented for approval inline, with conflict detection if later sections invalidate earlier decisions.

42-step orchestration backbone. Both runtimes now load the same numbered-step procedural sequence from a single shared reference file. Hard-gates enforce critical checkpoints — plan validation before presentation, per-phase state transitions, delegation-only remediation after code review. The previous loose conversational flow has been replaced with a formally structured, gate-enforced process. The orchestrate command went from 347 inlined lines (Gemini) / 773 lines (Claude) down to thin runtime preambles.
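To make the "hard-gate" idea concrete, here is a minimal sketch of a numbered-step sequence that refuses to advance past a gate until it is explicitly passed. The class and step names are illustrative assumptions, not Maestro's actual internals.

```python
from dataclasses import dataclass

@dataclass
class Step:
    number: int
    name: str
    hard_gate: bool = False  # must be explicitly acknowledged before advancing

class StepRunner:
    def __init__(self, steps):
        self.steps = sorted(steps, key=lambda s: s.number)
        self.cursor = 0

    def advance(self, gate_passed=False):
        """Move past the current step; refuse to cross an unpassed hard-gate."""
        step = self.steps[self.cursor]
        if step.hard_gate and not gate_passed:
            raise RuntimeError(f"hard-gate at step {step.number} ({step.name})")
        self.cursor += 1
        return step
```

The point of the gate flag is that the checkpoint is structural rather than conversational: the runner cannot silently drift past plan validation the way a loose prompt-driven flow can.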

Agent capability enforcement. A new server-side validation rule catches plan misconfigurations where read-only agents get assigned to file-creating phases — before execution starts, not after it fails. Implementation planning now includes an agent-deliverable compatibility check as a hard-gate.
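As a rough illustration of that compatibility check (agent names and capability sets here are invented for the example, not Maestro's real configuration), the rule amounts to: any phase that creates files must be assigned an agent whose capability set includes write access.

```python
# Hypothetical agent capability table; real agents/capabilities may differ.
AGENT_CAPS = {
    "maestro:code-reviewer": {"read"},            # read-only agent
    "maestro:coder": {"read", "write", "shell"},  # can create files
}

def validate_plan(phases):
    """Return misconfiguration errors before execution starts, not after it fails."""
    errors = []
    for phase in phases:
        caps = AGENT_CAPS.get(phase["agent"], set())
        if phase.get("creates_files") and "write" not in caps:
            errors.append(
                f"phase '{phase['name']}': {phase['agent']} is read-only "
                "but the phase creates files"
            )
    return errors
```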

Security hardening. Path containment validation on session state directories, symlink checks on hook state, fail-closed policy enforcement on shell commands, bounded stdin reads (1 MB cap), explicit file permissions, and filesystem path stripping from MCP error messages.
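Two of those measures translate to short, generic checks. The following is a sketch under assumed names (the real implementation is not shown in this post): path containment that resolves symlinks before comparing, and a bounded read that fails closed rather than buffering unbounded input.

```python
import io
import os

MAX_STDIN = 1_000_000  # 1 MB cap, mirroring the bounded-stdin policy above

def contained(base_dir, relative):
    """True only if `relative` resolves (symlinks included) inside base_dir."""
    base = os.path.realpath(base_dir)
    target = os.path.realpath(os.path.join(base, relative))
    return os.path.commonpath([base, target]) == base

def read_bounded(stream, cap=MAX_STDIN):
    """Read at most `cap` bytes; fail closed if the input is larger."""
    data = stream.read(cap + 1)
    if len(data) > cap:
        raise ValueError("input exceeds cap; refusing to proceed")
    return data
```

Resolving with `realpath` before the prefix comparison is what defeats the symlink escape: a link inside the state directory pointing elsewhere resolves to its real target and fails containment.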

Deferred resource loading. Templates and references are loaded at the step where they're consumed instead of all at once during classification. Keeps the context window lean for the phases that matter.
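The deferral pattern itself is simple; this is a minimal sketch (loader and names are illustrative) where the first consuming step pays the load cost and classification touches nothing:

```python
class DeferredResource:
    """Wrap a loader; the resource is read only when first consumed."""

    def __init__(self, loader):
        self._loader = loader
        self._value = None
        self._loaded = False

    def get(self):
        if not self._loaded:  # first consumer triggers the actual load
            self._value = self._loader()
            self._loaded = True
        return self._value
```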

What Maestro does (if you haven't seen it before):

You describe what you want to build. Maestro classifies the task complexity (simple/medium/complex), asks structured design questions, proposes architectural approaches with trade-offs, generates an implementation plan with dependency graphs, then delegates to specialized agents — coder, tester, architect, security engineer, data engineer, etc. — with parallel execution for independent phases.

Simple tasks get an Express workflow (1-2 questions, brief, single agent, code review, done). Complex tasks get the full Standard workflow with a design document, implementation plan, execution mode selection, and quality gates.

22 agents across 8 domains: Engineering, Product, Design, Content, SEO, Compliance, Internationalization, Analytics. Each agent has least-privilege tool access enforced via frontmatter — read-only agents can't run shell commands, shell-only agents can't write files.

Links:

Thank you all for your support with maestro!

It really is awesome to see people talking about how much it has improved their workflow. Be sure to give it a star to help get the word out!

It's always been a goal of mine to build something people actually use and enjoy so thank you very much for helping me reach that goal!

Next update: Codex integration!

u/josstei — 29 minutes ago
Oracle slashes 30k jobs, Slop is not necessarily the future, Coding agents could make free software matter again and many other AI links from Hacker News
▲ 21 r/ArtificialInteligence+9 crossposts

Hey everyone, I just sent the 26th issue of AI Hacker Newsletter, a weekly roundup of the best AI links and discussions from Hacker News. Here are some of the links:

  • Coding agents could make free software matter again - comments
  • AI got the blame for the Iran school bombing. The truth is more worrying - comments
  • Slop is not necessarily the future - comments
  • Oracle slashes 30k jobs - comments
  • OpenAI closes funding round at an $852B valuation - comments

If you enjoy such links, I send over 30 every week. You can subscribe here: https://hackernewsai.com/

u/alexeestec — 11 hours ago
▲ 10 r/GeminiAI+1 crossposts

Gemini Pro UI is a disgusting productivity disaster...

I watched loads of videos of YouTubers singing Gemini's praises, so I decided to give it a go.

I'm flabbergasted by the truth: Gemini is a hot mess, really hard to navigate and with a UI that makes the experience even worse (Sissie Hsiao must have been totally drunk when she approved this).

After months using the Pro version of Gemini, I am officially done making excuses for the state of its interface and its deteriorating logical agility. While the model’s raw cognitive capacity is arguably higher than ChatGPT's in specific benchmarks, the actual user experience is an engineering failure of epic proportions.

Here is a technical breakdown of why this product is currently unusable for professional workflows:

1. The Sidebar is a Cognitive Graveyard

  • A. There is zero organization. No folders, no tags, and no categorization for "Gems."
  • B. Within a specific Gem, I can only see the 3 most recent chats. Everything else is dumped into a massive, chronological "Chat History" on the left, mixed with every other random query.
  • C. For a power user this "flat" UI design increases cognitive load by 100%. I shouldn't have to perform a manual lexical scan of 50 titles just to find a specific thread, and I'm shocked that the team thought it was a great idea... I'm seriously speechless.

2. The Hallucination Loop & "The Restart Tax"

  • A. Gemini enters "hallucination loops" far more frequently than its competitors. Once it goes off the rails, it becomes "contextually poisoned."
  • B. It is nearly impossible to steer the model back once a loop starts. The only technical solution is to abandon the thread and start a new one.
  • C. Because I’m forced to restart chats constantly to fix the model's internal failures, the sidebar becomes even more cluttered, making the UI mess exponentially worse.

3. Logical Anchoring & Context Rigidity

  • A. Compared to ChatGPT, Gemini is significantly worse at predicting and connecting past and present information.
  • B. The model suffers from "Information Anchoring." If I provide a piece of data at the start of a chat and then modify or update that data later, Gemini remains anchored to the initial (now incorrect) information.
  • C. It fails to perform "dynamic context updates." It continues to reference the initial state of the conversation even when explicitly told that the parameters have changed. It’s as if the model’s "attention" is stuck in the past.

4. The "Paid Service" Paradox

  • I am paying for a premium service that treats my data like a temporary scratchpad rather than a structured knowledge base. The lack of basic UX features (like a way to categorise chats at a glance) is an insult to paying customers.
u/Infinite-Country2577 — 9 hours ago

I asked Gemini Gems to be my "study buddy" and it started calling me "champ" – Google really outdid themselves

So I finally tried Google’s new Gemini Gems (their answer to Claude Projects). The idea is you can create custom "Gems": specialized AI personas for different tasks. Sounds useful, right?

I decided to make a "Study Buddy" Gem for my finals. Gave it a nice prompt: "You are a patient, encouraging tutor. Help me understand complex topics. Be supportive but not cringey."

What I got instead:

  • It calls me "champ" every third sentence. "Great question, champ! Let's break down calculus..."
  • When I got an answer wrong, it said: "Oof, that's not it, bestie. Wanna try again?" – bestie?
  • I asked for a hint on a physics problem. It responded with a 3-paragraph pep talk about "the journey of learning" before actually giving the hint.
  • It randomly inserts "You've got this, future Nobel laureate!" – sir, I'm just trying to pass.

Meanwhile, my friend showed me Claude Projects – same concept but without the cringe. Claude just… helps. No "champ," no "bestie," no motivational posters in text form.

TL;DR: Gemini Gems = overenthusiastic high school guidance counselor. Claude Projects = calm, competent professor. Both work, but one made me laugh so hard I forgot what I was studying.

Has anyone else created a Gem that went completely off the rails? Share your stories. I need to know I'm not alone.

u/Remarkable-Dark2840 — 20 hours ago

Google just released Gemma 4

Gemma 4 just dropped with open weights, built on the same tech as Gemini 3. Covers everything from tiny edge models to a 31B flagship, plus multimodal support and agent-style workflows.

Anyone here planning to test it?

u/TeamAlphaBOLD — 20 hours ago

Suddenly Gemini's read-aloud feature started saying things that are not in the output.

I was using Gemini, and I was feeling lazy about reading the output text, so I was using the read-aloud feature. Keep in mind I was not using Live chat or anything like that. Suddenly, after reading a certain portion of the output, it said, "ok let me just repeat" and then started reading from the beginning, including my question, without ever finishing the whole text. 🤯 I know this should not happen, but how?

u/Peace-Seeker-0001 — 16 hours ago