r/BuildWithClaude


Most people use subagents wrong. Here's what actually works.

Subagents are one of the most powerful features in Claude Code, and most people either ignore them or use them like a worse version of asking Claude directly. I use them in probably 60% of my sessions now, and the difference is night and day — but it took me a while to figure out why some attempts flopped and others saved me 20 minutes.

Three mistakes I kept making

1. Treating subagents like a second Claude chat. I'd spin one up and give it the same vague prompt I'd give my main session — "look at this codebase and tell me what's going on." That's just burning tokens on unfocused exploration. A subagent without a specific mission comes back with a wall of text you still have to synthesize yourself.

2. Running them sequentially when they could run in parallel. I was launching one agent, waiting for it to finish, reading the result, then launching the next one. That's literally the same as doing it in your main thread — you're just adding overhead. The whole point is parallelism.

3. Dumping the results into my main context. I'd have an agent explore 40 files and then paste the full output back into my conversation. Now my context window is stuffed with code I only needed once, and Claude starts losing my earlier requirements.

What actually works

1. Brief them like a colleague, not a chatbot. "Search the data layer for how we handle campaign state — specifically look for persistence patterns and any sync logic. Report back the file paths and the pattern, not the full code." Specific mission, specific output format. The agent comes back with 10 useful lines instead of 200 lines of noise.

2. Launch 2-3 agents in parallel for multi-area tasks. Planning a feature that touches the UI, the data layer, and an API? Three explore agents, one per area, all running simultaneously. What takes 15 minutes sequentially finishes in 3. Each one returns a focused summary. My main window gets three clean reports I can actually make decisions from.

3. Use them for independent review. After I finish a feature, I spin up a review agent with a fresh context. It hasn't seen my thought process, my wrong turns, or my justifications. It just sees the diff. That independence catches things I'd never notice reviewing my own work — missing edge cases, inconsistent patterns, subtle bugs that look fine when you've been staring at them for an hour.
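The parallel pattern in point 2 can be sketched from the terminal too, assuming the `claude -p` (headless print-mode) flag; the prompts and report file names here are my own, not from any official example:

```shell
# Launch one focused, headless agent per area and wait for all of them.
# Each report stays out of the main session's context until you want it.
explore_parallel() {
  i=0
  for prompt in "$@"; do
    i=$((i + 1))
    claude -p "$prompt" > "report_$i.txt" &
  done
  wait  # all agents run simultaneously; block until every report is written
}

# Usage: one specific mission per agent, summary-only output requested.
# explore_parallel \
#   "Explore the UI layer for campaign state. Paths + patterns only." \
#   "Explore the data layer for persistence/sync. Paths + patterns only." \
#   "Explore the API for campaign endpoints. Paths + patterns only."
```

Each report file is a focused summary you can read (or have the main session read) on your own terms.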

The real unlock

Subagents aren't about doing more work — they're about keeping your main conversation coherent. Every file Claude reads, every search result, every code block fills the context window. When it fills up, Claude loses your earlier context. Your requirements from the beginning of the session just quietly disappear.

The pattern that changed everything for me: plan in the main thread, delegate exploration to subagents, synthesize their reports myself, then execute. The main window never sees raw exploration output. It stays clean, focused, and remembers what I told it 45 minutes ago.

u/Ok_Industry_5555 — 1 day ago

I've been using hooks for months. I was only using 2 of the 9 types.

I started building hooks early — a duplicate code checker that greps after every edit, a pre-session gate that blocks Claude from writing code until it reads my lessons file. Both run automatically, no prompting needed, zero API cost. They've been in my setup since March.

I thought that was the whole feature. Then I started studying for Anthropic's certification and realized I was barely scratching the surface.

The basics (what most people know)

Hooks are shell scripts that run automatically when Claude does something. They live in `settings.json`, not in your CLAUDE.md. The difference matters: CLAUDE.md is instructions Claude *chooses* to follow. Hooks are guardrails that *execute whether Claude likes it or not.*
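For reference, this is roughly the `settings.json` shape for wiring one up; the matcher and script path here are my own sketch, not from the post:

```json
{
  "hooks": {
    "PostToolUse": [
      {
        "matcher": "Edit|Write",
        "hooks": [
          {
            "type": "command",
            "command": "$CLAUDE_PROJECT_DIR/.claude/hooks/dupe-check.sh"
          }
        ]
      }
    ]
  }
}
```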

The two I was already running:

  1. Pre-session gate.

Before Claude can write a single line of code, a hook checks whether it read my lessons file. If it didn't, the edit gets blocked. I built this after Claude skipped mandatory reads and repeated a mistake that cost me an entire day. Now it physically can't skip them.
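A sketch of that gate as a shell function. The marker-file path is my own convention, but the exit-code behavior is real: a PreToolUse hook exiting with status 2 blocks the tool call and feeds stderr back to Claude.

```shell
# Pre-session gate sketch: block edits until the lessons file has been read.
gate() {
  marker="${CLAUDE_PROJECT_DIR:-.}/.claude/lessons_read"
  if [ ! -f "$marker" ]; then
    # Status 2 = block the tool; stderr is shown to the model as the reason.
    echo "Blocked: read the lessons file first, then: touch $marker" >&2
    return 2
  fi
  return 0
}
# As a real PreToolUse hook, put this in a script that ends with: gate; exit $?
```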

  2. Duplicate code check.

After every file edit, a hook greps the project for functions with the same name. Catches the case where Claude writes a helper that already exists somewhere else in the codebase. Simple grep, runs in milliseconds.
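A minimal version of that check; the regex assumes Python-style `def` lines, so swap the pattern for your language:

```shell
# Duplicate-name check sketch: list function names defined in more than
# one place. Any output means a helper already exists elsewhere.
dupe_check() {
  grep -rhoE '^def [A-Za-z_][A-Za-z0-9_]*' --include='*.py' "${1:-.}" \
    | sort | uniq -d
}
```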

What I was missing

There are 9 hook types. I was only using PreToolUse and PostToolUse — the ones that fire before and after Claude uses a tool. But there's also SessionStart, SessionEnd, SubagentStop, UserPromptSubmit, and more. Each one receives different data as input.

SessionStart alone, paired with my /Primer command, opens up a lot. I now have a hook that auto-loads project context the moment a session begins — no manual prompting, no "hey Claude, read the project files first." It just happens.
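A SessionStart sketch along those lines; whatever the hook prints on stdout gets added to the session's context, and the file list here is my own convention:

```shell
# Auto-load project context the moment a session begins.
session_primer() {
  for f in CLAUDE.md .claude/lessons.md docs/architecture.md; do
    if [ -f "$f" ]; then
      echo "--- $f ---"
      cat "$f"
    fi
  done
  return 0
}
```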

The other thing I hadn't considered: lightweight hooks vs AI-powered hooks. My grep-based checker is lightweight — fast, free, runs on every edit. But you can also build hooks that use the Agent SDK to have Claude review Claude's own output before it ships. That's slower and costs tokens, but catches semantic problems a grep never would. Two different weight classes for two different jobs.
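A rough sketch of the heavier class, assuming `claude -p` accepts a piped diff (the prompt wording is mine):

```shell
# AI-powered hook sketch: a second Claude reviews the change semantically.
# Slower and costs tokens, unlike the grep check; reserve it for diffs.
ai_review() {
  claude -p "Review this diff for bugs and inconsistent patterns. Be terse."
}
# e.g. git diff HEAD | ai_review
```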

The layer most people skip

CLAUDE.md tells Claude what to do. Memory tells Claude what to remember. Hooks tell Claude what it *can't get away with.* They're the enforcement layer — the thing that turns guidelines into guardrails.

I had the first two dialed in for months before I realized the third one had 9 entry points I was mostly ignoring. The conversations got better once I stopped relying on Claude to police itself and let the hooks handle it.

u/Ok_Industry_5555 — 2 days ago

Your favorite uses for the 5 daily included routine runs?

Not sure what to use the "routine runs" feature for, but wanted to take advantage of it.

What are your favorite uses for it?

What does it help you the most with?

u/QuestionAsker2030 — 22 hours ago

I use Gemini Pro for polishing sentences. I use Claude Code for building everything else.

I've been building production apps with Claude Code in the terminal for months. Before that I tried Gemini Pro for coding tasks. I still use Gemini — but only when working within Google's line of products.

Gemini is good at refining language and scripts. If I need a sentence rewritten, a paragraph tightened, or marketing copy polished, it does that well. I actually use it as a fallback provider in one of my local tools for exactly that reason.

But for actual building — writing code, debugging, working across files — I can't get accuracy out of it. Here's what keeps happening:

**It spins on the same mistake.** I'll point out an error. It apologizes, generates a "fix" that has the same fundamental problem, just rearranged. I correct it again. Same thing back, slightly different wording. Three rounds in and I'm looking at the same bug with a new coat of paint.

**It assumes instead of verifying.** I'll ask it to fix a function and it'll rewrite the whole thing based on what it thinks the code looks like — without reading the file first. Claude Code reads the actual file. It greps for the function. It checks what calls it. Then it makes a targeted edit. Gemini gives you a confident answer based on a guess about what your codebase contains.

**It doesn't hold context the same way.** In Claude Code, I have a CLAUDE.md that loads project context, a memory file that persists across sessions, and rules files per project. Claude reads all of that before it touches anything. Every session starts informed. With Gemini I was re-explaining my project structure every conversation. There's no equivalent of "read this file at startup and follow these rules."
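For anyone who hasn't set that up: a stripped-down sketch of what such a CLAUDE.md might contain (the contents are entirely my own invention):

```markdown
# Project notes (loaded automatically every session)

- Read `.claude/memory.md` before touching code; append new lessons to it.
- Make targeted edits only. Don't rename or "clean up" code you weren't asked to touch.
- Per-project rules live in `.claude/rules/`; follow them over general habits.
```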

**It doesn't know when to stop.** Claude Code will make a targeted two-line fix and move on. Gemini rewrites the surrounding function, adds comments I didn't ask for, changes variable names for "clarity," and introduces new bugs in code I never asked it to touch. I spent more time reverting Gemini's helpful additions than I did on the actual fix.

I'm not saying Gemini is bad. For writing, summarization, and research it's solid. But for the terminal-based, file-aware, rule-following workflow I depend on — where the AI needs to read before it writes and verify before it claims — Claude Code is the only thing that works for me.

>"Gemini could try to send me a virus. It'd get lazy halfway through, assume my OS without checking, and rewrite the payload three times with the same bug, adding comments I didn't ask for."

The terminal is where the real work happens. And Claude Code is the only AI tool I've found that treats my codebase like it actually exists, instead of imagining what it might look like.

u/Ok_Industry_5555 — 8 days ago