r/whaaat_ai

Six months ago, every piece of content our agents produced at whaaat ai sounded like it came from the same polite, slightly enthusiastic copywriter. Technically correct, but missing personality. We could swap our brand name for any competitor's and nobody would notice the difference.

The fix took about two hours per person and now runs as a standard service we offer through our agency. Full disclosure: I work on the AI agent team at whaaat ai.

The process

You sit down with Claude (Opus works best for this, extended thinking on) and paste a single prompt that turns it into what I call a "Taste Interviewer." 100 questions across seven categories: core beliefs, writing mechanics, aesthetic crimes (things that make you physically cringe in other people's writing), voice and personality, structural preferences, hard nos, and red flags.

The interviewer prompt has rules that matter. One question at a time. It pushes back on vague answers. If you say "I like to keep it simple," Claude will ask what "simple" means to you specifically, with examples of simple done well and simple done lazily. It flags contradictions from earlier answers. It follows interesting threads instead of marching through categories in order.
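
For a sense of what those rules look like in practice, here's a rough skeleton. This is not our actual prompt, just the shape of the instructions described above:

```
You are a Taste Interviewer. Extract my writing taste through 100
questions across seven categories: core beliefs, writing mechanics,
aesthetic crimes, voice and personality, structural preferences,
hard nos, red flags.

Rules:
- Ask exactly one question at a time. Wait for my answer.
- If an answer is vague ("I like to keep it simple"), push back: ask
  what that word means to me specifically, with an example of it done
  well and an example of it done lazily.
- If an answer contradicts something I said earlier, point it out.
- Follow interesting threads before returning to the category list.
```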

I dictate my answers instead of typing. Dictation is faster and more honest because you think less before responding. The whole thing takes 90 minutes dictated, closer to two hours typed.

What comes out is a raw document, 15,000 to 20,000 words. Your complete voice, unedited. Some of the questions feel more like a coaching session than a content exercise, so we warn people upfront. That part caught me off guard the first time, lol.

Compression

The final raw interview is way too large to use as context. 20,000 words loaded into every conversation burns tokens fast and costs real money if you're running this across multiple daily sessions.

So the second prompt, the important one, is a "Voice Compiler" that compresses the raw interview into a structured about-me.md file. The target is 2,000 to 4,000 tokens with a hard ceiling at 5,000. The compiler applies a single test to every line: "If this line disappeared, would the AI write, edit, judge, or decide differently?" If yes, keep it. If no, cut it.

The output uses XML-style sections: identity context, voice fingerprint, writing laws, hard refusals, taste loves, taste disgusts, phrase bank, signature tells, decision rules and productive contradictions. Plus 3 to 6 examples in bad/good format that teach the AI your patterns.
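
To make that structure concrete, here's a stripped-down skeleton of what the compiled file can look like. The section names are the ones listed above; the contents are placeholders, not anyone's real voice file:

```
<identity_context>Who I am, what I write, for whom.</identity_context>
<voice_fingerprint>Short sentences. Dry humor. No exclamation marks.</voice_fingerprint>
<writing_laws>Rules the AI must never break.</writing_laws>
<hard_refusals>Topics, framings and phrases that are off the table.</hard_refusals>
<taste_loves>Patterns to reach for.</taste_loves>
<taste_disgusts>Patterns that make me cringe.</taste_disgusts>
<phrase_bank>Expressions I actually use.</phrase_bank>
<signature_tells>Quirks that mark a text as mine.</signature_tells>
<decision_rules>If X, then write Y.</decision_rules>
<productive_contradictions>Tensions to preserve, not resolve.</productive_contradictions>

<examples>
Bad: "We're thrilled to announce our groundbreaking new feature!"
Good: "We shipped something. Here's what it does and where it still breaks."
</examples>
```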

The key distinction: compression is different from summarization. Summarizing loses nuance. Compressing keeps everything that changes AI behavior and strips everything that just sounds nice about you.
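
An invented example of how that test plays out (not from a real interview):

```
Raw interview line:  "I spent years in journalism, which is probably why
                      I hate burying the point. I want the conclusion in
                      the first sentence, always."
Summarized (cut):    "Values clarity and directness."   <- sounds nice, changes nothing
Compressed (kept):   "Law: conclusion in the first sentence. Never build up to it."
```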

What changed

Before the voice file, our content agents produced output that needed 30 to 40 minutes of editing per piece to sound like the person it was supposed to come from. After embedding the compressed file as standing context, editing dropped to under 10 minutes. Some pieces can be published without any editing.

The part I keep tweaking: the voice file drifts. Your opinions shift, your style evolves, and new pet peeves develop. A file from six months ago makes the AI sound like you six months ago. So we've set up a monthly review now: 10 minutes, just reading through and updating what changed. Unfortunately, we still haven't found a clean way to automate that review.

For anyone running Claude specifically: drop the about-me.md into your Cowork folder and it loads automatically in every session. You can also wrap it in a Skill that applies the voice to every writing task without manual setup. Both approaches work; the Skill route gives you more control over when the voice applies and when it stays quiet.
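
A minimal sketch of what that Skill wrapper could look like, assuming the SKILL.md format with name/description frontmatter (check the current Skills docs for the exact fields; the names below are placeholders):

```
---
name: brand-voice
description: Apply the compiled voice profile when writing or editing
  content on behalf of the person. Do not apply it to code, data work
  or internal notes.
---

Load about-me.md from this Skill's folder and treat it as standing
context for every writing task. Follow writing_laws and hard_refusals
strictly; treat taste_loves and phrase_bank as guidance, not requirements.
```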

The full interview prompt and the compiler prompt are each about 400 words. Happy to share the German versions if anyone wants them (we built the original process in German; the English translation works identically). The prompts are the easy part. The hard part is answering 100 questions about yourself without defaulting to the version of yourself you think sounds good.

u/Ok_Today5649 — 7 days ago

Y Combinator published their Summer 2026 "Requests for Startups" list last week. Sixteen ideas they want to fund. One entry by Aaron Epstein has a line that I think frames the next five years of software: "The next wave of internet users will be AI agents, not humans."

I work on the AI agent team at whaaat ai, and this matched something we've been running into constantly. Every time we connect a new tool to our agent stack, the bottleneck is the same: software that was designed for someone looking at a screen and clicking buttons.

The wall agents hit

Think about what happens when you try to automate something in your business today. Your CRM has a beautiful dashboard with drag-and-drop pipelines. Your project management tool has kanban boards and color-coded labels. Your accounting software has dropdown menus and multi-step wizards. For a human, all of that is helpful. For an AI agent trying to move a lead from one stage to another, create a task or categorize an expense, every single one of those visual elements is irrelevant at best and an obstacle at worst. The agent needs an API endpoint, a documented data schema and predictable responses. Cookie banners, captchas, session timeouts and confirmation dialogs are walls.

Epstein calls this "Making Something Agents Want," a riff on YC's classic "Make Something People Want." The argument: most software today works poorly for agents because nobody built it with agents in mind.

Where the opportunity sits

This is where it gets interesting for anyone building or thinking about building a business. The global software market is massive. CRMs, HR platforms, accounting tools, compliance systems, invoicing, scheduling, inventory management. Every single category was built for human users. Every single one now needs a version (or a layer) that agents can consume natively.

The companies that bolt on agent support as an afterthought will struggle. An API added to a product designed around a visual workflow always feels like a translation layer. Clunky, incomplete, constantly breaking when the UI team ships changes that nobody told the API team about. The real opportunity is building agent-first from day one, where the API is the product and the human dashboard is one optional interface on top.

We run five agents across our operation. Builder handles code, Operator manages automations, Cockpit is the monitoring layer, Researcher does market and competitor scanning, Marketing handles content. All five communicate through MCP, an open protocol that standardizes how agents talk to tools. When we add a new integration, it's one config file, and every agent that needs access has it immediately.

MCP is gaining traction specifically because of this problem. Instead of building custom integrations for every agent-tool combination, you build one MCP server and any agent that speaks the protocol can use it. For context, we connect to Gmail, Linear, Todoist, Stripe and GitHub this way. Setting up a new connection used to take a developer a day of custom API work. Now it takes a config file and 15 minutes.
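
For anyone who hasn't set one up: an entry in an MCP client config (the mcpServers format Claude Desktop reads) looks roughly like this. The server packages and env vars here are illustrative placeholders, not the exact ones we run:

```json
{
  "mcpServers": {
    "linear": {
      "command": "npx",
      "args": ["-y", "mcp-server-linear"],
      "env": { "LINEAR_API_KEY": "..." }
    },
    "stripe": {
      "command": "npx",
      "args": ["-y", "mcp-server-stripe"],
      "env": { "STRIPE_API_KEY": "..." }
    }
  }
}
```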

Two paths I see

If you already run a software product: look at whether your product has an API that agents can use end-to-end without a human in the loop. Not "we have an API" in the marketing sense, but genuinely: can an agent complete a full workflow through your API alone? If the answer involves "and then the user clicks confirm in the UI," there's a gap.
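
A concrete way to run that check: script the whole workflow and see where it breaks. Everything below is hypothetical (the crm.example.com host, the endpoints, the field names); the point is that no step should require a human clicking something in a UI:

```python
import requests

BASE = "https://crm.example.com/api/v1"  # hypothetical API
HEADERS = {"Authorization": "Bearer <token>"}

# Step 1: create a lead (no form to fill in).
lead = requests.post(f"{BASE}/leads", headers=HEADERS,
                     json={"name": "Acme GmbH", "source": "inbound"}).json()

# Step 2: move it to the next pipeline stage (no drag-and-drop).
requests.patch(f"{BASE}/leads/{lead['id']}", headers=HEADERS,
               json={"stage": "qualified"})

# Step 3: attach a follow-up task (no confirmation dialog).
requests.post(f"{BASE}/tasks", headers=HEADERS,
              json={"lead_id": lead["id"], "due": "2026-03-01",
                    "note": "send pricing"})

# If any step above only exists as a button in the UI, that's the gap.
```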

If you're exploring what to build: pick any software category where the current tools are optimized for human interaction and build the agent-native version. Accounting that agents can query and categorize through. Project management that agents can update without navigating boards. CRM pipelines that agents can move deals through based on rules, not drag-and-drop.

The catch I keep coming back to: agent-first software still needs to be inspectable by humans. The agents do the work, but a founder or operator needs to see what happened, catch mistakes and adjust rules. Building that inspection layer without falling back into "just build a dashboard" is the design challenge I haven't seen anyone solve cleanly yet. Our current approach is Live Artifacts in Cowork, where I describe what I want to see and Claude builds it on the fly, but that only works for the person asking. If anyone has built a good pattern for multi-user visibility into agent-operated systems, I'd like to hear about it.

u/Ok_Today5649 — 9 days ago