u/rjboogey

I structured a prompt using the RACE framework and it blew up on r/ClaudeAI today. Here's the framework breakdown and the free app I built around it.

Earlier today I posted a prompt called "Think Bigger" on r/ClaudeAI and r/ChatGPT. It's a strategic business assessment prompt that I reverse-engineered from a real Claude vs ChatGPT comparison I did for a friend.

What got the most questions wasn't the prompt itself; it was the structure. People kept asking about the RACE labels I used (Role, Action, Context, Expectation) and why structuring it that way made a difference.

So I figured I'd do a proper breakdown here since this sub actually cares about the engineering side.

The RACE Framework:

Role — This isn't just "act as an expert." It's defining the specific lens the model should use. In the Think Bigger prompt, the role includes "20+ years advising founders" and "specializing in identifying blind spots." That level of specificity changes the entire output tone from generic consultant to someone who's seen real patterns.

Action — One clear directive verb. "Conduct a comprehensive strategic assessment" not "help me think about my business." The action should be something you could hand to a human and they'd know exactly what deliverable you expect.

Context — This is where 90% of prompt quality comes from. The Think Bigger prompt has 10 fill-in fields: business/role, revenue stage, industry, biggest challenge, what you've tried, team size, time horizon, risk tolerance, resources, and what "thinking bigger" means. Each one narrows the output. Remove any of them and the quality drops noticeably.

Expectation — The output spec. Think Bigger asks for 8 specific sections: Honest Diagnosis, Market Position Audit, Three Bold Growth Levers, the "10x Question," 90-Day Momentum Plan, Resource Optimization, Risk/Reward Matrix, and The One Thing. Without this, the model decides what to give you. With it, you get exactly what you need.
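To make the four parts concrete, here's a minimal sketch of how a RACE prompt assembles from them. This is my own illustration, not the tool's actual code; the field names and sample values are stand-ins, not the exact Think Bigger text.

```python
# Minimal sketch: assemble a RACE-structured prompt from its four parts.
# Field names and sample values below are illustrative placeholders.

def build_race_prompt(role: str, action: str, context: dict, expectation: list) -> str:
    """Combine Role, Action, Context, and Expectation into one prompt string."""
    # Context: each fill-in field narrows the output, so list them explicitly.
    context_block = "\n".join(f"- {field}: {value}" for field, value in context.items())
    # Expectation: a numbered output spec so the model doesn't decide the format.
    expectation_block = "\n".join(f"{i}. {section}" for i, section in enumerate(expectation, 1))
    return (
        f"Role: {role}\n\n"
        f"Action: {action}\n\n"
        f"Context:\n{context_block}\n\n"
        f"Expected output sections:\n{expectation_block}"
    )

prompt = build_race_prompt(
    role="Advisor with 20+ years advising founders, specializing in identifying blind spots",
    action="Conduct a comprehensive strategic assessment of my business",
    context={
        "Business/role": "solo SaaS founder",
        "Revenue stage": "pre-revenue",
        "Biggest challenge": "customer acquisition",
    },
    expectation=["Honest Diagnosis", "Three Bold Growth Levers", "90-Day Momentum Plan"],
)
print(prompt)
```

The point of templating it like this is that every section is explicit: if a Context field is empty or an Expectation section is missing, you can see the gap before you ever send the prompt.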

Why this works across models: The structure isn't model-specific. I've tested it on Claude, ChatGPT, and Gemini. Claude gives you harder truths. ChatGPT gives more options. But the framework produces good output on all of them because you're solving the real problem — giving the model enough structured context to work with.

The app: I actually built a tool around this framework called RACEprompt. You describe what you need in plain language, it asks 3-4 smart clarifying questions, then generates a full RACE-structured prompt automatically. It also has 75+ pre-built templates (including Think Bigger) that you can customize and run directly with AI.

Free tier gives you unlimited prompt building plus 3 AI executions per day. Available on iOS and web at app.drjonesy.com. Android is currently in beta, and macOS is under review.

The framework itself, not the app, is the most valuable part. If you just learn to think in Role/Action/Context/Expectation, your prompts improve immediately without any tool.

Here's the Think Bigger prompt if you want to try it: https://www.reddit.com/r/ClaudeAI/comments/1sbm4li/i_used_claude_to_tear_apart_a_chatgptgenerated/

What frameworks or structures are other people here using? I'm always looking to refine the approach.
