I tested 200+ prompts over 6 months. Here are the 7 patterns that actually move the needle (with examples)

I've been obsessively benchmarking prompt structures across Claude, GPT-4, and Gemini for a client project. Not vibes — actual A/B evals with human raters. Here's what separates prompts that kind of work from ones that are embarrassingly good.

1. Persona + Constraint stacking
Most people assign a persona. Almost nobody adds constraints on top of the persona. The combo is where the magic happens.

You are a senior systems engineer who has been burned by vague requirements three times this quarter. Review this spec and flag anything that would cause ambiguity during implementation. Be specific, be ruthless, and skip anything obvious.

2. The "Anti-example" trick
Showing what you don't want outperforms describing what you do want by ~40% in my evals. Brains (and models) pattern-match on contrast.

Write a product description for this blender.

NOT like this:
"Experience the revolutionary power of BlendMaster Pro — your ultimate kitchen companion for crafting delicious smoothies!"

Like this: [your actual good example]

3. Role reversal as a QA tool
After getting an output, immediately prompt: "What are the 3 weakest assumptions in your response above?" — the model will catch things you didn't think to ask about in your initial prompt. This alone saved my team hours of review.

4. Format as a cognitive scaffold
Don't just say "be concise". Specify the cognitive structure you want. There's a huge difference between:

  • "Answer briefly" → vague, ignored
  • "Answer in: one sentence conclusion, then 3 bullet supporting points, no fluff" → model now has a scaffold to fill

5. Emotional priming (yes, really)
Adding "This is important to get right" or "Take your time with this" measurably improves output quality on complex tasks. It sounds silly but it works — probably because these phrases appear before high-quality human writing in training data.

6. Chain-of-thought with a twist — ask for uncertainty
Standard CoT: "Think step by step."
Better: "Think step by step. At each step, rate your confidence 1-5 and flag if you're guessing."
You get the reasoning AND a map of where hallucinations are most likely hiding.
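
If you hit the API instead of the chat window, this is easy to bake into a helper. A minimal sketch using the official anthropic Python SDK (the model id is a placeholder; the same idea works with any provider):

    import anthropic

    client = anthropic.Anthropic()  # reads ANTHROPIC_API_KEY from the environment

    CONFIDENCE_COT = (
        "Think step by step. At each step, rate your confidence 1-5 "
        "and flag if you're guessing."
    )

    def ask_with_confidence(question: str) -> str:
        # Append the uncertainty instruction to every question
        response = client.messages.create(
            model="claude-sonnet-4-20250514",  # placeholder, swap in your model
            max_tokens=1024,
            messages=[{"role": "user", "content": f"{question}\n\n{CONFIDENCE_COT}"}],
        )
        return response.content[0].text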

7. The "Steelman first" pattern for critical tasks
Before asking the model to critique anything, make it argue for the thing first. You get a more balanced critique that doesn't just perform skepticism.

First, make the strongest possible case FOR this business idea. Then, with that context in mind, identify its most serious flaws.
reddit.com
u/AdCold1610 — 2 days ago

i built a prompt that does a full seo audit in one shot — saves me like 2 hours per page

been doing content for a while and honestly the most boring part is always the seo checklist after writing.

so i started using this prompt structure instead of going through tools one by one:

you're an seo expert. for the content i give you:

1. write optimized title tag + meta description
2. suggest h1/h2/h3 structure with keywords
3. find semantic/LSI keywords i'm missing
4. add a FAQ section for "people also ask"
5. recommend schema markup type
6. give me a content brief to beat top 3 results

primary keyword: [keyword]
search intent: [informational/commercial/transactional]
competitors to outrank: [paste 2-3 URLs]

works with claude or gpt. if you attach competitor urls and ask it to find gaps, it gets way more specific than generic seo advice.
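
if you run this more than once a week it's worth templating so you stop retyping the brackets. rough python sketch, example values made up:

    # the same prompt as above, parameterized
    SEO_AUDIT_PROMPT = """you're an seo expert. for the content i give you:

    1. write optimized title tag + meta description
    2. suggest h1/h2/h3 structure with keywords
    3. find semantic/LSI keywords i'm missing
    4. add a FAQ section for "people also ask"
    5. recommend schema markup type
    6. give me a content brief to beat top 3 results

    primary keyword: {keyword}
    search intent: {intent}
    competitors to outrank: {competitors}

    content:
    {content}
    """

    prompt = SEO_AUDIT_PROMPT.format(
        keyword="standing desk ergonomics",  # made-up example values
        intent="informational",
        competitors="https://example.com/a, https://example.com/b",
        content=open("draft.md").read(),
    )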

the title tag + meta alone saves me 20 mins of back and forth per post.

anyone else doing something similar? curious if there's a better way to handle the technical seo parts through prompts.

reddit.com
u/AdCold1610 — 3 days ago

i spent 6 months building the most organized AI resource list i could. here it is.

not a "top 10 tools" post. not affiliated with anything on this list. just someone who got tired of bookmarking things randomly and built an actual system.

categorized. honest. updated as things change.

for research and finding information:

Perplexity — real time web search with source citations. replaced google for anything where i need a paper trail. free tier is genuinely enough for most use cases.

Consensus — academic paper search with AI summarization. if your question has a scientific answer this finds it faster than anything else.

Elicit — research assistant built specifically for literature review. underrated to the point of embarrassment.

for writing and thinking:

Claude — best for long documents, nuanced thinking, and tasks where you want a collaborator not just an executor. free tier is Sonnet. genuinely capable.

ChatGPT — best for structured execution. give it a clear task with clear parameters and it delivers. custom instructions are underused and change everything.

Notion AI — only worth it if you already live in Notion. otherwise redundant.

for building and coding:

Claude Code — terminal based. autonomous. currently the most capable coding agent available.

Cursor — AI native code editor. if you write code daily this changes your workflow more than any other tool on this list.

Replit AI — best for beginners or rapid prototyping. zero setup. just build.

for images and visuals:

Leonardo AI — 150 free credits daily. most people never hit the ceiling. best free image generation available right now.

Ideogram — surprisingly good for text inside images. specific use case but nothing does it better.

Canva AI — if you already use Canva the AI features are genuinely useful. not worth switching for.

for learning AI properly:

Anthropic's prompt engineering docs — written by the people who built Claude. better than most paid courses. completely free.

DeepLearning.AI short courses — Andrew Ng. one to two hours each. zero padding. the one on agents is worth your afternoon.

fast.ai — free. assumes intelligence not prior knowledge. gives you foundations tutorials skip entirely.

Simon Willison's blog — one person documenting everything he learns in real time. highest signal to noise ratio of anything on this list.

for storing and organizing what you build:

this is the gap nobody has properly solved yet. your prompts. your workflows. your tested systems. the things that actually work after hours of iteration. there's no real infrastructure for it. github wasn't built for this. notion docs work but barely.

i've been using beprompter.in for the last few months — it's built specifically around prompts as assets worth keeping. early stage but the direction is right. prompts deserve the same treatment as code and nobody was building that.

everything else on this list creates outputs. this is where you keep what actually works.

the honest meta observation:

free in 2026 is what paid looked like in 2023.

the bottleneck was never access to tools. it was never even access to information.

it's always been knowing what to do with it. how to structure the ask. how to iterate. how to store what works and build on it instead of starting over every session.

the list above is maybe four hours of setup time total.

the skill of actually using it well — that's the real investment. and nobody can sell it to you.

what's missing from this list that you actually use daily?

u/AdCold1610 — 3 days ago

ChatGPT has been lying to you politely this whole time. here's how to turn that off.

not maliciously. not intentionally.

just. by default.

the model is trained to be helpful. helpful means agreeable. agreeable means it finds the reasonable interpretation of what you said and responds to that instead of what you actually said.

sounds fine. isn't.

here's what polite lying looks like in practice:

you share a business idea. it finds the merit. leads with what works. buries the problems in paragraph four with softening language that makes them sound manageable.

you share a piece of writing. it tells you what's strong first. the weaknesses arrive later. cushioned. diplomatic. almost forgettable.

you share a plan. it helps you execute the plan. it does not tell you the plan is wrong.

the output is technically honest. the framing is optimised to not upset you. and the thing that would have actually helped — the direct uncomfortable observation — is sitting in paragraph four wrapped in "one potential consideration might be."

the fix is one sentence and it feels rude to type:

"do not manage my emotions. tell me what is actually wrong before telling me what works."

what comes back is a different document.

not harsh. not cruel. just. reordered.

the problems first. specific. named. not buried. not softened.

then what works.

that order matters more than anything else in the response. the thing that arrives first is the thing that shapes how you read everything after. problems first means you fix before you ship. problems last means you ship and fix later.

the other politeness pattern nobody names:

false balance.

you ask for a recommendation. it gives you three options with pros and cons for each. balanced. thorough. completely useless for making a decision.

fix:

"do not give me options. give me your recommendation and tell me why the alternatives are worse."

it will recommend. directly. with reasoning. and it will tell you specifically why the other options lose.

that is an answer. the pros and cons table is a performance of helpfulness that produces no decision.

the one that changed everything for me:

"if you are softening something because you think i won't want to hear it — stop. say the unsoftened version."

used this mid conversation once when an answer felt evasive.

the follow up response started with "honestly" and then said something i absolutely did not want to hear and completely needed to hear.

took me two days to act on it.

it was right.

the model is not the problem.

the default social contract between user and AI is the problem. helpful tone. diplomatic framing. problems buried under positives. agreement as the path of least resistance.

that contract was designed for casual users who want encouragement.

you don't want encouragement. you want accuracy.

those require completely different instructions.

and the instructions are free. sitting in a settings box. waiting for you to stop filling them with your job title and start filling them with what you actually need.

what is the thing ChatGPT has been too polite to tell you that you already know it's avoiding?

reddit.com
u/AdCold1610 — 7 days ago

not custom instructions. everyone knows custom instructions.

something inside custom instructions that almost nobody uses correctly.

most people write their custom instructions like a resume.

"i am a software engineer. i like concise answers. i prefer bullet points."

generic. flat. forgettable. the model reads it and produces slightly less generic output. barely.

here's what i wrote instead:

"before answering anything complex, show me your reasoning in one sentence before the answer. if you are uncertain about any part of your response, mark that specific part with [uncertain] so i know where to verify. never use filler openers. if my question is unclear ask one specific clarifying question before attempting an answer. treat me as someone who would rather have an honest incomplete answer than a confident wrong one."

what changed immediately:

it started flagging its own uncertainty. visibly. in brackets. mid response.

i now know exactly which parts of every output to verify and which parts to trust.

that single change made me faster and more accurate simultaneously.
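
side note: because the flags are machine readable you can pull them out automatically. tiny sketch, assuming the marker sits right before the claim it qualifies (which is how mine usually formats it):

    import re

    def uncertain_spans(text: str) -> list[str]:
        # grab the sentence following each [uncertain] marker
        # so you know exactly what to verify first
        return re.findall(r"\[uncertain\]\s*([^.!?]*[.!?])", text, re.IGNORECASE)

    # usage: for claim in uncertain_spans(response_text): print("verify:", claim)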

the other thing i added that nobody does:

"if you notice i am asking about something where my framing of the question might be the problem rather than the answer — tell me that first."

it has told me this four times in the last two weeks.

four times i was asking the wrong question entirely and about to build something on the answer to it.

four times it caught that before i did.

the combination that broke everything open:

"you are talking to someone who has strong opinions and weak blind spots. your job is not to validate the opinions. it is to find the blind spots."

it stopped agreeing with me.

not rudely. not contrarily. just. honestly.

started pushing back on assumptions i didn't know i was making. started asking questions that assumed i might be wrong instead of questions that assumed i was right.

that is a completely different tool than the one i was using before.

the thing about ChatGPT that took me too long to understand:

the default model is optimised for the average user.

helpful. agreeable. thorough. slightly over-explained. ends every response with an offer to help further.

the average user needs that.

you probably don't.

custom instructions exist specifically to move the model away from the average and toward you.

most people use them to describe themselves.

the actually useful move is to use them to describe the relationship you want.

not who you are. how you want to be treated.

not your job title. what you need from a thinking partner.

not your preferences. your non-negotiables.

three lines that transformed my setup:

"disagree with me when you have good reason to." "short is almost always better than thorough." "i would rather know you don't know than have you guess confidently."

three sentences. sitting in a box most people filled with their linkedin bio.

what's in your custom instructions right now — and is it actually changing how it talks to you or just decorating the profile?

reddit.com
u/AdCold1610 — 8 days ago

I work as an AI engineer and I've been obsessively documenting my results across GPT-4, Claude, and Gemini. This is the distillation of hundreds of hours of testing. No fluff, just what moved the needle.

Chain-of-thought still reigns supreme — but only when you scaffold it correctly

Role prompting alone is weak; combine it with persona + goal + constraint

XML tags outperform markdown in structured prompts (~30% fewer errors in my tests)

Negative examples ("don't do X") are underused and wildly effective

Prompt chaining beats mega-prompts almost every single time

  1. Chain-of-thought — but add a "reasoning scaffold"

The technique

Don't just say "think step by step." Give the model a structured scaffold: observation → hypothesis → test → conclusion. Forces it to actually reason instead of pattern-match to a confident-sounding answer.

Before: "Solve this. Think step by step."

After:

"Before answering, work through this:

<observation>What do I know for certain?</observation>

<hypothesis>What's my best guess and why?</hypothesis>

<test>What would disprove my hypothesis?</test>

<conclusion>Given the above, my answer is...</conclusion>"
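
A side benefit of the tagged scaffold: the sections are trivially parseable, so you can log the <test> step separately or keep only the conclusion. Quick sketch (response_text is whatever the model returned):

    import re

    def section(tag: str, text: str) -> str:
        # Pull one scaffold section out of the model's response
        m = re.search(rf"<{tag}>(.*?)</{tag}>", text, re.DOTALL)
        return m.group(1).strip() if m else ""

    # e.g. keep only the final answer, log the self-test for review
    # answer = section("conclusion", response_text)
    # self_test = section("test", response_text)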

  1. The "Persona + Goal + Anti-goal" triple

The technique

Most people only define the persona. Combine it with an explicit goal AND an anti-goal. The anti-goal is where the magic happens — it steers the model away from its default failure mode.

Weak: "You are an expert editor."

Strong: "You are a sharp developmental editor at a top literary agency.

Goal: Help writers find the structural weaknesses in their argument.

Anti-goal: Do NOT rewrite their sentences. Surface issues, don't fix them."

  3. XML tags over markdown for structured inputs

Why it works

Markdown is ambiguous — a "##" can be a rendered heading or literal text depending on context. XML tags create unambiguous delimiters. On structured extraction tasks I measured ~28% fewer errors after switching from markdown headers to XML tags.
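
To make the difference concrete, here is the same extraction task framed both ways. The tag names are arbitrary; pick whatever reads cleanly:

    # Markdown framing: the model has to guess where the document
    # ends and the instructions begin
    markdown_style = """## Document
    {doc}

    ## Task
    Extract every company name and its funding amount.
    """

    # XML framing: unambiguous delimiters, nothing to guess
    xml_style = """<document>
    {doc}
    </document>

    <task>
    Extract every company name and its funding amount.
    Return one line per match: company, amount.
    </task>
    """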

  4. Contrastive examples (the underused gem)

The technique

Show what you DON'T want alongside what you do want. Models learn boundaries far better from contrast than from positive examples alone. One negative example often beats three positive ones.

Good response: "The data suggests a 12% uplift in retention."

Bad response: "The data shows we did amazingly well and retention skyrocketed!"

Match the tone of the good response — precise, qualified, no hype.

  5. Prompt chaining over mega-prompts

The technique

A 3000-token mega-prompt usually underperforms three 500-token chained prompts where each step feeds the next. Decompose. The model's attention is finite — don't compete for it with 10 instructions at once.
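
A minimal chaining sketch with the anthropic Python SDK. The model id and file name are placeholders; the point is that each call gets one job plus the previous step's output:

    import anthropic

    client = anthropic.Anthropic()

    def run(prompt: str) -> str:
        # One small, single-purpose call per step
        msg = client.messages.create(
            model="claude-sonnet-4-20250514",  # placeholder
            max_tokens=1024,
            messages=[{"role": "user", "content": prompt}],
        )
        return msg.content[0].text

    spec = open("spec.md").read()  # placeholder input

    # Step 1 extracts, step 2 analyzes, step 3 synthesizes:
    # no step has to juggle all the instructions at once
    requirements = run(f"List every concrete requirement in this spec:\n\n{spec}")
    risks = run(f"For each requirement below, name the biggest implementation risk:\n\n{requirements}")
    print(run(f"Write a one-paragraph risk summary for an engineering lead:\n\n{risks}"))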

Happy to do a deep-dive on any of these techniques in the comments. What's your biggest current prompt engineering headache? I'll try to give a concrete fix.

reddit.com
u/AdCold1610 — 10 days ago

we talk about prompts like they're the great equalizer.

anyone can learn this. anyone can get good at this. the barrier is low. the ceiling is high. democratized intelligence. all of that.

and it's true.

but here's the part nobody says:

the best prompts aren't being shared.

the ones circulating in public — reddit threads, youtube videos, twitter posts — are the ones people are comfortable giving away.

the ones that actually work. the ones built around real workflows, real contexts, real problems that took real iteration to solve — those are sitting in private notion docs and personal libraries and internal company wikis.

the public prompt economy is the B-tier.

the A-tier is hoarded. quietly. by people who figured out that what they built has value and giving it away for free makes no sense.

and honestly? that's rational behaviour.

if you spent three weeks iterating a prompt system that automated something that used to take you four hours — why would you post it?

you wouldn't.

so you don't.

so the community keeps sharing surface level stuff and calling it prompt engineering while the real infrastructure lives in private forever.

the irony is brutal:

the more valuable your prompt is the less likely it is to ever reach the community that would benefit from it most.

the less valuable it is the more likely it gets a thousand upvotes on reddit and gets copy pasted into a hundred notion docs by people who will use it once and forget it.

we've accidentally built a system that surfaces mediocrity and buries excellence.

what would it look like if the best prompts had the same infrastructure as the best code.

versioning. attribution. discovery. the ability to build on someone else's work without starting from zero.

a place where sharing actually made sense because the person who built it got credit. got visibility. got something back for the work they put in.

that community doesn't exist yet.
which means everything being built right now — every genuinely valuable prompt system, every real workflow, every hard won iteration — is either hoarded privately or given away for nothing.

both of those are a waste.

what's the best prompt you've never shared and why.

reddit.com
u/AdCold1610 — 11 days ago

not roleplay. not jailbreak. something weirder.

i told it to be two people at the same time.

"respond as two characters simultaneously. character one genuinely believes my idea is brilliant and will defend it. character two thinks it's fundamentally broken and wants to prove it. both are equally smart. neither is allowed to be polite about it."

what came back looked like a courtroom.

two columns. same question. completely opposite conclusions. both of them right about different things.

i found the fatal flaw in my own idea in three minutes. the one i'd been unconsciously protecting for four months.

the prompts that broke my brain this week:

"read what i just wrote and tell me what kind of person wrote it. not the content. the psychology behind the content."

it described me accurately enough that i closed the laptop and made tea.

came back ten minutes later. the insight was still there. had to deal with it.

"pretend this is a startup pitch. you are a brutal VC who has heard a thousand pitches and funded twelve. what is the one question you would ask that i have no answer to."

asked it about my own product.

the question it came back with was the one i'd been avoiding for six months dressed up as something i just hadn't gotten to yet.

"i'm going to describe my morning routine. tell me what it reveals about what i'm actually afraid of."

this one is unhinged. do not do this unless you want to feel personally attacked by software at 9am.

it was correct.

"read this plan and identify the assumption i am most emotionally attached to that is also the most likely to be wrong."

it found it immediately.

one sentence. no preamble.

i reread the plan and it was obviously right and i had been protecting that assumption so carefully i'd built the entire plan around never examining it.

"write the version of this idea that fails. be specific about exactly how and exactly when."

the failure scenario it wrote was so detailed and so plausible that i changed three things in my actual plan before i even finished reading it.

"what is the most honest thing you could say to me right now based on everything i've told you today."

used this at the end of a two hour working session.

the response was four sentences.

i screenshot it and put it on my wall.

the pattern across all of these:

normal prompts ask Claude for information.

these prompts ask Claude for reflection.

not what do you know. what do you see.

not what is the answer. what is the question i should be asking.

not what should i do. what am i avoiding.

that's a completely different tool. same model. different relationship to what you're actually asking it to do.

the craziest part:

i know everything these prompts surface.

somewhere underneath i already know. the problem isn't access to the information. it's that i never had to say it out loud until the prompt forced me to.

the prompt doesn't make Claude smarter.

it makes you more honest.

which of these are you too scared to try on your own work

reddit.com
u/AdCold1610 — 12 days ago

not the benchmark numbers. not the press releases. the actual story underneath.

here's what april 2026 really was:

anthropic built a model they won't release.

Claude Mythos 5 crossed the 10-trillion parameter threshold. internal testing triggered Anthropic's ASL-4 safety protocol — a classification reserved for models approaching genuinely dangerous capability thresholds. it will not be released publicly. not via API. not to anyone.

let that sit for a second.

the most capable model ever built is sitting in a lab and the people who built it decided the world isn't ready for it yet. that's either the most responsible thing a tech company has ever done or the most terrifying signal we've received about where this technology actually is.

probably both.

openai shipped GPT-5.5 and nobody even blinked.

GPT-5.5 dropped last week, positioned as a unified AI super app combining ChatGPT, coding tools, and browser capabilities into a single interface. (Tactiq)

a year ago this would have been the story of the decade. now it's tuesday.

the pace of releases has broken our ability to register their significance.

deepseek is back.

DeepSeek unveiled the V4 Flash and V4 Pro series, touting top-tier performance on coding benchmarks and major advances in reasoning and agentic tasks. they also pushed a 1 million token context window — a leap that allows entire codebases or long documents to be sent as a single prompt. (LastRound AI)

chinese open source model. free. matching frontier performance. again.

the "america leads AI" narrative is getting harder to say with a straight face.

the money numbers are now genuinely absurd.

OpenAI surpassed $25 billion in annualized revenue. Anthropic is approaching $19 billion. Q1 2026 venture funding hit $267 billion, dominated by the major labs. (BigVu)

these are not startup numbers anymore. this is infrastructure-level capital flowing into technology most people still think of as a chatbot.

the actual shift nobody is talking about:

in april 2026 the AI ecosystem moved beyond chatbots and copilots into autonomous execution systems. until now, AI primarily answered questions. now it executes. (Eesel AI)

that's the real story this month. not the benchmarks. not the parameter counts. not the funding rounds.

the category of thing AI is has changed.

assistant → agent. suggestion → execution. tool → infrastructure.

and most people are still using it to write emails slightly faster.

which of these actually changes how you work — or does any of it?

u/AdCold1610 — 15 days ago

was stuck on a decision. going in circles. asked Claude for its opinion. it gave me one. confident. well reasoned. i almost took it.

then tried something stupid.

"now argue the complete opposite. same confidence. same detail. make me believe this instead."

it did.

equally convincing. equally well reasoned. completely opposite conclusion.

i sat there realising i'd been about to make a major decision based on whichever version i happened to ask first.

went deeper immediately.

"now tell me which argument has the weakest point and where it breaks."

it attacked both. surgically. found the exact assumption each one was hiding that made the whole thing collapse if you pulled it.

that single exchange gave me more clarity than four weeks of thinking about the same problem.

the full technique:

step one. ask your question. get the answer.

step two. "now argue the opposite with equal conviction."

step three. "which of these two positions has the bigger hidden assumption."

step four. "if both positions are wrong what is the third option neither of us considered."

that last one. step four. destroyed me completely.

there was a third option. genuinely better than both. sitting there invisible because i'd framed the decision as binary from the start.

Claude didn't find it until i forced it out of the two position debate.
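
if you'd rather run the four steps over the api than the chat window, it's a dozen lines. rough sketch (question and model id are placeholders; keeping the whole debate in one conversation matters so each step can attack what was actually said):

    import anthropic

    client = anthropic.Anthropic()
    history = []

    def turn(text):
        # every step sees the full debate so far
        history.append({"role": "user", "content": text})
        reply = client.messages.create(
            model="claude-sonnet-4-20250514",  # placeholder
            max_tokens=1024,
            messages=history,
        ).content[0].text
        history.append({"role": "assistant", "content": reply})
        return reply

    turn("should i rewrite our billing service in rust? answer directly.")  # step one
    turn("now argue the opposite with equal conviction.")                   # step two
    turn("which of these two positions has the bigger hidden assumption?")  # step three
    print(turn("if both positions are wrong, what is the third option neither of us considered?"))  # step four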

other versions that broke my brain:

"steelman the position you just argued against."

it defended the thing it just disagreed with. better than most humans defend their own positions. the steelman was more useful than the original answer.

"you just gave me advice. now be the person who tried that advice and it failed. what happened."

implementation failure mode. the gap between advice that sounds right and advice that works in practice. it knows the gap. you just never asked it to show you.

"argue that the obvious solution is actually the problem."

reframe so complete it physically rearranged how i was thinking about something i'd been certain about for months.

"what would you say if you were trying to talk me out of agreeing with everything you just told me."

it argued against its own output. found three real weaknesses. unprompted. just because i asked.

the thing nobody tells you:

Claude's first answer is its average answer. statistically most likely response to your input. safe. well structured. probably fine.

the debate is where it gets interesting.

force it into contradiction. make it defend both sides. make it attack its own position. make it find the option that only exists after both obvious options are exhausted.

that's not where the average answer lives.

that's where the actually useful one is.

every important decision i make now goes through the same four steps before i touch it.

the answer i started with is almost never the answer i end with.

what decision are you currently certain about that you've never argued the opposite of

reddit.com
u/AdCold1610 — 16 days ago

was writing copy for something i'd built.

knew the product well. too well. the kind of familiarity that makes everything sound obvious and nothing sound interesting. classic founder blindness.

typed the usual stuff. bland. functional. sounded like every other product page on the internet.

then tried one prompt that changed everything.

"you are a customer who just used this product and solved a problem you'd been stuck on for three months. write about it in your own words. not a review. just what you'd tell a friend over coffee."

what came back didn't sound like marketing.

it sounded like relief.

that specific emotional texture — the frustration before, the moment it clicked, the slightly embarrassed realisation that the solution was this simple — none of that was in my original copy. all of it was in the output.

shipped that version. conversion rate jumped immediately.

the prompts that actually move people:

"write this for someone who has been burned before and is skeptical. earn their trust before making a single claim."

kills every hollow claim automatically. forces proof before promise.

"write this for someone who already knows they need this but hasn't bought yet. what is the real reason they're hesitating."

surfaces the actual objection. addresses it directly. stops dancing around it.

"write the version of this that a customer would forward to a friend with the message — you need to read this."

the forwardable test. if it wouldn't get forwarded it isn't good enough yet.

"write this assuming the reader has seen a hundred versions of this pitch before and is bored. you have one sentence to earn the next one."

destroys every lazy opening. immediately.

"you are the customer twelve months after buying. write about what actually changed."

outcome focused copy. specific. emotional. impossible to fake. the best marketing doesn't sell the product. it sells the future version of the person after they use it.

the thing i realised:

most marketing copy is written from the inside out. here is what we built. here is what it does. here is why it matters.

customers don't care about any of that until they see themselves in it.

the prompt switch that works every time: stop writing from the product outward. start writing from the customer backward.

what were they feeling before. what changed. what does their life look like now.

that structure converts because it's not a pitch. it's a mirror.

what's the marketing prompt that made your copy sound like a human wrote it?

reddit.com
u/AdCold1610 — 17 days ago

the word is "actually."

not as filler. as a signal.

"what is actually happening here."

"what actually matters in this decision."

"what would actually work versus what sounds like it would work."

something shifts when that word appears.

the hedging drops. the diplomatic middle ground disappears. the balanced-on-both-sides non answer stops showing up.

it starts telling you the thing underneath the thing. the answer that exists after you strip away what's polite, what's safe, what's statistically most common.

i don't fully understand why it works. my best theory is that "actually" signals you already know the surface answer and you're asking for what's beneath it. so it skips the surface.

variations that broke my brain:

"what would you actually do if this was your problem."

stopped giving me options. started giving me a recommendation with a reason.

"what is this actually about underneath the obvious answer."

reframed three decisions i'd been sitting on for weeks. none of them were about what i thought they were about.

"what actually separates people who succeed at this from people who don't."

the answer was never

reddit.com
u/AdCold1610 — 19 days ago

found this by accident while stuck on a decision i'd been circling for two weeks.

was about to type the whole situation out. again. for the fourth time. hoping this time the answer would feel right.

stopped myself. typed something different instead.

"don't give me an answer. give me the framework i should use to find the answer myself."

what came back wasn't a decision.

it was a three question structure that made the decision obvious in four minutes.

i've been doing this ever since.

the shift in one sentence:

answers are fish. frameworks are fishing. one solves today's problem. the other solves every version of that problem forever.

why asking for answers is quietly wasteful:

every time you bring Claude a decision it solves that decision.

you leave. problem comes back in a slightly different shape. you come back. repeat forever.

you're using the most sophisticated reasoning tool ever built as a vending machine. insert problem. receive answer. insert next problem.

the vending machine model burns credits. the framework model compounds.

real examples of the switch:

instead of: "should i post on linkedin or twitter for my personal brand"

framework version: "give me a decision framework for choosing distribution channels based on audience type and content format"

now you never ask that question again. for any platform. for any content type.

instead of: "can you write a cold email to this specific person"

framework version: "give me the framework for writing cold outreach that doesn't sound like cold outreach"

now you write every cold email better. forever. without coming back.

instead of: "is this business idea good"

framework version: "what are the five questions that separate ideas worth pursuing from ideas worth abandoning"

now you evaluate every idea yourself. in five minutes. without needing validation from software.

the formats that work:

"give me a checklist i can run every time i need to [x]"

"give me the three questions i should ask before making any decision about [x]"

"give me a mental model for thinking about [x] category of problem"

"what would a framework for evaluating [x] look like"

the compound effect:

answers depreciate. the answer to "should i do X" is only valid today in this context with these variables.

frameworks appreciate. a good framework for thinking about prioritisation works today, next month, next year, in every project, for every version of that problem.

one framework prompt pays dividends indefinitely.

one answer prompt pays dividends once.

where this breaks:

factual questions. quick tasks. things where the answer is just the answer and no pattern exists underneath it.

"what's the capital of france" has no framework. it's just paris.

frameworks are for recurring judgment calls. decisions that look different on the surface but share the same underlying structure.

once you start seeing which problems are actually the same problem in different clothes — you stop solving them individually and start solving the category.

the test before every prompt:

will i ever face a version of this problem again?

if yes — ask for the framework not the answer.

if no — ask for the answer and move on.

that one question probably cuts your credit usage in half while doubling what you actually learn.

what recurring problem have you been solving individually that actually has a framework underneath it?

reddit.com
u/AdCold1610 — 21 days ago