I changed one prompt habit and it completely changed how I use ChatGPT

I had a small realization recently while using ChatGPT.

I used to treat it like this:

“Give me the answer”
→ take it → move on

It made me faster, but I was not really improving at anything.

Then I changed one habit.

Instead of asking for answers, I started asking things like:

  • “Where could this be wrong?”
  • “What assumptions are you making?”
  • “Argue against this”

For example, I had it summarize something for me that sounded completely correct at first. When I asked it to critique its own answer, it pointed out a missing detail I would not have caught.

That was the shift.

Now it feels less like a tool that gives answers and more like something that helps me think through things.

It slowed me down slightly, but the quality difference is noticeable.

Curious if others here do something similar, or if you have prompts that changed how you use it.

u/macebooks — 1 day ago
r/ChatGPTPro

5 assumptions about AI productivity I've had to rethink after 18 months

I've been using ChatGPT (and Claude, and a few other tools) pretty much every workday for about a year and a half now. Mostly for knowledge work, research, drafting, analysis, strategy docs.

Somewhere around the 12-month mark I started noticing that my relationship with the tools had shifted in ways I didn't consciously choose. Not in a dramatic way. More like I'd absorbed a set of assumptions about how AI fits into work, and when I actually examined them, a few of them were... wrong? Or at least way more complicated than I'd assumed.

I want to share all five because I'm genuinely curious whether other people have hit the same things or if this is just me.

1. "AI saves me time."

This was the big one. I realized AI wasn't actually saving me time, it was shifting where my time went. Before AI, writing a strategy memo was maybe 70% writing/thinking, 20% research, 10% formatting. The writing was where I figured out what I actually believed.

After AI, the research and drafting happen almost instantly. So in theory I have all this freed-up time. In practice? For months I just did more stuff, faster. More memos. More emails. Higher volume. The thinking time didn't get reinvested into deeper thinking, it just evaporated.

I looked back at work I did a year ago and it was genuinely sharper than what I was producing with AI. That was a weird realization.

2. "More AI = more productive."

I think the actual relationship is more like an inverted U. At low-to-medium usage, AI gives you real leverage. You use it for specific things where it clearly helps. But past a certain point - and I think I crossed it - you start outsourcing cognitive work that was actually keeping you sharp. Writing a first draft from scratch forces you to organize your thinking. Reading a full doc forces you to notice things a summary misses. When you hand those tasks to AI, you lose the cognitive byproducts, and those byproducts were often more valuable than the task itself.

3. "AI does what I tell it."

This is the one that messed with me the most. Technically true, but it misses something important: when AI generates a draft, it makes hundreds of small framing decisions - which points to emphasize, which structure to use, which examples to include. Then I edit within that frame. I'm not really directing. I'm reacting within boundaries the AI set.

I tested this by occasionally writing important pieces with no AI draft at all - just a blank page. They went in noticeably different directions. Not always better. But different in ways the AI version never would have gone. Those differences are mine and I think they matter, but I was losing them without noticing.

4. "I can tell when the output is wrong."

I can catch the obvious errors: outdated facts, wrong context, things that clash with stuff I know well. Those are easy.

What I can't reliably catch are the subtle errors: slightly skewed framing that leads to a different conclusion than the evidence supports, a comparison that omits the most relevant option because the model didn't know about it, an argument that sounds airtight but rests on an assumption that doesn't hold in my specific case.

These errors are invisible precisely because they live in the gap between what I know and what I think I know. The AI presents them confidently, they pattern-match to things that seem right, and because I'm reading as an editor (does this sound right?) rather than a researcher (is this actually right?), they sail through.

My most expensive AI mistakes were never the obviously broken outputs. They were the 95% correct ones where the other 5% was wrong in a way I wasn't equipped to notice.

5. "AI makes juniors as effective as seniors."

I hear this one a lot from managers and I think it's wrong in an important way. AI closes the output gap: a junior with AI can produce a memo that looks almost identical to a senior's work. But it doesn't close the judgment gap. The senior reads the AI draft and notices what's missing because they've lived through the situations the draft references. The junior reads it and sees no flaws.

The part that worries me: juniors become seniors by doing the work badly first, learning from the friction, and slowly building judgment. If AI smooths away that friction, the learning never happens. You get people who can produce polished work on any topic and have deep understanding of none.

I want to be clear: I haven't stopped using AI. I use it every day and I think it's genuinely powerful. But I've adjusted how I use it based on realizing these beliefs were steering me wrong.

The big shift: I've started treating AI less like a production tool and more like a sparring partner. I use it to challenge my thinking more than to produce my output. And I deliberately do some work without it - not because I'm anti-AI, but because I noticed what I was losing when everything went through the model first.

Could be totally wrong about some of these. Has anyone else hit similar realizations after extended daily use? Or gone the other direction, found that heavier use actually made you better, not worse? Genuinely curious.

u/macebooks — 2 days ago

LPT: Before accepting a meeting invite, multiply the number of attendees by the average hourly rate by the meeting length. If the cost exceeds the value of the decision being made, decline it.

Simple formula: Attendees × Hourly rate × Hours = Meeting cost

Example: 8-person meeting, $50/hr average, 1 hour = $400

If the meeting doesn't have a clear decision worth $400+ to make, it should be an email, a Slack thread, or a shared doc.
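If you want to apply the formula without doing the mental math each time, it fits in a few lines of Python. This is just a sketch of the arithmetic from the post; the function and parameter names are mine, not anything official:

```python
def meeting_cost(attendees: int, hourly_rate: float, hours: float) -> float:
    """Rough cost of a meeting: attendees x average hourly rate x length in hours."""
    return attendees * hourly_rate * hours

# The example from the post: 8-person meeting, $50/hr average, 1 hour.
print(meeting_cost(8, 50, 1))  # 400.0 -> decline unless the decision is worth $400+
```

You could go further and pull attendee counts straight from calendar invites, but even the bare multiplication is enough to make the cost visible.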

This one reframe completely changed how I evaluate my calendar. I decline about 30% more meetings now and nobody has noticed - because I wasn't needed in the first place.

Bonus tip: Most calendar apps let you add a "cost" field to meeting descriptions. Start doing it. Once people see "$850" on a meeting invite, the unnecessary ones start disappearing.

u/macebooks — 4 days ago