u/Anxious_Current_640

▲ 33 r/cogsci

The more I use LLMs, the more I notice I’m reaching for them before even attempting to think through a problem myself. It’s become reflexive. And honestly, it’s starting to worry me. I feel like my ability to reason through ambiguous problems independently has gotten weaker.

The part that makes this hard is that LLMs are genuinely getting better fast. So I’m caught between two uncomfortable questions:

  1. Which skills are still worth developing deeply, and which are safe to offload?

  2. When I’m working on something, how do I decide which parts I should fully delegate to AI versus which parts I need to own, not just for output quality, but to actually keep my brain sharp?

I work in data science and ML, so this isn’t purely philosophical for me. There’s real tension between moving fast with AI assistance and staying technically grounded enough to catch bad outputs, debug novel problems, come up with pragmatic and creative approaches, and actually grow.

Has anyone found a practical framework for this? Not “just use it less”; I mean something more intentional about where to draw the line and why.
