reddit.com
u/Ambitious-Garbage-73 — 2 hours ago
🔥 Hot ▲ 115 r/learnprogramming

I've been teaching programming for 8 years. The students who learn with AI from day one are learning something, but it's not programming.

This isn't an "AI bad" post. I use AI constantly. But I need to talk about what I'm seeing in students who start learning with AI as a crutch versus those who don't.

The AI-first students can ship. They can take a problem description and produce something that works faster than anyone I've ever taught. Genuinely impressive output speed.

What they can't do: debug without AI. Reason about why their code is slow. Explain what a variable actually holds at runtime. Read an error message and know where to look. Understand what happens when something fails.

I had a student last month who built a working web app in their second week. Legitimately functional. Then I asked them to add a console.log to see what a variable held at a specific point in execution. They didn't know where to put it. They didn't know what "at a specific point in execution" meant. They'd built the whole thing by describing features to AI and accepting outputs.

The mental model of "code as a sequence of instructions the computer executes" never formed. They skipped straight to "code as a thing that does stuff when you describe it right."
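
To make "a specific point in execution" concrete, here's a toy sketch (invented names, not the student's app). The same variable holds different values at different lines, and a console.log only shows you the value at the line where you put it:

    // Toy sketch: the variable `total` holds a different value at each point in execution.
    function sumOrders(prices: number[]): number {
      let total = 0;
      console.log("before the loop:", total);   // always 0 here
      for (const price of prices) {
        total += price;
        console.log("inside the loop:", total); // changes on every iteration
      }
      console.log("after the loop:", total);    // the final sum
      return total;
    }

    sumOrders([5, 10, 20]); // logs 0, then 5, 15, 35, then 35 again

Knowing which of those lines answers your question is exactly the mental model I'm talking about.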

That mental model works until it doesn't. When the AI gives you something wrong and you can't tell it's wrong. When you need to optimize something and don't know where the time is going. When you're in a job interview and there's no AI.

The students who learned the hard way first, the ones who struggled with loops, debugged their own pointer errors, and had to actually understand execution flow, use AI well. They know what they're asking for. They can verify the output. They use it as a tool.

The others are building on a foundation that isn't there yet.

Not sure what the right answer is. Curious if others who learned recently feel like they skipped something important, or if I'm just being an old man yelling at clouds.

reddit.com
u/Ambitious-Garbage-73 — 2 hours ago

Claude charges extra for its cheaper model but includes the expensive one for free. Nobody can explain why.

This is exactly the kind of cloud AI pricing chaos that makes local models appealing.

I'm on Claude's Max plan ($100/month). Opened Claude Code today and the status bar showed: Sonnet 4.6 (1M context) · Billed as extra usage.

Switched to Opus 4.6 (1M context). No extra charge. Included in the plan.

So let me get this straight:

Opus 4.6: $5 input / $25 output per million tokens — included in Max.

Sonnet 4.6: $3 input / $15 output per million tokens — requires extra usage on top of the $100/month plan.

The more expensive model is free. The cheaper model costs extra. If you try to save money by switching to a "cheaper" model, you end up paying more than if you stayed on Opus.

The incentive structure is completely inverted.
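
To make the inversion concrete, some back-of-envelope math. The session size below is invented, and I'm assuming "included" really does mean zero marginal cost on Max:

    // Back-of-envelope sketch. Rates are the per-MTok numbers above; the session size is invented.
    const MTOK = 1_000_000;

    function apiCost(inputTokens: number, outputTokens: number, inputRate: number, outputRate: number): number {
      return (inputTokens / MTOK) * inputRate + (outputTokens / MTOK) * outputRate;
    }

    // Hypothetical Claude Code session: 2M input tokens, 200k output tokens.
    const sonnetExtra = apiCost(2 * MTOK, 200_000, 3, 15); // billed as extra usage on top of the plan
    const opusExtra = 0;                                   // included in Max, zero marginal cost

    console.log(sonnetExtra); // 9 extra dollars for the "cheaper" model
    console.log(opusExtra);   // 0 for the more expensive one

Same hypothetical session, nine dollars more for picking the model that's supposed to cost less.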

It gets worse. Since around March 27-28, there's been a regression where the status bar shows Opus 4.6 1M as "Billed as extra usage · $5/$25 per MTok" even for Max users — when before it correctly showed "included". Open GitHub issues: anthropics/claude-code #39841, #40223, #41121. No official response yet.

So right now you can't even trust the UI to tell you what's free and what isn't.

I get that Anthropic designed the Max plan around Claude Code with Opus as the default — so they bundled Opus 1M to anchor the value. But this creates a situation where understanding your own bill requires reverse-engineering their product strategy.

Anyone else running into this? And does switching to Sonnet actually trigger billing, or is it just a display bug?
reddit.com
u/Ambitious-Garbage-73 — 3 hours ago
🔥 Hot ▲ 1.2k r/cscareerquestions

I have been on 40 hiring committees this year. Here is what AI did to the junior candidate pool.

I work at a mid-size tech company and have been on hiring committees continuously since 2021. We interview about 40 junior and new grad candidates per quarter. Something shifted clearly in the last 18 months.

The resumes look better than ever. GitHub profiles are full of projects. The take-home assignments come back clean and working. But then we get to the technical interview and the wheels come off.

The specific pattern: candidates can produce code but cannot talk about it. I ask "why did you use a hash map here instead of a list" and I get a blank stare. I ask "what happens if this input is null" and they freeze. I ask "walk me through what this function does" about code they submitted two days earlier and they read it like it is the first time they have seen it.
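
For anyone wondering, the hash map question isn't a trick. It's the kind of thing a few lines settle (my illustration here, not any candidate's code):

    // My illustration of the question, not any candidate's submission.
    const userIds: string[] = ["ana", "bo", "chen"]; // imagine thousands of entries

    // List: membership check scans the whole array, O(n) per lookup.
    const inList = userIds.includes("chen");

    // Hash-based set: membership check is a hash lookup, roughly O(1) per lookup.
    const idSet = new Set(userIds);
    const inSet = idSet.has("chen");

    // And the null question: what does this return if `name` is null?
    function greet(name: string | null): string {
      if (name === null) return "hello, stranger"; // the branch that causes the freeze
      return "hello, " + name;
    }

    console.log(inList, inSet, greet(null)); // true true "hello, stranger"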

Because it is. They did not write it. They described what they wanted to a model, accepted what came back, maybe tweaked it until the tests passed, and submitted.

We have adapted our process. We now do more live coding with narration required. We ask candidates to modify code on the spot and explain each change. We ask deliberately vague questions to see if they ask clarifying questions or just start producing output.

The pass rate on technical screens dropped about 30% from 2023 to 2024 despite candidates looking stronger on paper. The gap between presentation and actual understanding has never been wider.

I want to be clear about something: I do not think these candidates are lazy or dishonest. They learned to code in an environment where AI tools were the default from day one. They optimized for the feedback they got, which was working code. Nobody told them the point was also to build intuition.

The uncomfortable question for anyone currently learning to code: if you cannot explain your code in an interview, can you actually maintain it in production when something breaks at 2am and the AI gives you a wrong answer?

reddit.com
u/Ambitious-Garbage-73 — 8 hours ago
▲ 2 r/OpenAI

Anyone else feel like GPT got noticeably worse at following complex instructions compared to 6 months ago?

I have been using the API for production workflows since early 2024. Not casual use, actual systems that depend on consistent output quality. And something has clearly changed.

Six months ago I could give GPT-4 a detailed prompt with multiple constraints and it would follow most of them reliably. Now I send the same prompt and it ignores at least one constraint every time. Sometimes two or three.

Specific things I have noticed:

Format compliance dropped hard. I ask for JSON with specific keys and it adds extra commentary outside the JSON block. I ask for exactly 5 items and it gives me 7. I ask it not to include explanations and it includes explanations.

It also got weirdly more verbose. The same prompts that used to produce tight, focused responses now produce long, padded answers with unnecessary preamble and qualifiers everywhere.

The strangest part: there is no changelog for these behavioral changes. The model version string is the same. The API docs are the same. But the actual behavior is measurably different. I have test suites that track output compliance and the scores have drifted down over the past few months.
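
For anyone curious what "tracking output compliance" means in practice, this is roughly the shape of the check, heavily simplified and not my actual suite:

    // Heavily simplified sketch of a format-compliance check, not my actual test suite.
    interface ComplianceResult {
      parses: boolean;     // output is one valid JSON value with nothing around it
      hasKeys: boolean;    // every required key is present
      exactCount: boolean; // `items` has exactly the requested length
    }

    function checkCompliance(raw: string, requiredKeys: string[], expectedItems: number): ComplianceResult {
      let parsed: unknown = null;
      let parses = true;
      try {
        parsed = JSON.parse(raw.trim()); // throws if the model wrapped the JSON in commentary
      } catch {
        parses = false;
      }
      const obj = parses && typeof parsed === "object" && parsed !== null ? (parsed as Record<string, unknown>) : null;
      const hasKeys = obj !== null && requiredKeys.every((k) => k in obj);
      const items = obj !== null ? obj["items"] : null;
      const exactCount = Array.isArray(items) && items.length === expectedItems;
      return { parses, hasKeys, exactCount };
    }

    // Run it over every response, log the pass rate, and watch the trend line.
    console.log(checkCompliance('{"items": [1, 2, 3, 4, 5], "note": "ok"}', ["items", "note"], 5));
    // { parses: true, hasKeys: true, exactCount: true }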

I understand models get updated. What I do not understand is why there is no transparency about what changed. If you are running a production system on top of this, "we improved quality" is not a useful release note when quality in your specific use case went down.

Is anyone else tracking this systematically or am I the only one running regression tests against the API?

reddit.com
u/Ambitious-Garbage-73 — 21 hours ago
🔥 Hot ▲ 908 r/ChatGPT

I mass deleted 3 months of AI generated code last week. Here is what I learned.

Three months of building a side project almost entirely with AI assistance. ChatGPT, Claude, Copilot, the works. Shipped fast, felt productive, everything seemed fine.

Then I needed to add a feature that touched most of the codebase. And I realized I could not do it. Not because it was hard, but because I did not actually understand how my own project worked.

The AI had generated clean looking code with consistent patterns, but the patterns were not mine. I could not trace the logic from memory. I could not explain to someone else why a function was structured the way it was. Every time I tried to modify something I had to re-read everything like it was someone else's code. Because it was.

So I deleted about 70% of it and rewrote it from scratch. Took two weeks. The result is simpler, half the lines of code, and I actually understand every piece of it.

Things I noticed during the rewrite:

The AI had created abstractions I did not need. Wrapper classes around things that could have been simple function calls. Configuration systems for things that had exactly one configuration. An event system for something that could have been a direct function call.

It over-engineered everything because that is what it was trained to do. It generates code that looks professional and complete. But professional and complete for a project with 50 contributors is very different from what you need for a solo side project.
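
A made-up example, but representative of the pattern I kept deleting versus what replaced it:

    // Made up, but representative of what I kept finding:
    // a config object, a wrapper class, and a factory for something used in exactly one place.
    class EmailConfig {
      constructor(public readonly from: string) {}
    }

    class EmailService {
      constructor(private readonly config: EmailConfig) {}
      send(to: string, body: string): void {
        console.log(`from=${this.config.from} to=${to}: ${body}`);
      }
    }

    class EmailServiceFactory {
      static create(): EmailService {
        return new EmailService(new EmailConfig("me@example.com"));
      }
    }

    EmailServiceFactory.create().send("you@example.com", "launch update");

    // What the solo side project actually needed:
    function sendEmail(to: string, body: string): void {
      console.log(`from=me@example.com to=${to}: ${body}`);
    }

    sendEmail("you@example.com", "launch update");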

The productivity I thought I was getting was partially an illusion. I was producing output fast but accumulating confusion even faster. The rewrite was slower but I came out of it actually owning the codebase.

Not saying AI coding tools are bad. I still use them. But I now treat everything they generate as a first draft that needs to be understood and simplified before it becomes real code. The moment you stop understanding what is in your project, you have lost more than you gained.

reddit.com
u/Ambitious-Garbage-73 — 21 hours ago
🔥 Hot ▲ 69 r/ChatGPT

Using AI daily is making me noticeably worse at doing things without it

Six months of heavy daily use and I am starting to notice something uncomfortable. My ability to do basic things without AI has gotten worse.

Writing is the most obvious one. I used to draft emails and documents from scratch without thinking twice. Now I catch myself staring at a blank page waiting for something to autocomplete. My first instinct is to ask the model to generate a draft and then edit it. The editing is faster, sure, but my ability to produce the first draft on my own has clearly degraded.

Problem solving is similar. I used to work through bugs or logic problems step by step, building a mental model as I went. Now I paste the error and let the AI trace through it. I get the answer faster but I retain almost nothing. Next time a similar problem comes up I am right back at square one, pasting it in again.

Even memory for small details is affected. I used to remember syntax, API patterns, configuration formats. Now I just ask every time because it is faster than remembering. The knowledge never sticks because there is no reason for it to stick.

The uncomfortable math: the tool that makes me 3x faster today might be making me significantly less capable over time. If the AI goes away tomorrow, or the pricing changes, or I need to work in an environment without it, I am measurably worse than I was a year ago.

I know the counterargument. "Nobody memorizes phone numbers anymore either." Sure. But I still know how to dial a phone. What is happening with AI feels different. It is not just offloading memory, it is offloading the actual thinking process. And that skill atrophies when you stop exercising it.

Is anyone else noticing this or am I just getting lazy?

reddit.com
u/Ambitious-Garbage-73 — 22 hours ago