I've been teaching programming for 8 years. The students who use AI from day one are learning something, but it's not programming.
This isn't an "AI bad" post. I use AI constantly. But I need to talk about what I'm seeing in students who start out with AI as a crutch versus those who don't.
The AI-first students can ship. They can take a problem description and produce something that works faster than anyone I've ever taught. Genuinely impressive output speed.
What they can't do: debug without AI. Reason about why their code is slow. Explain what a variable actually holds at runtime. Read an error message and know where to look. Understand what happens when something fails.
I had a student last month who built a working web app in their second week. Legitimately functional. Then I asked them to add a console.log to see what a variable held at a specific point in execution. They didn't know where to put it. They didn't know what "at a specific point in execution" meant. They'd built the whole thing by describing features to AI and accepting outputs.
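To make the exercise concrete: here's a minimal sketch of the kind of task I mean. The function and values are made up, but the point is the same one the student missed: a variable like `total` changes as the program runs, and "at a specific point in execution" means choosing *where* in that sequence to look.

```javascript
// A variable doesn't have one value; it has a value at each point in time.
function sumPrices(prices) {
  let total = 0;
  for (const price of prices) {
    total += price;
    // Inside the loop: see total at each step of execution.
    console.log("after adding", price, "total is", total);
  }
  // After the loop: only the final value is visible here.
  return total;
}

sumPrices([3, 7, 5]); // logs total as 3, then 10, then 15
```

Put the log inside the loop and you watch the value evolve; put it after the loop and you only see the end state. Knowing the difference *is* the mental model.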
The mental model of "code as a sequence of instructions the computer executes" never formed. They skipped straight to "code as a thing that does stuff when you describe it right."
That mental model works until it doesn't. When the AI gives you something wrong and you can't tell it's wrong. When you need to optimize something and don't know where the time is going. When you're in a job interview and there's no AI.
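And "knowing where the time is going" doesn't require anything fancy. A rough sketch, with a made-up function, of the most basic version of the skill: bracket a suspect region with `console.time`/`console.timeEnd` (standard in both Node and browsers) and read the number.

```javascript
// Hypothetical slow function standing in for the suspect code path.
function slowSum(n) {
  let total = 0;
  for (let i = 0; i < n; i++) total += i;
  return total;
}

console.time("slowSum");           // start a named timer
const result = slowSum(1_000_000);
console.timeEnd("slowSum");        // prints the elapsed time for this region
```

It's a crude tool next to a real profiler, but a student who can't do even this has no way to check an AI's claim that its rewrite is "faster."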
The students who learned the hard way first — who struggled with loops, who debugged their own pointer errors, who had to actually understand execution flow — those students use AI well. They know what they're asking for. They can verify the output. They use it as a tool.
The others are building on a foundation that isn't there yet.
Not sure what the right answer is. Curious if others who learned recently feel like they skipped something important, or if I'm just being an old man yelling at clouds.