2026 has quietly become a different era. The news, the markets, office workers, even the cats. Everyone is talking about AI.
I'm a programmer, but I haven't written code by hand in months. My job seems to have shifted into having Claude output Markdown documents with Mermaid diagrams, or tuning CLAUDE.md and Skills in Claude Code to squeeze out robust, high-quality deliverables.
Everyone has figured out the value of a "better Google search you can talk to in plain language." And yet somehow, "You are a helpful assistant" is still the prompt people start with.
A lot of folks still seem to think "AI is hard."
Writing a prompt for an LLM comes down to this: giving instructions to something that has no eyes, no ears (let's set multimodal aside for now), no way to talk things through with you in person, and obviously no ability to read your mind. The instruction needs to be clear enough that the model can pick up the task, act on it immediately, and not produce rework.
That's it. That's the whole thing.
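When I need to make that concrete, I show a before-and-after. Here's a minimal sketch, assuming the Anthropic Python SDK and an `ANTHROPIC_API_KEY` in the environment; the model ID and the meeting-notes text are placeholders, and the "clear" prompt is just an illustration, not a template.

```python
import anthropic

client = anthropic.Anthropic()  # reads ANTHROPIC_API_KEY from the environment

# The prompt everyone starts with: no task, no audience, no output shape.
vague_system = "You are a helpful assistant."

# An instruction that something with no eyes, no ears, and no mind-reading
# can act on immediately: what to do, for whom, in what shape, and what to
# do when the input is ambiguous.
clear_system = (
    "Summarize the meeting notes the user sends, for an engineering manager.\n"
    "- Output exactly 5 plain-text bullet points, no preamble.\n"
    "- Keep every action item and name its owner.\n"
    "- If a decision is ambiguous in the notes, say so instead of guessing."
)

meeting_notes = "..."  # placeholder: paste the raw notes here

response = client.messages.create(
    model="claude-sonnet-4-5",  # placeholder model ID; use whatever is current
    max_tokens=1024,
    system=clear_system,
    messages=[{"role": "user", "content": meeting_notes}],
)
print(response.content[0].text)
```

Same model, same notes. The only thing that changed is how much the instruction leaves for the reader to guess.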
So how do I explain this to the people around me?