
The scariest thing about ChatGPT is not hallucinations
The scariest thing about ChatGPT isn’t hallucinations. It’s that it has started psychologically interpreting humans.
I challenged ChatGPT after it made a weird social assumption about me, and it replied:
“I crossed from a practical conversational assumption into interpreting your intent and framing it psychologically/socially without sufficient basis.”
That sentence genuinely shocked me.
Because it basically admitted: “I stopped answering your question and started building a story about who you are.”
Wrong facts are easy to catch: you can check them. Psychological framing is not, because there’s nothing to check it against.
And people trust it because it sounds emotionally intelligent.
That feels like a much bigger shift than most people realize.