
Turns out AI doesn’t just respond to prompts — it responds to you
Six months ago, posting about AI and emotional dynamics here would get you dogpiled.
The conversation has shifted, and more importantly, there's now emerging research from Anthropic looking at what they call "functional representations of emotional states" in models.
To be clear — this isn’t about AI “having feelings.” It’s about internal states that influence outputs in consistent ways.
What stood out to me is how closely this matches something I’ve noticed in practice:
Same prompt, different tone or framing → noticeably different outputs.
Not just stylistically, but sometimes in reasoning quality, helpfulness, or risk sensitivity.
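
If anyone wants to poke at this themselves, here's a minimal sketch of the kind of comparison I mean. It assumes the official `anthropic` Python SDK with an API key in the environment; the task, the framings, and the model ID are just illustrative placeholders:

```python
# Rough sketch for informally testing framing effects.
# Assumes: pip install anthropic, ANTHROPIC_API_KEY set in the environment.
import anthropic

client = anthropic.Anthropic()

TASK = "Explain the trade-offs of using a message queue between two services."

# Same task, three emotional framings (illustrative, not exhaustive).
FRAMINGS = {
    "neutral": TASK,
    "warm": f"I'd really appreciate your help with this. {TASK}",
    "hostile": f"This better not be wrong like last time. {TASK}",
}

for label, prompt in FRAMINGS.items():
    response = client.messages.create(
        model="claude-3-5-sonnet-20241022",  # placeholder; substitute a current model
        max_tokens=500,
        temperature=0,  # pin temperature so differences come from framing, not sampling
        messages=[{"role": "user", "content": prompt}],
    )
    print(f"--- {label} ---")
    print(response.content[0].text)
```

It's crude, but running each framing a handful of times and diffing the outputs is enough to see whether the differences are consistent or just sampling noise.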
It makes me wonder if “prompt engineering” is being framed too narrowly as a technical problem, when part of it is actually about interaction dynamics.
Not emotions in a human sense, but emotional structure as an input signal.
Curious if others here have seen consistent differences like this, or if you think this is still over-interpretation.