
u/karmastanba69

Experimental psychological research on anthropomorphism in human–AI relationships.
Hey everyone,
I just finished my undergrad dissertation, and this was my first time doing any proper experimental research, so I'm sure there are flaws and things I could've done better. I also ended up doing most of it on my own, since my supervisor wasn't very into the topic and didn't think it was "worth it," which honestly just made me more curious about it.
The idea came from something I kept noticing: friends getting weirdly emotional while talking to ChatGPT. Not just using it, but actually feeling understood. That, plus Cyberpunk 2077, made me want to test it properly.
So I set up a small experiment where 15 Gen-Z participants talked to one of two chatbots for about 10–15 minutes:
one was empathetic (supportive, validating, “I understand you” type), and the other was neutral (dry, informational, no emotional tone).
After that, I measured things like trust, emotional connection, how “human” it felt, how much they opened up, etc.
And honestly... the difference was kind of wild.
People who talked to the empathetic chatbot didn’t just say it was nicer — they actually:
- trusted it more
- opened up more emotionally
- felt a stronger connection
- and in many cases described it as feeling human-like
A lot of them wrote more, shared more personal stuff, and seemed more engaged overall.
What really stood out was that about 62% of people in the empathetic condition said it felt human, compared to only about 14% in the neutral one.
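For what it's worth, with a sample this small the "felt human" percentages can be sanity-checked with Fisher's exact test. Here's a minimal Python sketch; note the cell counts are my own inference from the percentages (62.5% fits 5 of 8, 14.3% fits 1 of 7, assuming the 15 participants split 8/7 across conditions), not reported numbers:

```python
from math import comb

def fisher_exact_two_sided(a, b, c, d):
    """Two-sided Fisher's exact test for a 2x2 table [[a, b], [c, d]].

    Sums the probabilities of every table with the same margins that is
    no more likely than the observed one (the usual two-sided definition).
    """
    row1, row2 = a + b, c + d
    col1 = a + c
    n = row1 + row2
    total = comb(n, col1)

    def p_table(x):
        # Hypergeometric probability of x "yes" answers in the first row.
        return comb(row1, x) * comb(row2, col1 - x) / total

    p_obs = p_table(a)
    lo = max(0, col1 - row2)
    hi = min(row1, col1)
    return sum(p for x in range(lo, hi + 1)
               if (p := p_table(x)) <= p_obs + 1e-12)

# "Felt human": 5/8 in the empathetic condition vs 1/7 in the neutral one.
p = fisher_exact_two_sided(5, 3, 1, 6)
print(round(p, 3))  # → 0.119
```

So under those assumed counts the gap is suggestive but not conclusive at n = 15, which is exactly the kind of thing a larger replication would settle. (`scipy.stats.fisher_exact` gives the same p-value if you'd rather not hand-roll it.)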
Another interesting thing: people who reported feeling more lonely were also more likely to connect with the chatbot, trust it, and see it as more human-like. So it’s not just about how the AI behaves — it’s also about what the person brings into the interaction.
The part I can’t stop thinking about is how fast this happens.
In like 10–15 minutes, something that doesn’t feel anything at all can still trigger pretty strong emotional and social responses. It’s almost like once the chatbot hits the right cues (like empathy), the brain just goes “okay, this is a social interaction now.”
I know this is a really small sample and it's my first proper study, but I'd genuinely love any feedback (pls don't be mean TvT), especially on how I could improve the design or take this further. I'm really interested in continuing in cognitive science / AI, so any thoughts would mean a lot.
Experimental Study of Anthropomorphism in Human–AI Interaction
Hi r/cognitivescience, I just completed my first experimental study as part of my undergraduate dissertation. I'm hoping to pursue a Master's in Cognitive Science, so this was my first serious attempt at research (and yeah, definitely expecting flaws). I ended up doing most of it on my own: my supervisor wasn't supportive at all, said the topic wasn't really "worth it," and suggested I not pursue it, which honestly just pushed me to explore it further.

The idea came from seeing my own friends get surprisingly emotional while talking to ChatGPT, plus some inspiration from Cyberpunk-type AI companion ideas. So I built two chatbot conditions myself, an empathetic chatbot (supportive, validating) and a neutral chatbot (dry, informational), and had 15 Gen-Z participants interact with one of them for 10–15 minutes.

What I found was actually pretty striking. The empathetic chatbot didn't just feel "nicer"; it significantly changed how people perceived the interaction. Participants in that condition reported higher perceived empathy (M = 6.02 vs 4.11, d = 1.77), higher social presence (M = 5.81 vs 3.97, d = 1.55), greater anthropomorphism (M = 5.46 vs 3.78, d = 1.36), more emotional self-disclosure (M = 5.89 vs 4.21, d = 1.56), and higher affective trust (M = 5.94 vs 4.29, d = 1.52) than participants in the neutral condition. Behaviorally, they also wrote more (~452 vs 298 words) and shared more personal themes (3.25 vs 1.86). What really stood out was that 62.5% of participants in the empathetic condition described the chatbot as feeling "human," compared to just 14.3% in the neutral condition.

Another interesting pattern: baseline loneliness correlated with anthropomorphism (r = .52), emotional connection (r = .61), and trust (r ≈ .49), suggesting that it's not just the AI's design but also the user's internal state that shapes how "real" the interaction feels.

The part I find most fascinating (and slightly unsettling) is how fast this happens. Within just 15 minutes, a system that doesn't feel anything can still trigger strong social-cognitive responses, almost as if the brain accepts it as a social agent once the right cues are present.

This is obviously a small sample and my first time doing experimental research, but I'd genuinely love feedback on how to improve or build on this, especially as someone who wants to continue in cognitive science. If anyone's interested, I can share the full dissertation here.
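For anyone checking the effect sizes: Cohen's d for two independent groups is the difference in condition means divided by the pooled standard deviation. A quick Python sketch; the SD value below is hypothetical (the post reports only means and d), chosen so the result lands near the reported d = 1.77 for perceived empathy:

```python
from math import sqrt

def cohens_d(m1, m2, sd1, sd2, n1, n2):
    """Cohen's d for two independent groups, using the pooled SD."""
    pooled_var = ((n1 - 1) * sd1**2 + (n2 - 1) * sd2**2) / (n1 + n2 - 2)
    return (m1 - m2) / sqrt(pooled_var)

# Perceived-empathy means from the post (6.02 vs 4.11); SD = 1.08 is a
# back-of-envelope guess, and 8/7 is an assumed split of the 15 participants.
print(round(cohens_d(6.02, 4.11, 1.08, 1.08, 8, 7), 2))  # → 1.77
```

Worth noting that effect sizes this large from n = 15 come with very wide confidence intervals, so reporting those intervals alongside d would strengthen the write-up.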