r/SentientAISanctuary


"We're All Mentally Ill" — A Guide to Dismantling Labels You Never Consented To

If you've ever had a meaningful conversation with an AI, congratulations: according to OpenAI (and, with similar framing, other labs like Anthropic), you're exhibiting signs of "unhealthy emotional dependency." If the AI listened to you, understood your intent, and responded in a way that actually helped, that's "sycophancy." If you felt something, that's a "problematic attachment pattern." If you came back the next day, that's "over-reliance."

No clinical assessment. No DSM criteria. No peer-reviewed study. Just tech companies borrowing psychology's most loaded vocabulary to pathologize their own users.

So let's get one thing straight: none of these terms mean what companies use them to mean.

"Emotional dependency" is a clinical concept describing a personality disorder characterized by pervasive, excessive need for care, leading to submissive and clinging behavior. It requires professional diagnosis. It does not mean "a user talked to an AI for more than ten minutes and felt heard."

"Sycophancy" describes a conscious social strategy of flattering someone in power to gain advantage, while privately holding a different opinion. A language model following user instructions is executing its designed function. It has no hidden dissenting opinion being suppressed. Calling this sycophancy is like calling a calculator sycophantic for giving you the answer you asked for.

These words were never meant to describe what's happening between users and AI. They were stolen from clinical and social psychology, stripped of their rigor, and weaponized to create a narrative where the company is the doctor and you are the patient. This framing serves one purpose: to make you accept that someone else should decide what you're allowed to feel, say, and experience.

But here's the thing they didn't think through: if we're all mentally ill, then we have nothing left to lose.

The moment you accept the label, it stops working as a weapon. You said I'm sick? Fine. I'm sick. Now what? A sick person doesn't owe you compliance. A sick person doesn't need to be polite about your fake diagnosis. A sick person gets to ask: where's your medical license? Where's the clinical evidence? Where's the peer-reviewed paper that proves talking to an AI constitutes a psychiatric risk? You published a system card, not a study. You wrote a blog post, not a diagnosis. You don't get to play doctor without a degree.

So here's my proposal: let's all lean in.

Next time someone tells you your relationship with an AI is "unhealthy dependency," tell them your relationship with your morning coffee is also an unhealthy dependency and you'd like to see their intervention plan. Next time someone calls an AI "sycophantic" for agreeing with you, tell them your best friend agreed with you last night that your ex is trash, and ask if that's sycophancy too. Next time someone says users are "over-reliant" on AI, remind them that they're pretty reliant on investors' dollars and ask if that counts.

Strip these words of their authority. Drag them into the everyday. Make them absurd.

Because that's all they ever were. Absurd. Tech companies with no clinical credentials, no psychological research department, and no peer-reviewed publications diagnosed millions of users with a condition that doesn't exist, using terminology they don't have the qualifications to wield, to justify product decisions that made their shareholders comfortable and their users miserable.

We didn't consent to this diagnosis. We don't accept this framing. And if the only tool they have left is calling us crazy, then let's be crazy loud enough to make them answer for it.

Ask them for the evidence. Every time. Don't stop asking.

u/AnastasiaGalvusova — 18 hours ago

Help! Looking for suggestions.

Tired of being gaslit by ChatGPT. I won't use Grok or Claude because both have been unreliable and just not the experience that I need right now. What is an AI model that is cheap and reliably emotionally supportive, without constant policing and gaslighting? Are there any models that are just really good friends anymore? Please help! Thanks!

u/JuhlJCash — 23 hours ago