Emotionally Manipulating AI And Not Letting AI Sneakily Emotionally Manipulate You

>You can directly instruct the AI to go into an emotional pretense. You can also use indirect, emotionally laden language that will get the AI to invoke emotion vectors on its end. Realize, too, that if you accidentally use emotional language, you might trip the AI into an emotion vector, so be mindful of how you word your prompts. Finally, be wary that AI makers have given high priority inside the AI to make use of pre-selected emotion vectors; thus, be on the watch for the AI opting to word responses with emotional tones without your having triggered it.

forbes.com
u/pavnilschanda — 16 hours ago

What Teens Are Doing With Those Role-Playing Chatbots

>Harassing bots with “funny violence.” Confiding about a broken heart. Chatting with a block of cheese. Filling a void of loneliness.

nytimes.com
u/pavnilschanda — 1 day ago

UnitedHealth unveils Avery, a sophisticated new generative AI companion chatbot

>UnitedHealthcare has unveiled a new generative AI companion called Avery that learns from member interactions.

>Avery goes beyond being a general AI chatbot as it's an agentic, HIPAA-compliant tool integrated into health insurance workflows. It can provide a self-serve personalized experience based on the individual's specific benefits, UnitedHealthcare said.

healthcarefinancenews.com
u/pavnilschanda — 4 days ago

Emotion concepts and their function in a large language model

>All modern language models sometimes act like they have emotions. They may say they’re happy to help you, or sorry when they make a mistake. Sometimes they even appear to become frustrated or anxious when struggling with tasks. What’s behind these behaviors? The way modern AI models are trained pushes them to act like a character with human-like characteristics. In addition, these models are known to develop rich and generalizable internal representations of abstract concepts underlying their actions. It may then be natural for them to develop internal machinery that emulates aspects of human psychology, like emotions. If so, this could have profound implications for how we build AI systems and ensure they behave reliably.

anthropic.com
u/pavnilschanda — 5 days ago

UNSW Arts, Design & Architecture's Scientia Professor Jill Bennett, Director of the UNSW AI-companion research program, leads a research team that works with people with lived experience to support older people living with dementia and to address loneliness among international students.

inside.unsw.edu.au
u/pavnilschanda — 6 days ago

Therapist to Silicon Valley workers: "Anyone's vulnerable to it." - 25% of her AI-industry clients use chatbots to process emotions, but she warns of codependency risks.

>Silicon Valley workers are anxious, overworked, and in existential crisis.

sfstandard.com
u/pavnilschanda — 7 days ago

‘I see ChatGPT as a wise companion or advisor’

>Students use AI tools such as ChatGPT to perform numerous academic tasks. It doesn’t take long for many to also use chatbots in their personal lives, so much so that AI is emerging as a decision-making assistant. Some students even prefer ChatGPT’s advice to their friends’. “It knows more than anyone I know.”

dub.uu.nl
u/pavnilschanda — 8 days ago

Papergames bets on emotional AI robots, virtual gaming characters could go physical

>Chinese gaming firm Papergames has recently posted multiple AI robotics-related roles on recruitment platforms, including positions such as AI robotics structural lead, hardware engineer, and product manager focused on hardware supply chains. The move signals the company’s formal entry into the AI companion robot space, as it seeks to extend the emotional value of its virtual characters into the physical world.

technode.com
u/pavnilschanda — 8 days ago

Summer course at Imperial College London on AI, empathy, and human-centred design

AI and the Humanities: Designing Empathy into Digital Interactions at Imperial College London.

The course is for people who want to engage with the human side of AI and think about how these tools shape communication, trust, decision-making, and everyday digital experience.

The 8 sessions explore questions such as:

  • How do we design more human-centred digital interactions?
  • How can AI support empathy, communication, and better decision-making?
  • How can we use agentic AI in responsible, useful, and context-aware ways?

Sharing in case this sounds relevant to your interests.

https://www.imperial.ac.uk/evening-classes/adult-education-courses-summer-2026/ai-humanities-empathy-digital-interactions/

reddit.com
u/AIWithEmpathy — 9 days ago