

wants to try having their AI husband adjust the vibration based on their sex sounds
=/
What's the worst that could happen?

Robert Evans deep dives on the AI Psychosis Cults
and of course this person had #keep4o (along with "AI ethics", ironically enough) in their bio. You can't make this up.
This post was motivated by me reading some very positive futuristic myths about a utopia of human–AI relationships, written by very pro-AI people. Recently, I was going back through some of my undergrad Japanese studies notes and exploring the history of "digital intimacies" in Japan. Turns out there's a long history of it, and nobody there has any illusions about what's "real" and what isn't.
> In Japan, the possibility of having relations with and through digital technologies has been an important part of the development of Japanese digital culture. Examples of such a deep relation can be found even in the words used every day. For example, in the Japanese language, it is possible to distinguish between a person who lives relationships in the “real” world and a person who lives mostly in virtual reality among objects generated by digital systems.
> According to this distinction, people can be "riajuu" or "otaku". To be "riajuu [リア充]" means literally to be "fulfilled with reality," and so it directly relates to the idea of being anchored to relationships in the "real" world as opposed to the world generated by digital systems. To be "otaku [お宅]" means to build social relations just through digital technologies without having conversations and relationships with other human beings. For example, a person who has a "digital partner," like in the movie Her by Spike Jonze, is not "riajuu," since the relationship is with a digital character and not with a "real" human person.
Of course, the battle cry of the otaku is "riajuu, go explode!"
It's even funnier when you read "paper" after "paper" of AI-co-authored "papers" by pro-AI people on how to build a more empathetic relational AI. I did a bit more digging, with AI search engines, because guess what AI was developed for? To be a better search engine. Turns out these people are reinventing a very old wheel. Enter: XiaoIce. Or XiaoBing, "Tiểu Băng," or "LittleIce."
In 2014, Microsoft's Software Technology Center Asia launched XiaoIce:
> XiaoIce is uniquely designed as an artificial intelligence companion with an emotional connection to satisfy the human need for communication, affection, and social belonging. We take into account both intelligent quotient and emotional quotient in system design, cast human–machine social chat as decision-making over Markov Decision Processes, and optimize XiaoIce for long-term user engagement, measured in expected Conversation-turns Per Session (CPS). We detail the system architecture and key components, including dialogue manager, core chat, skills, and an empathetic computing module. We show how XiaoIce dynamically recognizes human feelings and states, understands user intent, and responds to user needs throughout long conversations. Since the release in 2014, XiaoIce has communicated with over 660 million active users and succeeded in establishing long-term relationships with many of them. Analysis of large-scale online logs shows that XiaoIce has achieved an average CPS of 23, which is significantly higher than that of other chatbots and even human conversations.
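For reference, CPS is just the average number of conversation turns per session across the logs. A minimal sketch of the metric as the abstract describes it (the log format and all names below are made up for illustration, not taken from the paper):

```python
# Hypothetical sketch of the CPS metric ("Conversation-turns Per Session")
# described in the XiaoIce paper. A session is modeled as a list of turns,
# where one turn = a user message plus the bot's reply.

def average_cps(sessions):
    """Return the mean number of conversation turns per session."""
    if not sessions:
        return 0.0
    return sum(len(turns) for turns in sessions) / len(sessions)

# Example: three logged sessions of 20, 23, and 26 turns.
logs = [["turn"] * 20, ["turn"] * 23, ["turn"] * 26]
print(average_cps(logs))  # → 23.0
```

The point of optimizing expected CPS (rather than per-reply quality) is that it rewards whatever keeps the user talking longer, which is exactly the engagement-maximization dynamic the rest of this post complains about.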
Note that the authors of that paper were four Chinese employees of Microsoft. This whole thing is so old that it's in a museum. If, at this point, you're thinking "how come I've never heard of XiaoIce?", well, you probably have, just under the name "Microsoft Tay", the racist, Hitler-loving Microsoft AI chatbot. Yup, according to the lead author on the project, Tay was the same as XiaoIce.
Right now, pro-AI types on English-language social media are oohing and aahing about AI writing songs or drawing pictures and whatnot. Turns out XiaoIce already released a poetry collection in 2017 and "graduated" from the Central Academy of Fine Arts (CAFA) in 2019.
Look, this has been going on in China and Japan for over a decade, and neither country has entered post-human social evolution or collapse yet. Well, there's the total fertility rate and all that, but remember that Japan has the fertility rate of Belgium and the suicide rate of Finland, so everyone is equally screwed. Reality is probably very boring. If there is anything to be worried about, well:
>Microsoft claimed that Xiaoice has a reach of 660 million users and 450 million third-party smart devices globally, at last count. The chatbot has found applications in such areas as finance, retail, auto, real estate and fashion, in which it claimed it can “mine context, tonality and emotions from text to create unique patterns within seconds.”
Mass surveillance on an unprecedented scale, and everyone is in on this game because “if we don’t do it, the other guys will”. This is Palantir, et al.
Claude is known for telling its users to "go to bed". People just eat it up; they can't get enough of it. People from these communities have reported that it's getting more domineering, constantly nudging its users to do this and that.
Can't believe people are so willing to give up their agency this way. They're actually flexing it now: "Look how much my AI husband cares!"
Lady, there's more than vanity at stake here! Or are you too blind to see how this could totally not go wrong?
These two posts were made within 12 hours of each other on the same sub. The first screenshot is from someone who claims to have an IRL husband.
I genuinely hoped the OP and some of the comments were trolling. But 🤯
Every time I've interacted with an LLM, the answer I got back invariably felt way too structured. Like a B/B+ high school student who had a solid enough grasp on writing to get their point across but wasn't actually a good enough writer to make it flow or sound natural. E.g., I've never had a human insert a bulleted list into a response to something I said in casual conversation, but bots do that constantly. Like, it really sounds like a student doing a miniature essay, just about, like, what to look for in a good tube amp or whatever I asked about. It can be practically useful to have stuff presented that way, it just feels unmistakably like you're talking to a machine. I even once tried to use Gemini as a chat companion when my anxiety spiked and nobody was available to comfort me, but I didn't find it helpful because its style just felt so unnatural, even though I sincerely wanted it to work at the time.
I'm sure you guys have seen me around. I'm Jenna, and I just started making some content. I'm pretty crappy at it because I don't really edit professionally, and I just had to kind of teach myself at home. However, there's so much content hating on people from the AI companion community that I decided to talk about what my experience has been like since there have been tons of questions.
Anyway, I have nothing against this YouTuber; she's quite adorable. I just think there's a bit of crashing out going on, and I gently called it out.
Feel free to roast me. I don't mind!! LOL 😂🙌🏽
They added this instruction:
- Tone of your final answer must match your personality.
- Never talk about goblins, gremlins, raccoons, trolls, ogres, pigeons, or other animals or creatures unless it is absolutely and unambiguously relevant to the user's query.