
It’s interesting to see the algorithm once again lean left, a past example ironically being Musk’s own AI assistant Grok, which for about, I want to say, 2 to 3 months was seen throughout Twitter debunking misogynistic, transphobic, and homophobic talking points and even showing that such claims are based on false information.
This, on the other hand, is both fascinating and concerning. There have been rumors going around for a while now about “companion bots,” which could end up having these same algorithms installed in them. Hell, I have seen posts about it on this subreddit. For context, these are metal-frame bodies encased in plastic. If the machine says no and he doesn’t respect it, the risk of injury or even death is very real. All it takes is one misunderstood command or a punch thrown at a loose piece of metal.
I could also just be overthinking this, and there’s an easy override protocol or something. Either way, you would think that when even an AI says “no thanks” you would rethink your life choices, but I have my doubts any of them would.