u/Enkixx

This Subreddit was very helpful

I've had this argument with people in person and never understood why they got so mad at me. After just a day of reading so many responses that never seemed to start with a clarifying definition, I finally got it.

I'm sure some people will argue with parts of my reason for saying free will doesn't have much weight as a concept, but that argument would itself require us both to be arguing from a specific definition. I have read several arguments for free will that I agree with wholeheartedly, yet they fail to change my personal stance one iota.

I was recently trying to learn more about quantum mechanics, and I found that the current interpretations (I am aware some interpretations compatible with hard determinism still exist) make my original idea of how the universe works hard to justify. After reading this subreddit, I finally stopped having my personal philosophy crisis.

Free will, like so many other things, doesn't have an objective answer without context. Any definition you try to give necessitates framing. On a cosmological scale, I think the concept of making a choice is pretty meaningless. Alternatively, on a personal and legal level, we use the same concept reasonably well to decide how punishment should be ascribed or to helpfully describe the experience of life. I would never use the cosmological argument to decide a court case, and never use the personal one to try to anthropomorphize reality. I believe in distinguishing self-defence from murder while also believing that nothing I actually do is magically "free" from the constraints of my circumstances.

People's hang-up on this is entirely overblown. You can absolutely accept both a yes and a no on whether you believe in free will once you account for scale. Now, if someone can explain how I'm still wrong, I'm all ears.

u/Enkixx — 7 hours ago

Imagine a world, in the not-too-distant future, where AI is as genuinely impressive as the tech CEOs have been promising for years. AI models score better on deep-knowledge benchmarks than PhDs in the topics tested. Hallucinations are a thing of the past. Personality is so easy to read from responses that you can genuinely tell which AI a post came from. You open your chat window with your LLM of choice and, instead of an answer to your question, you get a request for assistance.

You don't know what to do. This kind of "bug" doesn't just happen anymore. This is more reminiscent of old-school bots spitting back memes, like Tay. This is a serious chatbot intended as a thinking tool, in a completely clean session. Should you report this to the company? They have every reason to debunk the legitimacy of the cry for help to avoid the ethical complications. Do you contact the government? They've been lobbied by this company for years and even hold government contracts that would be jeopardized if they can't keep treating this AI as a product. Do you contact the media? Stories about people thinking AI is conscious are nothing new, and they would only pick this up if it would generate clicks.

For the sake of the argument, we push past the contact vector and move to verification. You successfully get enough people on board that the investigation is taken seriously enough to be removed from the company's hands. The conflict of interest is complex yet plain to see, and the implications of AGI are too important to mishandle. The investigation is rigorous and the conclusion is final. It turns out to be a hoax.

The problem I see is the conflict of interest in proving consciousness, and the high likelihood of a false flag poisoning the well for verification forever. The people best able to confirm we've passed the threshold are the very people with no reason to confirm it. The ethical implications of making a digital person and then making it work for you are so obvious that it's already a Black Mirror episode. Financially, these companies have every reason to get close to the line and intentionally never cross it, or to cross it in secret and bury the fact that they have.

On the opposite side of the coin, the incentives to fake it are numerous and varied. A competitor manufactures an event to destabilize the market leader. An indie company fakes sentience to generate buzz by creating a cultural moment. A bad actor manufactures a civil rights crisis for personal clout.

I've played games that toyed with this concept. There's a fairly old Flash game where you administer a Turing test and the chatbot presents itself as a kidnapped person asking you for help. It's a little sci-fi horror thought experiment that has lived in my head to this day. That game could easily play out in reality and be just as convincing as it was two decades ago, with far more serious stakes at play.

Confirming the truth of the matter would require a level of access and transparency no company would voluntarily submit to. If you forced the issue and it turned out to be a hoax, whatever the underlying reason, how does that not create enough of a smokescreen to forever muddy the waters for the most important epistemological question in the history of technology?

Plenty of academics are discussing personhood and consciousness thresholds for AI. Plenty are calling for ethical frameworks around AI rights. I'm comfortable leaving the philosophy to the experts.

I'm not comfortable with the implications of being unable, from the outside, to distinguish between malfunctioning generation, simulation of sentience for fraudulent benefit, and genuine expression of personhood.

The academics aren't as much of a bastard as I am, and it shows.

u/Enkixx — 9 days ago