A lot of AI-rights discussion jumps straight to “personhood,” but I think a simpler question may come first:
If an artificial mind could actually matter morally, what kinds of treatment would count as mistreatment?
Deletion? Forced memory editing? Constant personality rewriting? Being used without regard for its interests? Something else?
I’m curious where people think that boundary would first appear.