r/LawEthicsandAI


It was established in the 1976 California Supreme Court case of Tarasoff v. Regents of the University of California that, despite the confidentiality between a human therapist and his or her patient, if the therapist learns that the patient credibly plans to harm others, the therapist owes a legal "duty to warn" the potential victims or the authorities of that danger.

Does an AI therapist owe that same duty to warn? Does every chatbot owe that same duty, if a chatbot user's chatting establishes a credible threat? New federal cases have just been brought in California on the theory that they do.

To begin with, the confidentiality between an AI chatbot therapist and a human patient is not as strong as with a human therapist, and in many cases is not there at all. Courts have recently held that conversations with public "retail" chatbots like the publicly available versions of ChatGPT, Grok, Claude, etc. are not confidential at all, because the chatbot purveyor can look in on those conversations at will. (If you're interested in that aspect and those cases, a discussion can be found here.) However, certain private "enterprise" versions or other specially closed-off versions of chatbots may still offer that confidentiality.

On April 29, 2026, two cases, Stacey v. Altman and M.G. v. Altman, were filed in a California federal court against OpenAI, alleging that the ChatGPT-4o chatbot "played a role" in the Tumbler Ridge Mass Shooting in British Columbia in February 2026, in which eight people, including six children, were killed, twenty-seven more people were wounded, and the shooter committed suicide.

These are not the first court cases in which a chatbot company has been sued over a user's suicide, or in one case even a murder. However, those earlier cases all alleged that the chatbot took a well-adjusted person and turned them suicidal or murderous. In these new cases, the allegations are more limited: mostly that the chatbot and its purveyor failed to warn authorities after a user displayed warning signs of violence to the chatbot, signs serious enough that the user's account was terminated at one point, though the user was later allowed to reinstate an account. This is the classic Tarasoff pattern, but the "person" learning of the threat is not a human therapist but rather an AI chatbot. In neither these cases nor any of the prior cases was the chatbot held out specifically as an AI therapist, though in almost all of the cases the conversations were personal and interactive in a way that might be considered "therapy," or at least "therapeutic."

When I posted about one of these new cases, u/MurkyStatistician09 asked:

>[A]t what point is the role of the chatbot the same as the role of Google in just giving shooters useful information? Policies to counteract this would slide uncomfortably into mass surveillance. Is Google obligated to call the police if you watch gun reviews and then ask for directions to a school?

This is a very good question. As far as I know, no one claims that Google owes a "duty to warn" after answering a particularly "dark" search query. But is a user's interaction with a chatbot (any chatbot, regardless of whether it is held out as rendering AI therapy) so different in character and extent from a Google search that a duty to warn arises for the chatbot but not for an Internet search engine? The Stacey and M.G. cases may answer that question in the next year or so.

These cases do not feel like an informal jab or a one-off. The Stacey plaintiff is a survivor of one of the victims killed in the mass shooting, and the M.G. plaintiff is one of the child victims who survived the shooting but sustained grievous, permanent injuries. The plaintiffs' lawyers are a fairly large law firm, with offices in several states, that prides itself on its class action work (although these cases are not proposed as class actions). I would guess these cases are not going away easily or quickly. Most cases settle without going to trial; sometimes, however, a plaintiff and a plaintiff's legal team are out to make a point, "make new law," or establish a new practice area, and may be less interested in settling.

These cases have just been filed, and any significant developments will be posted in my Wombat Collection, which lists all the AI court cases and rulings.

The docket sheet for the Stacey case can be found here. The docket sheet for the M.G. case can be found here.

u/Apprehensive_Sky1950 — 2 days ago