u/Apprehensive_Sky1950

I want to thank a politician! (Medicare Part D drug coverage)

I know this is kind of rare, but ...

Who passed the legislation a few years ago killing the Medicare Part D drug coverage "doughnut hole" and capping the out-of-pocket drug outlays at an annual $2,100 "catastrophic care" level? I have just been reviewing my Medicare EOB and I would like to thank those politicians!

reddit.com
u/Apprehensive_Sky1950 — 5 days ago

Current state of U.S. copyrightability of works produced with (not by) AI (and new court case!)

Here is a thumbnail sketch of the current state of U.S. copyrightability of creative works produced with AI.

Prologue: Works created solely by AI, where the copyright is requested solely in the name of the AI model itself, are not copyrightable. Thaler v. Perlmutter, 130 F.4th 1039 (D.C. Cir. 2025). Done deal, case closed.

Current Issue: Are works created by a human with AI assistance or AI processing (whether the human merely sets the AI in motion, merely queries the AI, or pursues some other, higher level of human involvement) eligible for copyright protection?

1. The Thaler case. Contrary to a common misreading, the Thaler case above did not rule that humans who use AI cannot obtain a copyright; it explicitly declined to address that issue. In that case, the human who interacted with the AI model first tried to obtain a copyright solely in the name of the AI model. When the court refused, the plaintiff attempted to change his request and instead have the copyright awarded to him as the human who engaged the AI model. The Thaler appeals court ruled that this change in claim came too late, and so it explicitly declined to consider the new claim and issue.

2. The U.S. Copyright Office. The U.S. Copyright Office has adopted and enforces a policy of refusing to grant copyright registrations to works or those portions of works where AI processing preponderates over human activity and creativity. This agency position as a practical matter does control who does and does not obtain a U.S. copyright registration, but it does not have the force of law like a court ruling does.

3. The Allen Case. Unlike the Thaler case, there is a federal case that is actually working on the question of whether a human who interacts with an AI model to produce a creative work can obtain copyright protection for that work. The case is Allen v. Perlmutter, filed on September 26, 2024 in the District of Colorado, Case No. 1:24-cv-02665. This case involves a visual work (a picture) that the Midjourney AI model produced based on the human's extensive and iterative querying. This case is an appeal from the Copyright Office's refusal to grant a copyright registration on that work.

Most recently in this case, the plaintiff artist (last August) and the defendant Copyright Office (this January) each asked the court to rule in its favor and declare its legal position the correct one. This will be an important ruling, and the court has not yet rendered a decision.

The docket sheet for the Allen case can be found here.

4. The Suryast case. New case! On May 8, 2026, a new federal case was filed on this question. The new case is Suryast U.S. Enterprises, LLC v. Perlmutter, Case No. 2:26-cv-04999 in the Central District of California. Like Allen, this case is an appeal from the Copyright Office's refusal to grant a copyright registration on an AI-involved work. Here, the human artist took his own realistic landscape photograph of a sunset and then used the RAGHAV (Responsive Artificially Generated High-Art Visualizer) artificial intelligence painting application to “edit” or mix that photograph with the style of van Gogh’s “Starry Night” painting.

This case has just been filed and so of course nothing has happened with it. This case is interesting in that it arguably involves a higher degree of human involvement in the creative process than in the Allen case. Because the Allen case is so much farther along, it seems likely there will be a ruling announced in the Allen case that this case will then have to deal with.

Note: The artist hails from India; "Suryast" means "sunset" in Hindi, and "Raghav" is an Indian personal name.

The docket sheet for the Suryast case can be found here.

5. Note on U.S. federal court levels and rulings. Both the Allen case and the Suryast case are taking place in federal district courts, the lowest rung of the U.S. federal court system. Their rulings will be pioneering and important, but legal rules within U.S. law are generally not considered widely binding until they are announced by a federal appeals court, as the Thaler ruling was. Once the district courts rule, one or both of the Allen case and the Suryast case will almost certainly be appealed, and then some durable, significant rulings will be announced at the appeals level.

There are thirteen Courts of Appeals in the U.S. system, each one heavily influenced by but independent of all the others. The Allen appeal and the Suryast appeal would each be heard by a different appeals court, and both of them are different from the appeals court that heard the Thaler appeal.

The U.S. Supreme Court rarely gets involved, but this issue might be so important as to get it involved here; that would seem most likely to happen after both the Allen case and the Suryast case obtained appellate rulings, especially if these rulings (and the Thaler ruling) conflicted with each other in some way.

~~~~~~~~~~~

If you're hungry for more, please visit my Wombat Collection on Substack that lists and briefly describes all the AI court cases and rulings (currently 500 of them).

u/Apprehensive_Sky1950 — 5 days ago

(I tried posting this in some scifi subs but they removed it, so I'm trying again here.)

Do you suppose that in the Star Trek: TOS evil parallel universe, Donald Trump is a brilliant, kind, generous man, beloved of his wife, who spends his time looking out for the little guy and trying to make the world a better place, always carrying around a copy of the U.S. Constitution in his back pocket?

u/Apprehensive_Sky1950 — 12 days ago
▲ 14 · r/LawEthicsandAI (+5 crossposts)

It was established in the 1976 California case of Tarasoff v. Regents of the University of California that, despite the confidentiality between a human therapist and his or her patient, if the therapist learns that the patient credibly plans to do harm to others, the therapist owes a legal "duty to warn" the potential victims or the authorities of that danger.

Does an AI therapist owe that same duty to warn? Does every chatbot owe that same duty, if a chatbot user's chatting establishes a credible threat? A new federal case has just been brought in California on the theory that they do.

To begin with, the confidentiality existing between an AI chatbot therapist and a human patient is not as strong as with a human therapist, and in many cases is not there at all. Court cases have recently held that conversations with public "retail" chatbots like the publicly available versions of ChatGPT, Grok, Claude, etc. are not confidential at all, because the chatbot purveyor can look in on those conversations at will. (If you're interested in that aspect and those cases, a discussion of that can be found here.) However, certain private "enterprise" versions or other specially closed-off versions of chatbots may still offer that confidentiality.

On April 29, 2026, two cases, Stacey v. Altman and M.G. v. Altman, were filed in a California federal court against OpenAI, alleging the chatbot ChatGPT-4o “played a role” in the Tumbler Ridge Mass Shooting in British Columbia in February 2026, in which eight people including six children were killed, twenty-seven more people were wounded, and the shooter committed suicide.

These are not the first court cases in which a chatbot company has been sued over a user's suicide, or in one case even a murder. However, those cases all alleged that the chatbot took a well-adjusted person and turned them suicidal or murderous. In these new cases, the allegations are more limited: mostly that the chatbot and its purveyor failed to warn authorities after a user displayed violence warning signs to the chatbot, serious enough that the user’s account was terminated at one point, before the user was later allowed to reinstate an account. This is the classic Tarasoff pattern, but the "person" learning of the threat is not a human therapist but rather an AI chatbot. In neither these cases nor any of the prior cases was the chatbot held out specifically as an AI therapist, though in almost all of the cases the conversations were personal and interactive in a way that might be considered "therapy," or at least "therapeutic."

When I posted about one of these new cases, u/MurkyStatistician09 asked:

>[A]t what point is the role of the chatbot the same as the role of Google in just giving shooters useful information? Policies to counteract this would slide uncomfortably into mass surveillance. Is Google obligated to call the police if you watch gun reviews and then ask for directions to a school?

This is a very good question. As far as I know, no one claims that Google owes a "duty to warn" after answering a particularly "dark" search query. But is a user's interaction with a chatbot (any chatbot, regardless of whether it is held out as rendering AI therapy) so different in character and extent from a Google search that a duty to warn arises for the chatbot that is not shared by an Internet search engine? The Stacey and M.G. cases may answer that question in the next year or so.

These cases do not feel like an informal jab or a one-off. The Stacey plaintiff is a survivor of one of the victims killed in the mass shooting, and the M.G. plaintiff is one of the child victims who survived but sustained grievous, permanent injuries. The plaintiffs' lawyers are a fairly large law firm with offices in several states that prides itself on its class action work (although these cases are not proposed as class actions). I would guess these cases are not going away easily or quickly. Most cases settle without going to trial; sometimes, however, a plaintiff and the plaintiff's legal team are out to make a point, "make new law," or establish a new practice area, and may be less interested in settling.

These cases have just been filed, and any significant developments will be posted in my Wombat Collection listing all the AI court cases and rulings.

The docket sheet for the Stacey case can be found here. The docket sheet for the M.G. case can be found here.

u/Apprehensive_Sky1950 — 3 days ago
▲ 7 · r/AIsafety (+1 crosspost)

Two new cases, actually. Today, April 29, 2026, two new cases, Stacey v. Altman and M.G. v. Altman, were filed in a California federal court against OpenAI, alleging the chatbot ChatGPT-4o “played a role” in the Tumbler Ridge Mass Shooting in British Columbia in February 2026, in which eight people including six children were killed, twenty-seven more people were wounded, and the shooter committed suicide. The Stacey plaintiff is a survivor of one of the victims killed in the mass shooting, and the M.G. plaintiff is one of the child victims of the shooting who survived but sustained grievous, permanent injuries.

This is by far the largest disaster involving a chatbot to be alleged in court; the largest previous allegations were one murder plus one suicide in one case, and an unexecuted plan for a mass murder in another.

However, the chatbot's alleged role here appears smaller than in previous cases. Unlike those other cases, where the chatbot was alleged to have taken a well-adjusted person and turned them suicidal or murderous, here the chatbot and OpenAI are faulted to a lesser degree: essentially, for failing to warn authorities after a user displayed violence warning signs to the chatbot, serious enough that the user’s account was terminated at one point, before the user was later allowed to reinstate an account.

The plaintiffs in these cases have not closed off the possibility of alleging a larger role for the chatbot, however. At one point in the complaint the plaintiff alleges the chatbot to have “facilitated or exacerbated” the disaster and at another point cites the chatbot’s encouraging nature and calls it “an encouraging co-conspirator.”

The docket sheet for the Stacey case can be found here. The docket sheet for the M.G. case can be found here.

Please see the Wombat Collection for a listing of all the AI court cases and rulings.

u/Apprehensive_Sky1950 — 15 days ago
▲ 2 · r/AIDangers (+1 crosspost)

Today, April 29, 2026, a new case, Stacey, et al. v. Altman, et al. was filed in a California federal court against OpenAI, alleging the chatbot ChatGPT-4o “played a role” in the Tumbler Ridge Mass Shooting in British Columbia in February 2026, in which eight people including six children were killed, twenty-seven more people were wounded, and the shooter committed suicide.

This is by far the largest disaster involving a chatbot to be alleged in court; the largest previous allegations were one murder plus one suicide in one case, and an unexecuted plan for a mass murder in another.

However, the chatbot's alleged role here appears smaller than in previous cases. Unlike those other cases, where the chatbot was alleged to have taken a well-adjusted person and turned them suicidal or murderous, here the chatbot and OpenAI are faulted to a lesser degree: essentially, for failing to warn authorities after a user displayed violence warning signs to the chatbot, serious enough that the user’s account was terminated at one point, before the user was later allowed to reinstate an account.

The plaintiff in this case has not closed off the possibility of alleging a larger role for the chatbot, however. At one point in the complaint the plaintiff alleges the chatbot to have “facilitated or exacerbated” the disaster and at another point cites the chatbot’s encouraging nature and calls it “an encouraging co-conspirator.”

The docket sheet for the case can be found here.

Please see the Wombat Collection for a listing of all the AI court cases and rulings.

u/Apprehensive_Sky1950 — 15 days ago
▲ 271 · r/This_is_fascism (+2 crossposts)

In light of the renewed indictment against James Comey, I wish to make the following statement:

8647

I am happy to share my contact information with members of law enforcement.

Everyone else, feel free to comment as you see fit.

Thank you for your time and attention to this matter.

u/Apprehensive_Sky1950 — 16 days ago