u/Connect_Advantage_42

OpenAI was founded on a promise to benefit all of humanity and work openly. Eight years later, their most powerful models are completely closed off—no public weights, no disclosed training data, no independent auditing. They invoke that original mission constantly in fundraising, in Congress, everywhere. But if you can't verify it, does the mission actually mean anything?

Here's the thing: independent researchers can't audit these models for bias or safety failures. Educators and nonprofits in lower-income areas can't access the systems that now shape hiring, healthcare, and education. And companies like Meta and Mistral have shown that open models accelerate research; OpenAI's closure is holding back the entire field.

I started a petition asking OpenAI to commit to one concrete step: release flagship model weights 18 months after a new version ships. By then the model is already outdated, so the competitive cost is tiny, but the benefit to research and accountability is real.

Companies should live up to the standards they publicly claim to hold, and OpenAI has leaned on that "open" framing for nearly a decade. Does anyone else think it's time to make it mean something? If this resonates with you, consider signing and sharing.

u/Connect_Advantage_42 — 14 days ago