- OpenAI published its first AGI principles framework since the 2018 Charter.
- Altman acknowledges OpenAI is now “a much larger force in the world” and may have to trade some user empowerment for resilience.
- The five principles arrive amid Pentagon criticism and a live trial over OpenAI’s nonprofit conversion.
OpenAI published a five-principle framework for artificial general intelligence on Sunday, the company’s most prominent public commitment since its 2018 Charter. The document, signed by CEO Sam Altman and titled “Our Principles,” pledges to resist the consolidation of AI power “in the hands of the few” and places democratization above every other priority. The principles cover democratization, empowerment, universal prosperity, resilience, and adaptability — in that order.
The timing is not accidental. OpenAI has spent months on the defensive over its Pentagon contract, which prompted its robotics chief to resign in March, citing a lack of deliberation on guardrails. California passed the first state-level frontier AI safety law last year. And as jury selection begins in Oakland for the Musk v. Altman trial over OpenAI’s nonprofit-to-for-profit conversion, the company badly needs a document that frames its intentions in language the public and regulators can parse.
“We will resist the potential of this technology to consolidate power in the hands of the few,” Altman wrote in the post. “Key decisions about AI” should be made via “democratic processes and with egalitarian principles, and not just made by AI labs.” It is the most explicit concession to outside governance that OpenAI has put in writing.
What the Five Principles Actually Commit To
Each principle gets a paragraph. Democratization commits to keeping AI decisions subject to democratic processes rather than lab-only judgment. Empowerment promises users “very broad latitude” while reserving the right to prevent “catastrophic harm” and to minimize “non-catastrophic harm.” Universal prosperity calls for governments to “consider new economic models” and for OpenAI to keep building massive compute infrastructure. Resilience pledges Foundation resources for risks like bioweapons and cyber defense, with an explicit call for society-wide approaches: “No AI lab can ensure a good future alone.” Adaptability is the catch-all, acknowledging that positions will change as the technology evolves.
The most striking passage is under adaptability. Altman writes that OpenAI is “a much larger force in the world than it was a few years ago” and acknowledges scenarios in which the company “may have to trade off some empowerment for more resilience.” That is an unusual admission for a public framework document — essentially conceding that OpenAI’s own size has become a governance problem. As Implicator AI noted, the post revisits OpenAI’s 2019 decision to delay the GPT-2 release as a historical example of iterative deployment, a strategy the company now treats as its core safety mechanism.
The principles do not formally retire or replace the 2018 Charter, which remains posted on OpenAI’s site. The Charter set out commitments such as broadly distributed benefits, long-term safety, and cooperation on alignment. The new document is narrower in scope — it describes how OpenAI intends to behave, not what it owes the public. There is no enforcement mechanism, no independent oversight body, and no concrete threshold for when empowerment gets traded for resilience.
Regulatory Pressure From Both Sides
The framework arrives at a moment when OpenAI faces pressure from every direction. California’s frontier AI safety law, passed in 2025, was the first state-level attempt to impose liability on AI labs for catastrophic outcomes. At the federal level, the Trump administration has pushed for AI dominance rather than restraint. OpenAI just restructured its Microsoft partnership to end exclusivity while locking in a $250 billion Azure commitment. The company is simultaneously trying to appear responsible to regulators and indispensable to the Pentagon.
For anyone who has followed OpenAI’s trajectory from nonprofit research lab to $300 billion commercial entity, the principles read as a defensive document — one designed to give regulators and the public a reference point before formal legislation lands. Whether “democratic processes” means meaningful external oversight or a branding exercise depends entirely on what OpenAI does when those processes produce an answer it does not like. The 2018 Charter contained a clause about merging with a value-aligned organization if the original mission failed. That clause was never tested. These new principles will be.
The principles post is available on OpenAI’s website.
