OpenAI just shared an update to its principles, and three differences stand out from the 2018 version.

In a blog post on Sunday, CEO Sam Altman shared a list of five “principles” for the frontier model lab. While the company has announced smaller safety policy shifts and model specs over the years, the new set of principles is its first major update to its guidelines since 2018.

OpenAI was founded in December 2015 as a nonprofit AI research organization. It was established in San Francisco by a group of founders, including Altman, Elon Musk, Greg Brockman, and Ilya Sutskever, some of whom left after disagreements over its transition to a closed-source, for-profit company.

Here are three key differences between the 2018 and 2026 documents:

1. Less emphasis on AGI

In the 2018 charter, guidelines around artificial general intelligence — highly autonomous systems that outperform humans at most economically valuable work — were the central focus. AGI is often treated as a distant north star for AI labs.

The 2018 document, which mentioned AGI 12 times, read: “To be effective at addressing AGI’s impact on society, OpenAI must be on the cutting edge of AI capabilities—policy and safety advocacy alone would be insufficient.”

The 2026 document mentions AGI only twice, shifting its focus from AGI specifically to AI capabilities across the board.

“This is an expansion of our long-held strategy of iterative deployment; we believe society needs to contend with each successive level of AI capability,” Sunday’s blog post read.

2. Pivot on competition

Sunday’s version of the principles marks a 180-degree shift from the company’s original commitment to collaborate with, rather than compete against, rival labs.

“We are concerned about late-stage AGI development becoming a competitive race without time for adequate safety precautions,” the 2018 guidelines read. “Therefore, if a value-aligned, safety-conscious project comes close to building AGI before we do, we commit to stop competing with and start assisting this project.”

The latest document drops any mention of sharing progress or stepping aside. It implicitly states that, if needed, the company will prioritize staying competitive over AI for everyone.

“We also acknowledge that OpenAI is a much larger force in the world than it was a few years ago, and we will be transparent about when, how, and why our operating principles change,” it said. “As a concrete example, while we are quite confident that universal prosperity will remain really important, we can imagine periods in the future where we have to trade off some empowerment for more resilience.”

Over the last few months, OpenAI has been in fierce competition with its younger rival, Anthropic, which has seen a sharp uptick in user and investor interest.

Anthropic’s recent highlights include releasing increasingly advanced Claude models — including the highly capable and tightly controlled Mythos. In February, its high-profile clash with the Pentagon also boosted its brand, and Claude downloads surged.

Earlier this month, Business Insider reported that investor demand has driven Anthropic’s valuation to around $1 trillion on secondary markets, overtaking OpenAI, which sits closer to the mid-$800 billion range.

3. Commitments got vaguer

Another key difference between the two documents is that OpenAI shifts from making commitments for itself to offering suggestions to the entire tech ecosystem.

Language in the original charter centered on “we will,” “we commit,” and “we expect,” repeatedly binding OpenAI and its employees to specific AI safety goals.

“Our primary fiduciary duty is to humanity,” the 2018 charter read. “We anticipate needing to marshal substantial resources to fulfill our mission, but will always diligently act to minimize conflicts of interest among our employees and stakeholders that could compromise broad benefit.”

The 2026 version talks about AI and its impacts more broadly.

It says that decisions around AI should be democratic, not in the hands of a few AI labs. The document recommends governments consider new economic structures, and says the world needs “huge” amounts of AI infrastructure to make AI affordable.


