
OpenAI wants to pay someone over half a million dollars to mitigate the downsides of AI.

If that seems like a lot of money, consider those potential downsides: job loss, misinformation, abuse by malicious actors, environmental destruction, and the erosion of human agency, to name a few.

CEO Sam Altman described the job as “stressful” in an X post on Saturday. “You’ll jump into the deep end pretty much immediately,” Altman wrote.

Altman said the “head of preparedness” position is “a critical role at an important time.”

“Models are improving quickly and are now capable of many great things, but they are also starting to present some real challenges. The potential impact of models on mental health was something we saw a preview of in 2025; we are just now seeing models get so good at computer security they are beginning to find critical vulnerabilities,” he wrote.

OpenAI’s ChatGPT has helped popularize AI chatbots among consumers, many of whom use the technology to research topics, draft emails, plan trips, or perform other simple tasks.

Some users also talk to the bots as an alternative to therapy, a practice that in some cases has exacerbated mental health issues by encouraging delusions and other concerning behavior.

OpenAI said in October it was working with mental health professionals to improve how ChatGPT interacts with users who exhibit concerning behavior, including psychosis or self-harm.

OpenAI’s core mission is to develop artificial intelligence in a way that benefits all of humanity, and safety protocols were a central part of its operations from the outset. As the company began releasing products and pressure to turn a profit grew, however, some former staffers say it began to prioritize profit over safety.

Jan Leike, the former leader of the now-dissolved safety team at OpenAI, said in a May 2024 post on X announcing his resignation that the company had lost sight of its mission to ensure the technology is deployed safely.

“Building smarter-than-human machines is an inherently dangerous endeavor. OpenAI is shouldering an enormous responsibility on behalf of all of humanity,” he wrote. “But over the past years, safety culture and processes have taken a backseat to shiny products.”

Less than a week later, another staffer announced their resignation on X, also citing safety concerns. One former staffer, Daniel Kokotajlo, said in a May 2024 blog post that he resigned because he was “losing confidence that it would behave responsibly around the time of AGI.”

Kokotajlo later told Fortune that OpenAI initially had about 30 people researching safety issues related to AGI, a still-theoretical version of AI that reasons as well as humans, but a series of departures reduced that head count by almost half.

The company’s former head of preparedness, Aleksander Madry, assumed a new role in July 2024. The position is part of OpenAI’s Safety Systems team, which develops safeguards, frameworks, and evaluations for the company’s models. The job pays $555,000 a year plus equity.

“You will be the directly responsible leader for building and coordinating capability evaluations, threat models, and mitigations that form a coherent, rigorous, and operationally scalable safety pipeline,” the job listing says.


