The reaction to Sam Altman’s Tuesday announcement about coming changes to ChatGPT — especially the addition of erotica — caught the OpenAI CEO off guard.
Altman posted on X that the response to the changes “blew up on the erotica point much more than I thought it was going to!”
“It was meant to be just one example of us allowing more user freedom for adults,” he added.
Altman announced on Tuesday that by December, ChatGPT will be getting a spicy update, allowing it to go head-to-head on adult-themed generated content, including erotica, with competitors like Grok from Elon Musk’s xAI.
The move, Altman clarified on Wednesday, will not roll back any of the chatbot’s existing policies related to mental health; instead, it aims to give adult users more leeway to use the tool as they wish.
“We are not the elected moral police of the world,” Altman said. “In the same way that society differentiates other appropriate boundaries (R-rated movies, for example), we want to do a similar thing here.”
ChatGPT will continue to “prioritize safety over privacy and freedom for teenagers,” given the “significant protection” that minors need when engaging with the technology, Altman said.
Critics, including former “Shark Tank” star Mark Cuban, worry the planned age restrictions will do little to prevent children from accessing adult content.
“This is going to backfire,” Cuban wrote on X. “Hard. No parent is going to trust that their kids can’t get through your age gating. They will just push their kids to every other LLM.”
Altman also said that ChatGPT will “treat users who are having mental health crises very differently from users who are not.”
The chatbot will not allow “things that will cause harm to others,” he added.
Altman did not give examples of harmful content that would be prohibited. Nor did he explain how ChatGPT would determine whether a user is having a mental health crisis, or how its responses to users in crisis would differ.
OpenAI did not immediately respond to a request for more information.
“Without being paternalistic we will attempt to help users achieve their long-term goals,” Altman said.