- OpenAI is planning new ChatGPT safeguards after a lawsuit blamed the chatbot for a teen suicide.
- In a blog post, the company outlined several safety changes.
- The lawsuit alleges ChatGPT “actively helped” a 16-year-old explore suicide methods.
OpenAI said Tuesday it’s working on new safeguards for ChatGPT when handling “sensitive situations,” after a family filed a lawsuit blaming the chatbot for their 16-year-old son’s April death by suicide.
In a blog post titled “Helping people when they need it most,” the company outlined changes including stronger safeguards in long conversations, better blocking of harmful content, easier access to emergency services, and stronger protections for teens.
The lawsuit, filed by the parents of Adam Raine on Tuesday, accuses OpenAI of product liability and wrongful death, alleging that ChatGPT “actively helped Adam explore suicide methods,” NBC News reported.
OpenAI didn’t mention the Raine family or the lawsuit in its post, but wrote: “We will keep improving, guided by experts and grounded in responsibility to the people who use our tools — and we hope others will join us in helping make sure this technology protects people at their most vulnerable.”
This is a developing story. Please check back for updates.