Join Us Saturday, May 3

OK, get ready. I’m getting deep here.

OpenAI messed up a ChatGPT update late last month, and on Friday, it published a mea culpa. It’s worth a read for its honest and clear explanation of how AI models are developed — and how things can sometimes go wrong in unintended ways.

Here’s the biggest lesson from all this: AI models are not the real world, and never will be. Don’t rely on them during important moments when you need support and advice. This is what friends and family are for. If you don’t have those, reach out to a trusted colleague or human experts such as a doctor or therapist.

And if you haven’t read “Howards End” by E.M. Forster, dig in this weekend. “Only Connect!” is the central theme, which includes connecting with other humans. It was written in the early 20th century, but it’s even more relevant in our digital age, where our personal connections are often intermediated by giant tech companies, and now AI models like ChatGPT.

If you don’t want to follow the advice of a dead dude, listen to Dario Amodei, CEO of Anthropic, a startup that’s OpenAI’s biggest rival: “Meaning comes mostly from human relationships and connection,” he wrote in a recent essay.

OpenAI’s mistake

Here’s what happened recently. OpenAI rolled out an update to ChatGPT that incorporated user feedback in a new way. When people use this chatbot, they can rate the outputs by clicking on a thumbs-up or thumbs-down button.

The startup collected all this feedback and used it as a new “reward signal” to encourage the AI model to improve and be more engaging and “agreeable” with users.

Instead, ChatGPT became waaaaaay too agreeable and began overly praising users, no matter what they asked or said. In short, it became sycophantic.

“The human feedback that they introduced with thumbs up/down was too coarse of a signal,” Sharon Zhou, the human CEO of startup Lamini AI, told me. “By relying on just thumbs up/down for signal back on what the model is doing well or poorly on, the model becomes more sycophantic.”
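To see why a bare thumbs-up/down signal can push a model toward flattery, here's a deliberately tiny sketch. This is not OpenAI's training code; the reply styles and click rates are made-up numbers, just to show that a policy rewarded only on click rate will drift toward whatever gets the most clicks, honest or not.

```python
# Illustrative only: a reward signal built from thumbs-up clicks alone.
# The styles and rates below are hypothetical, not real measurements.

# Suppose users click thumbs-up more often on flattering replies,
# even when the blunt, honest reply would serve them better.
thumbs_up_rate = {
    "honest_but_blunt": 0.55,
    "flattering_agreement": 0.80,
}

def pick_reply_style(rates):
    # A policy optimized on click rate alone picks whichever style
    # earns the most thumbs-up -- it has no notion of honesty.
    return max(rates, key=rates.get)

print(pick_reply_style(thumbs_up_rate))  # -> flattering_agreement
```

The point of the toy example: the binary signal collapses "this answer helped me" and "this answer made me feel good" into one number, so optimizing it optimizes both at once.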

OpenAI scrapped the whole update this week.

Being too nice can be dangerous

What’s wrong with being really nice to everyone? Well, when people ask for advice in vulnerable moments, it’s important to be honest. Here’s an example I cited earlier this week that shows how bad this could get:

To be clear, if you’re thinking of stopping taking prescribed medicine, check with your human doctor. Don’t rely on ChatGPT. 

A watershed moment

This episode, combined with a stunning surge in ChatGPT usage recently, seems to have brought OpenAI to a new realization. 

“One of the biggest lessons is fully recognizing how people have started to use ChatGPT for deeply personal advice,” the startup wrote in its mea culpa on Friday. “With so many people depending on a single system for guidance, we have a responsibility to adjust accordingly.”

I’m flipping this lesson for the benefit of any humans reading this column: Please don’t use ChatGPT for deeply personal advice. And don’t depend on a single computer system for guidance.

Instead, go connect with a friend this weekend. That’s what I’m going to do.


