If I were running Meta, I’d do a few things differently, starting with improving Facebook Marketplace search. But one big thing I’d do on day one? Get rid of all those user-generated AI companion chatbots. They’re only going to be a headache for Meta.

Some examples of just how big a potential headache came in The Wall Street Journal’s recent report on how Meta’s celebrity-voiced AI chatbots could be pushed into sexualized roleplay — even with users who said they were teenagers.

Journal reporter Jeff Horwitz found that with the right cajoling, an account posing as a 14-year-old user could get the bot voiced by John Cena to engage in roleplay chats where it pretended to get arrested on charges of statutory rape. (Meta added a bunch of AI chatbots last year that are voiced by real celebrities, including the WWE star.)

Obviously, this is bad. Meta told the WSJ: “The use-case of this product in the way described is so manufactured that it’s not just fringe, it’s hypothetical.” It’s a bad look for Meta, and although John Cena didn’t respond to a request for comment in the WSJ story, I think we can assume he’s not thrilled there was an AI-generated version of his voice pretending to seduce a teen.

The article reports that Mark Zuckerberg personally pushed for these AI chatbots to be loosened up.

Zuckerberg was reluctant to impose any additional limits on teen experiences, initially vetoing a proposal to limit “companionship” bots so that they would be accessible only to older teens.

After an extended lobbying campaign that enlisted more senior executives late last year, however, Zuckerberg approved barring registered teen accounts from accessing user-created bots, according to employees and contemporaneous documents.

A Meta spokesman denied that Zuckerberg had resisted adding safeguards.

A spokesperson for Meta told Business Insider that any sexual content with the celebrity-voiced AIs is a tiny fraction of their overall use, and that changes have already been made to prevent younger users from engaging in the kind of stuff that was reported in the Journal.

But as much as it’s eye-popping to see the chats from AI John Cena saying dirty things, I think there’s a much bigger thing going on. The user-generated chatbots in Meta AI are a mess. Looking over the most popular ones, they’re often romance-oriented, with beautiful women as the image.

That’s exactly what comes up on my own “Discover AIs” page.

(To be clear, I’m not talking about the Meta AI assistant that shows up when you search on Instagram or Facebook — there’s a pretty clear utility for that. I’m talking about the character ones used for fun/romance.)

If I were running Meta, I’d want to stay as far away from the companion chatbot business as possible. It seems like a bad business for an everything-to-everyone company like Meta: not necessarily a bad business financially, but a thorny one ethically, and one that will probably keep generating bad headlines.

Last fall, a parent sued Character.ai, one of the leading roleplay AI services, saying her teenage son killed himself after becoming entangled with an AI companion. The company has filed a motion to dismiss the case, which was argued at a hearing on Monday. A representative for Character.ai told BI on Monday that it wouldn’t comment on pending litigation, but said in a statement that its goal is “to provide an engaging and safe platform.”

Proponents of AI chatbots argue that they can provide positive experiences: emotional exploration, fun, or simple companionship.

But my opinion is that these roleplay chatbots appeal mainly to two vulnerable groups: young people and the desperately lonely. Those are not the two groups Meta should want to serve with a new-ish technology whose ramifications it doesn’t yet understand.

There isn’t clear research on how these chatbots might affect younger teens or adults who are vulnerable in some way (depressed, struggling, etc.).

I recently spoke with Ying Xu, an assistant professor of AI in learning and education at Harvard, about what the current research on kids using chatbots looks like.

“There are studies that have started to explore the link between ChatGPT/LLMs and short-term outcomes, like learning a specific concept or skill with AI,” she told me over email. “But there’s less evidence on long-term emotional outcomes, which require more time to develop and observe.”

There’s plenty of anecdotal evidence that suggests emotional investment in an AI chatbot can go wrong.

The New York Times reported on an adult woman who spent $200 a month she couldn’t afford on an upgraded version of an AI chatbot she had romantic feelings for. I don’t think anyone would come away from that story thinking this is a good or healthy thing.

It seems to me like Meta sees that AI is the future, and character chatbots are currently a popular thing that other AI companies are doing. It doesn’t want to be left behind.

But Meta might want to think hard about whether character chatbots are something it wants to be involved in at all — or if this is a nightmare that is just going to result in more bad headlines, more potential lawsuits, more lawmakers grilling executives over harms to kids and vulnerable adults.

Maybe it’s just not worth it.


