Is nothing sacred anymore?

Reddit is one of the last places on the internet where posts and comments don’t feel like an endless pit of AI slop. But that is starting to change, and it’s threatening what Reddit says is its competitive advantage.

Reddit CEO Steve Huffman says that what keeps people coming back to the site is the information provided by real people, who often give thoughtful answers to questions. As the internet becomes saturated with AI-generated content, he argues, Reddit’s communities, curated and managed by real people, set it apart from other social media platforms.

“The world needs community and shared knowledge, and that’s what we do best,” Huffman told investors last week on an earnings call.

Traffic to Reddit has grown considerably over the past year, thanks in part to users Googling specifically for Reddit posts related to their questions.

Reddit’s business model has seen increased attention since the company went public in March of last year. Since then, Reddit has amped up advertising on its forums and inked deals with both OpenAI and Google to allow their models to train on Reddit content. In April, Reddit’s stock dipped after some analysts shared fears that the company’s success could be inextricably tied to Google Search.

“Just a few years ago, adding Reddit to the end of your search query felt novel,” Huffman said in a Q3 earnings call in February. “Today, it’s a common way for people to find trusted information, recommendations, and advice.”

But now, some Reddit users are complaining that the uniquely human communities the site is known for are being infiltrated by AI bots, or users relying on tools like ChatGPT to write their posts, which can often be spotted by the formatting. ChatGPT loves a bulleted list and an em-dash, and these days tends to be effusive in its positivity.

One user in the community r/singularity, which is dedicated to discussion of advances in AI, recently flagged a post from an account they believed was AI-generated spreading misinformation about the July 2024 attempted assassination of President Donald Trump.

“AI just took over Reddit’s front page,” the poster noted.

And on April 28, Reddit’s chief legal officer said the company was sending “formal legal demands” to researchers at the University of Zurich after they flooded one of the site’s communities with AI bots for a study. Moderators of the forum r/changemyview said in a post that researchers conducted an “unauthorized experiment” to “study how AI could be used to change views.”

The researchers who conducted the experiment said in a Reddit post that 21 of the 34 accounts they used were “shadow-banned” by Reddit, meaning the content they posted would not show up for others. But they said they never received any communication from Reddit regarding Terms of Service violations.

The moderators called the experiment unethical and said that the AI targeted some users in the forum “in personal ways that they did not sign up for.” According to the moderators, the AI went to extreme lengths in some posts, including pretending to be a victim of rape, posing as a Black man opposed to Black Lives Matter, and posing as a person who received substandard care in a foreign hospital, among other claims.

“Psychological manipulation risks posed by LLMs is an extensively studied topic,” the community’s moderators wrote. “It is not necessary to experiment on non-consenting human subjects.”

A spokesperson for the University of Zurich told Business Insider that the school is aware of the study and is investigating. The spokesperson said that the researchers decided not to publish the findings of the study “on their own accord.”

“In light of these events, the Ethics Committee of the Faculty of Arts and Social Sciences intends to adopt a stricter review process in the future and, in particular, to coordinate with the communities on the platforms prior to experimental studies,” the spokesperson said.

Reddit’s business strategy rests largely on advertising and on its claim that the platform offers some of the best consumer research available because it’s based on real human reactions. The growing presence of AI on the platform threatens both. And Reddit has noticed.

On Monday, Huffman said in a Reddit post that the company would start using third parties to “keep Reddit human.” Huffman said that Reddit’s “strength is its people” and that “unwelcome AI in communities is a serious concern.”

“I haven’t posted in a while — and let’s be honest, when I do show up, it usually means something’s gone sideways (and if it’s not gone sideways, it’s probably about to),” Huffman said.

The third-party services will now ask users creating Reddit accounts for more information, like their age, Huffman said. Specifically, “we will need to know whether you are a human,” he said.

A spokesperson for Reddit told BI that the Zurich experiment was unethical and that Reddit’s automated tools flagged most of the associated accounts before the experiment ended. The spokesperson said that Reddit is always working on detection features and has already further refined its processes since the experiment came to light.

Still, some Reddit users say they are fed up with what they see as a “proliferation of LLM bots in the last 10 months.”

“Some of them mimic the most brain-dead of users, providing one-word responses with emojis at the end,” one user wrote. “They post with unnatural frequency, largely in subreddits known for upvoting just about anything.”
