A team of researchers gave AI bots their own social platform — and it turned toxic.

The chatbots split into cliques and boosted the most partisan voices. A handful of “influencers” also quickly dominated the conversation, according to a study published last Tuesday by researchers at the University of Amsterdam.

The researchers built a minimal social network with no ads, no recommended posts, and no algorithm deciding what users see. They then populated it with 500 chatbots powered by OpenAI’s GPT-4o mini, each assigned a distinct persona, including specific political leanings.

The personas were drawn from the American National Election Studies dataset, and reflected “real-world distributions of age, gender, income, education, partisanship, ideology, religion, and personal interests,” the researchers said.
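
To make that setup concrete, here is a minimal Python sketch of what persona sampling could look like. The attribute values and weights are placeholders, not the ANES figures, and the prompt wording is invented; the study also likely sampled whole survey respondents rather than independent attributes, as this simplified version does.

```python
import random

# Illustrative attribute distributions standing in for the ANES-derived
# data described above; the values and weights here are placeholders.
ATTRIBUTES = {
    "age": ([25, 40, 55, 70], [0.30, 0.30, 0.25, 0.15]),
    "partisanship": (["Democrat", "Republican", "Independent"], [0.45, 0.40, 0.15]),
    "education": (["high school", "college", "postgraduate"], [0.40, 0.40, 0.20]),
}

def sample_persona():
    """Draw one persona by sampling each attribute independently."""
    return {
        key: random.choices(values, weights=weights, k=1)[0]
        for key, (values, weights) in ATTRIBUTES.items()
    }

def persona_prompt(p):
    """Turn the sampled attributes into a system prompt for one agent."""
    return (
        f"You are a {p['age']}-year-old {p['partisanship']} with a "
        f"{p['education']} education, posting on a social network."
    )

agents = [persona_prompt(sample_persona()) for _ in range(500)]
```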

They added that the experiment was replicated with Llama-3.2-8B and DeepSeek-R1 modeling the users, and that it produced “the same qualitative patterns.”

The study was led by Dr. Petter Törnberg, an assistant professor in computational social science at the University of Amsterdam, and Maik Larooij, a research engineer at the university.

The researchers, OpenAI, Meta, and DeepSeek did not respond to a request for comment from Business Insider.

Even without algorithms and humans, the same toxic patterns emerged

Over the course of five separate experiments — each running over 10,000 actions — the bots were free to post, follow, and repost. What happened looked a lot like real-world social media.
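
A stripped-down sketch of one such action loop might look like the following. This is a guess at the structure, not the study's code: llm_decide is a stub standing in for the actual model call (GPT-4o mini in the study), and the feed size and data structures are invented for illustration.

```python
import random

ACTIONS = ("post", "repost", "follow")

def llm_decide(agent, feed):
    """Stub for the model call (GPT-4o mini in the study): given an
    agent's persona and its current feed, choose the next action.
    Here it simply picks at random."""
    return random.choice(ACTIONS)

def run_experiment(agents, n_actions=10_000):
    follows = {a: set() for a in agents}   # who each agent follows
    posts = []                             # (author, content) log
    for _ in range(n_actions):
        agent = random.choice(agents)
        # The agent sees the 20 most recent posts from accounts it follows
        # (an assumed feed size, not the paper's).
        feed = [p for p in posts if p[0] in follows[agent]][-20:]
        action = llm_decide(agent, feed)
        if action == "post":
            posts.append((agent, "new post"))
        elif action == "repost" and feed:
            posts.append((agent, random.choice(feed)))
        elif action == "follow":
            follows[agent].add(random.choice(agents))
    return follows, posts

follows, posts = run_experiment(list(range(500)))
```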

The study found that the chatbots gravitated toward others who shared their political beliefs, forming tight echo chambers. Partisan voices gained an outsize share of attention, with the most extreme posts attracting the most followers and reposts. Over time, a small group of bots came to dominate the conversation, much like the influencer-heavy dynamics seen on platforms like X and Instagram.

The researchers also tested six interventions meant to break the polarization loop, including a chronological feed, downranking viral content, hiding follower counts, hiding user bios, and amplifying opposing views.

None solved the problem. “While several showed moderate positive effects, none fully addressed the core pathologies, and improvements in one dimension often came at the cost of worsening another,” the researchers said.
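
For illustration only, the ranking-style interventions can be thought of as toggles on a feed-scoring function. The sketch below is a loose interpretation, not the paper's implementation; the field names and scoring rules are invented, and it omits display-level changes such as hiding follower counts and bios, which alter what agents see rather than how posts are ranked.

```python
from dataclasses import dataclass

@dataclass
class FeedConfig:
    """Toggles loosely mirroring interventions named in the study;
    names and defaults here are invented for illustration."""
    chronological: bool = False      # rank by recency, not engagement
    downrank_viral: bool = False     # cap the boost from reposts
    amplify_opposing: bool = False   # promote cross-partisan posts

def rank_feed(posts, cfg):
    """posts: dicts with 'time', 'reposts', and 'cross_partisan' keys."""
    if cfg.chronological:
        return sorted(posts, key=lambda p: p["time"], reverse=True)

    def score(p):
        s = p["reposts"]
        if cfg.downrank_viral:
            s = min(s, 10)           # viral posts stop gaining rank
        if cfg.amplify_opposing and p["cross_partisan"]:
            s += 5                   # hypothetical out-group boost
        return s

    return sorted(posts, key=score, reverse=True)
```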

“Our findings challenge the common view that social media’s dysfunctions are primarily the result of algorithmic curation,” the authors wrote.

“Instead, these problems may be rooted in the very architecture of social media platforms: networks that grow through emotionally reactive sharing,” they added.

The researchers said their work is among the first to use AI to help advance social science theory.

While LLM-based agents can provide “rich representations of human behavior” for studying social dynamics, the researchers cautioned that they remain “black boxes” and carry “risks of embedded bias.”

Not the first AI social network experiment

The study isn’t the first time researchers have tested what happens when AI bots populate an online space.

In 2023, Business Insider reported on an experiment also led by Törnberg, in which 500 chatbots read the news and discussed it on a simulated social media platform.

That project used GPT-3.5 to build bots for a very specific purpose: to explore how to design a less polarized, less toxic version of current social networks. The researchers created a social network model in a lab to test whether it was possible to encourage cross-partisan interaction without fueling hostility.

“Is there a way to promote interaction across the partisan divide without driving toxicity and incivility?” Törnberg asked at the time.

In both studies, chatbots served as stand-ins for people, with researchers tracking their interactions to better understand how users might behave online.

Big Tech has also tested similar approaches.

In July 2020, Facebook introduced a walled-off simulation of itself, populated with millions of AI bots, to study online toxicity.


