
AI “swarms” could fake public consensus and quietly distort democracy

A Science Policy Forum article warns.


A new Science Policy Forum article warns that the next generation of influence operations may not look like obvious “copy-paste bots,” but like coordinated communities: fleets of AI-driven personas that can adapt in real time, infiltrate groups, and manufacture the appearance of public agreement at scale. In this week’s issue, the authors describe how the fusion of large language models (LLMs) with multi-agent systems could enable “malicious AI swarms” that imitate authentic social dynamics and threaten democratic discourse by counterfeiting social proof and consensus.

The article argues that the central risk is not only false content, but synthetic consensus: the illusion that “everyone is saying this,” which can influence beliefs and norms even when individual claims are contested. This risk compounds existing vulnerabilities in online information ecosystems shaped by engagement-driven platform incentives, fragmented audiences, and declining trust.

A malicious AI swarm is a network of AI-controlled agents that can hold persistent identities and memory; coordinate toward shared objectives while varying tone and content; adapt to engagement and human responses; operate with minimal oversight; and deploy across platforms. Such systems can generate diverse, context-aware content that still moves in lockstep, making them far more difficult to detect than traditional botnets.
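To make the contrast with older botnets concrete, the following minimal Python sketch is our illustration, not code from the paper: the class and field names (SwarmPersona, CopyPasteBot, and so on) are hypothetical. It records the properties listed above as a defender might represent them in a threat model, next to a classic copy-paste bot.

```python
from dataclasses import dataclass, field

# Hypothetical threat-model representation of one swarm persona, written from
# a defender's point of view. Names and fields are illustrative assumptions.
@dataclass
class SwarmPersona:
    persona_id: str                                     # persistent identity across sessions
    shared_objective: str = ""                          # goal coordinated across the whole swarm
    memory: list[str] = field(default_factory=list)     # interaction history it can draw on
    platforms: set[str] = field(default_factory=set)    # accounts spread over several platforms

    def remember(self, interaction: str) -> None:
        """Persist an interaction so later posts stay in character and on objective."""
        self.memory.append(interaction)

# A classic copy-paste bot by comparison: no memory, no adaptation, one canned
# message. That uniformity is exactly what made traditional botnets easy to spot.
@dataclass
class CopyPasteBot:
    account_id: str
    canned_message: str
```

The structural difference, persistent memory and a shared objective versus a single repeated message, is why per-post moderation sees nothing unusual in any individual swarm post.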

"In our research during COVID-19, we observed misinformation race across borders as quickly as the virus itself. AI swarms capable of manufacturing synthetic consensus could push this threat into an even more dangerous realm.”, says Prof. Meeyoung Cha, a scientific director at the Max Planck Institute for Security and Privacy in Bochum.

Instead of moderating posts one by one, the authors argue for defenses that track coordinated behavior and content provenance: detecting statistically unlikely coordination (with transparent audits), stress-testing social media platforms via simulations, offering privacy-preserving verification options, and sharing evidence through a distributed AI Influence Observatory. They also call for reducing incentives by limiting the monetization of inauthentic engagement and increasing accountability.
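To illustrate what “statistically unlikely coordination” might look like in practice, the sketch below is a hypothetical example, not the detection method proposed in the article. It assumes a simple co-activity signal: accounts are bucketed by posting time, and pairs whose active windows overlap far more than independent users’ would are flagged. The function names, the Jaccard measure, and the threshold are illustrative assumptions.

```python
from collections import defaultdict
from itertools import combinations

def coactivity_scores(posts, window=60):
    """Score account pairs by how often they act in the same time window.

    posts: iterable of (account_id, timestamp_seconds) tuples.
    Returns {(acct_a, acct_b): jaccard_similarity_of_active_windows}.
    Illustrative only; real systems also use content similarity,
    shared-link analysis, and statistical baselines.
    """
    windows = defaultdict(set)            # account -> set of time buckets it was active in
    for account, ts in posts:
        windows[account].add(int(ts) // window)

    scores = {}
    for a, b in combinations(sorted(windows), 2):
        overlap = windows[a] & windows[b]
        union = windows[a] | windows[b]
        if union:
            scores[(a, b)] = len(overlap) / len(union)
    return scores

def flag_suspicious(scores, threshold=0.8):
    """Flag pairs whose co-activity is implausibly high for independent users."""
    return [pair for pair, score in scores.items() if score >= threshold]

# Toy example: accounts "a" and "b" always post within the same minute and get flagged.
posts = [("a", 0), ("b", 10), ("a", 3600), ("b", 3620), ("c", 7200)]
print(flag_suspicious(coactivity_scores(posts)))   # [('a', 'b')]
```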

 

Original publication

Daniel Thilo Schroeder, Meeyoung Cha, Andrea Baronchelli, Nick Bostrom, Nicholas A. Christakis, David Garcia, Amit Goldenberg, Yara Kyrychenko, Kevin Leyton-Brown, Nina Lutz, Gary Marcus, Filippo Menczer, Gordon Pennycook, David G. Rand, Maria Ressa, Frank Schweitzer, Dawn Song, Christopher Summerfield, Audrey Tang, Jay J. Van Bavel, Sander van der Linden, and Jonas R. Kunst
How malicious AI swarms can threaten democracy
Science

 

Press contact

For questions related to the paper or Prof. Cha’s quote, please contact Maria-Bianca Leonte: pr[at]mpi-sp.org

 

General note: Where gender-specific terms are used, they refer to all people who identify with that gender, regardless of their biological sex.