A new essay based on a paper in Science - 22 authors, including Gary Marcus - warns of the next existential threat to democracy: AI bot swarms. The argument is that coordinated AI agents can manufacture “synthetic consensus” - making fringe views appear mainstream, fooling the public about what their neighbors actually believe.

The proposed solutions include proof-of-human credentialing, mandatory data access for researchers, and an “AI Influence Observatory” to detect when public discourse is being manipulated.

I don’t buy the framing. Not because AI swarms aren’t real, or because foreign influence operations aren’t concerning. But because the entire argument rests on an assumption the essay never proves: that online discourse once reflected something like the real distribution of public belief.

It didn’t. And recognizing that changes everything about how to think about this problem.

What the Paper Gets Right

Before I tear into the framing, let me be clear about what’s not wrong here:

  • Scale and speed matter. The cost of coordinating thousands of accounts collapsed. That’s real.
  • Attribution got harder. Distinguishing human movements from synthetic ones is genuinely difficult now.
  • Foreign influence is a distinct threat. State actors with destabilization incentives are a legitimate counter-intelligence concern.

None of that is in dispute. The question is whether “AI swarms threaten democracy by manufacturing false consensus” is the right way to understand what’s happening. I don’t think it is.

The Premise That Falls Apart

The paper’s argument goes like this:

  1. Democratic deliberation requires independent voices
  2. AI swarms can fake independence by coordinating synthetic personas
  3. This creates “false consensus” that distorts public perception
  4. Therefore, AI swarms threaten democracy

Step 3 is where the logic collapses.

“False consensus” implies there’s a “real consensus” being distorted. Where exactly is this real consensus we’re measuring against? Twitter? Reddit? Facebook comments?

The entire argument assumes social media discourse was a meaningful proxy for public opinion before AI came along. It wasn’t. Social media users were never representative of the population. Political posting is extremely self-selected - most people don’t post political opinions online at all. And the people who do post are systematically different from those who don’t.

There was never a clean signal to corrupt.

Social Media Was Never a Signal

Here’s the thing: we actually can measure public opinion reasonably well. Traditional polling, for all its problems, works within known error bounds - typically 2-4 percentage points. The 2024 election had the most accurate state-level polling in 25 years. Polling isn’t perfect, but it’s not guesswork either.
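Where do those error bounds come from? Here’s a minimal back-of-the-envelope sketch - the standard 95% margin of error for a simple random sample. Real polls use weighting and fancier variance estimates, so treat the numbers as illustrative, but typical sample sizes land squarely in that 2-4 point range:

```python
import math

def margin_of_error(n, p=0.5, z=1.96):
    """Approximate 95% margin of error for a simple random sample proportion."""
    return z * math.sqrt(p * (1 - p) / n)

# Typical national poll sample sizes.
for n in (600, 1000, 2500):
    print(f"n={n}: ±{margin_of_error(n) * 100:.1f} percentage points")
# n=600: ±4.0, n=1000: ±3.1, n=2500: ±2.0
```

The point isn’t precision. The point is that polling error is quantifiable in a way social media visibility never was.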

Social media is different. And that matters, because social media is where AI swarms operate.

The paper worries that swarms can “make fringe views look like majority opinions.” But social media never reflected majority opinion in the first place. The research on this is damning:

  • Twitter users are systematically unrepresentative: younger, more urban, more politically extreme. The center-left is the most vocal segment, meaning most users see a left-skewed version of politics.
  • A small minority of users generate the vast majority of political content.
  • Reddit skews even harder. One content analysis found 99.1% of top political posts were pro-left/anti-right.
  • UK researchers found that Twitter and Facebook users “differ substantially from the general population on many politically relevant dimensions including vote choice, turnout, age, gender, and education.”

This isn’t subtle. Social media users are younger, more educated, more urban, and more politically extreme than the general population. The people who post about politics are an even more skewed subset.

Social media is good at one thing: making ideas visible. Trending topics, viral posts, ratio’d threads - these tell you what’s getting attention. They tell you nothing about how many people actually believe it. A hashtag can trend because 50,000 accounts pushed it or because 500 accounts pushed it really hard. From the outside, you can’t tell which.
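Here’s a toy illustration of that asymmetry - hypothetical numbers, not any real platform’s trending algorithm. The publicly visible post count is identical in both scenarios; only the per-account breakdown, which outsiders never see, differs:

```python
from collections import Counter

# Two hypothetical ways a hashtag racks up 50,000 posts in an hour.
broad  = [f"user_{i}" for i in range(50_000)]        # 50,000 accounts, one post each
narrow = [f"user_{i % 500}" for i in range(50_000)]  # 500 accounts, 100 posts each

for label, authors in (("broad", broad), ("narrow", narrow)):
    counts = Counter(authors)
    print(f"{label}: {len(authors)} posts on the trend board, "
          f"{len(counts)} distinct accounts behind them")
```

Trending rank keys on the first number. Belief prevalence would depend on the second.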

Even if AI swarms can influence what topics dominate online discussion, that’s an agenda-setting problem - not evidence of false belief or fake consensus. Shaping what people talk about is different from manufacturing beliefs people don’t actually hold. The paper conflates them.

When someone claims a swarm-backed view represents “false” consensus, they’re implicitly claiming to know what the “real” social media consensus would be without the bots. But social media consensus was already disconnected from actual public opinion. You can’t corrupt a signal that was never there.

Coordination Isn’t New, Just Cheaper

The paper treats AI coordination as uniquely dangerous. But coordination is everywhere:

  • Media narratives that propagate across outlets within hours
  • Expert consensus that cascades through institutions
  • Political parties that coordinate messaging at scale
  • Advocacy groups that organize campaigns
  • Think tanks that fund and amplify specific ideas

None of this is considered illegitimate. It’s just called “politics” or “public relations” or “building a movement.”

So what makes AI coordination different?

The honest answer: it’s cheap. It’s accessible. It doesn’t require institutional backing.

The essay treats this as self-evidently bad. I’m not sure why. When coordination requires expensive infrastructure - PR firms, media relationships, institutional credentialing - it’s normal politics. When coordination becomes accessible to outsiders, suddenly it threatens democracy.

That’s not an argument about truth or falsity. It’s an argument about who gets to coordinate at scale.

The Part Where This Gets Uncomfortable

Let’s look at some recent consensus failures:

  • Hunter Biden’s laptop was real. The initial “Russian disinformation” framing was coordinated across media outlets and amplified by intelligence officials. That coordination was human, institutional, and wrong.
  • The lab leak hypothesis went from “conspiracy theory” to “plausible and worth investigating.” The initial suppression was coordinated by credentialed experts and mainstream platforms. Also human, institutional, and wrong.
  • Go back further: Iraq WMDs. The consensus was manufactured by institutions with access to the best information. It was catastrophically wrong.

None of these were bot-driven. They were elite, institutional consensus failures. And they were tolerated - even defended - because they came from accredited actors.

The uncomfortable implication: we already have systems that manufacture consensus. They’re called institutions. Sometimes they’re right. Sometimes they’re disastrously wrong. But they’re considered legitimate because of who operates them, not because of their track record.

AI swarms aren’t introducing manufactured consensus to a system that previously lacked it. They’re introducing competition in narrative production. I’m not saying that’s good - more noise in an already noisy system isn’t obviously a win. But it’s a different problem than “AI is creating false beliefs.” The beliefs may or may not be false. The narrative competition is what’s new.

What AI Actually Changes

Let me be clear about what’s actually new:

  • The cost of coordination on social media collapsed
  • Scale became accessible to outsiders
  • Narrative power on these platforms is no longer monopolized by institutions

What didn’t change:

  • Social media was never representative of public opinion
  • We still have good tools for measuring actual public opinion (polls)
  • Coordination has always existed - it’s called politics

This is a power shift, not an epistemic one. The question isn’t “how do we preserve the integrity of social media discourse?” - that integrity never existed. The question is “who gets to participate in shaping narratives at scale on these platforms?”

The paper’s proposed solutions - credentialing, observatories, researcher access - are about restoring gatekeeping. Whether that’s good or bad depends entirely on whether you trust the gatekeepers.

The Strongest Case: Foreign Influence

There is a version of this concern that I take seriously: foreign states operating outside U.S. accountability.

A Russian or Chinese influence operation has fundamentally different incentives from domestic political coordination. Its operators aren’t trying to persuade Americans of a position they genuinely hold. They’re trying to destabilize, to reduce trust in institutions, to create chaos that serves their geopolitical interests.

That’s a legitimate counter-intelligence concern. It justifies monitoring, attribution, and countermeasures.

But it’s a narrow case. It doesn’t justify broad policing of coordination. And the paper’s proposals don’t stay narrow - they target coordination generically, regardless of source or intent.

The Question Nobody Answers

Here’s what I keep coming back to: how do you distinguish illegitimate synthetic coordination from legitimate mass belief when both look identical online?

If a million accounts push a narrative, and it turns out they’re coordinated bots, that’s “synthetic consensus.” If a million accounts push a narrative, and they’re real humans who happen to agree, that’s democracy.

Online, these look the same. You can detect coordination patterns, sure. But coordination doesn’t prove falsity. A million real humans can coordinate too - that’s what movements are.

The obvious counter: “But swarms involve identity forgery, not just coordination.” True. Fake accounts pretending to be real people is different from a PAC running coordinated ads. But identity forgery complicates attribution - it doesn’t tell you whether the underlying belief is real or widely held. You still can’t infer from “these accounts are fake” to “this view is fringe.” Those are separate questions.

The paper’s solutions assume that detecting coordination is sufficient for identifying illegitimacy. It isn’t. Process tells you nothing about truth. Network analysis tells you nothing about whether the underlying belief is widely held.
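To make that concrete, here’s a hypothetical burst-detection heuristic of my own construction - not the paper’s method - run on two simulated timelines: a scripted swarm posting on a schedule, and an organic pile-on after a breaking story. Both come out looking “coordinated”:

```python
import random

def burst_score(timestamps, window=60):
    """Crude coordination heuristic: fraction of posts landing within
    `window` seconds of another post. Higher = more tightly clustered."""
    ts = sorted(timestamps)
    near = sum(
        1 for i, t in enumerate(ts)
        if (i > 0 and t - ts[i - 1] <= window)
        or (i < len(ts) - 1 and ts[i + 1] - t <= window)
    )
    return near / len(ts)

random.seed(0)

# Scenario A: a scripted swarm posting on a fixed schedule over one hour.
swarm = [i * 7.2 for i in range(500)]

# Scenario B: real users reacting to a breaking story -- organic,
# but naturally bunched in the minutes after it drops.
grassroots = [random.expovariate(1 / 600) for _ in range(500)]

# Both scores come out high (near 1.0).
print(f"swarm burst score:      {burst_score(swarm):.2f}")
print(f"grassroots burst score: {burst_score(grassroots):.2f}")
```

The heuristic correctly detects clustering in both cases - and tells you nothing about which one reflects genuine belief.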

The Risk of the Solutions

The proposed interventions:

  • “Proof-of-human” credentialing
  • AI Influence Observatories (independent, non-governmental, but empowered to publish “verified reports” of bot activity)
  • Mandatory researcher access to platform data

These sound reasonable in the abstract. In practice, they create infrastructure for gatekeeping.

Who decides what counts as “statistically unlikely coordination”? Who defines the thresholds? Who certifies the independent observatories?

The paper says the observatories should be “distributed” and “independent” - not a Ministry of Truth. Great. But distributed systems still need coordination. Independent groups still need funding. Someone decides which reports get amplified.

And here’s the predictable outcome: these systems will catch foreign operations sometimes. They’ll also flag domestic movements that happen to coordinate effectively. “Coordination” becomes a proxy for “speech we’re suspicious of.”

The first people harmed won’t be Russian bot farms. They’ll be domestic dissidents, grassroots movements, and anyone effective enough at organizing to trigger the detection algorithms.

There Was Never a Signal to Save

The paper’s framing assumes AI swarms are corrupting something that was previously working - that social media discourse was a meaningful input to democratic deliberation, and swarms are destroying that.

But social media never provided a reliable signal. It was always self-selected, always dominated by a vocal minority, always unrepresentative of actual population beliefs. We have good tools for measuring public opinion - they’re called polls, and they work reasonably well. Social media was never one of those tools. Using Twitter sentiment to predict elections is barely better than a coin flip (50-63% accuracy). A meta-analysis of 74 studies found social media prediction accuracy was “on average behind the established benchmarks in traditional survey research.”

AI swarms aren’t corrupting a democratic signal. They’re adding noise to a channel that was already noise.

The real debate isn’t about truth versus falsity. It’s about who gets to speak at scale on these platforms - and whether the answer should be “only those with institutional backing” or “anyone with sufficient resources.”

That’s a political question, not an epistemic one. And the paper’s technical framing hides the political stakes.

The Actual Takeaway

AI bot swarms are real. Foreign influence operations are real. The cost of coordination has collapsed. All true.

But the framing of “synthetic consensus threatening democracy” assumes social media was a meaningful democratic signal to begin with. It wasn’t. We have actual tools for measuring public opinion - polls - and they work. Social media never did. The research is clear: platforms are dominated by unrepresentative users, political content comes from a tiny vocal minority, and sentiment analysis predicts elections little better than chance.

What AI changes is who can afford to participate in the noise on these platforms.

If you’re worried about that, be honest about why. Is it because you think institutions are more likely to be right? That’s a defensible position, but acting on it by restoring their gatekeeping is a political choice, not a neutral epistemic one. Is it because foreign actors have bad incentives? That’s narrow and addressable. Is it because coordination is inherently illegitimate? Then you have a problem with democracy itself, which runs on coordination.

The danger isn’t that AI can fake a consensus on social media. It’s that AI forces us to admit social media consensus never meant what we pretended it did.