Why Foreign AI Disinformation Is the Scapegoat for Homegrown Political Failure

The panic over Iranian AI-generated disinformation is the ultimate political security blanket. It’s a convenient, high-tech phantom that allows the American political establishment to ignore the fact that the house is already on fire because of the people living inside it. Donald Trump’s accusations against Tehran aren't just a security alert; they are a masterclass in shifting the blame from systemic domestic polarization to a digital "Other" that is far less effective than the headlines suggest.

We are obsessed with the "how" of the message—Deepfakes! LLM-generated tweets!—while completely ignoring the "why" of the audience. If an AI-generated bot from a server in Mashhad can destabilize American democracy, it isn't because the AI is a tactical genius. It’s because the American electorate is already so fractured and cynical that it will believe anything that confirms its existing biases. We are blaming the match for a forest fire we spent forty years soaking in gasoline.

The Myth of the Omnipotent Persian Bot

The prevailing narrative suggests that Iranian state actors are sitting in dark rooms, "unleashing" (to use a word the lazy pundits love) waves of sophisticated AI that can bypass human reason. This is a fantasy.

In reality, most foreign influence operations are amateurish. They suffer from the "uncanny valley" of cultural nuance. An LLM can mimic English grammar perfectly, but it cannot mimic the hyperspecific, localized grievances of a voter in Erie, Pennsylvania. It doesn't understand the subtle linguistic cues of American class resentment or the specific "inside baseball" of a primary race.

When intelligence reports cite "AI-driven interference," they are often referring to low-level automation: mass-producing generic comments or generating mediocre profile pictures for bot accounts. These aren't mind-control rays. They are digital junk mail. If your democracy is susceptible to digital junk mail, the problem isn't the mailman. It's the structural integrity of the building.

The Data Gap: Perception vs. Reality

Let’s look at the math. In the 2016 and 2020 cycles, the total spend of foreign "interference" via social ads was a rounding error compared to the billions spent by PACs, campaigns, and domestic special interest groups.

  • Domestic Spending: $14 billion+ in the 2020 cycle.
  • Foreign Interference Spending: Estimated in the low millions.

To suggest that a few thousand AI-generated images of a candidate in a fictional scandal can outweigh a multi-billion-dollar domestic propaganda machine is statistically illiterate. Yet politicians lean into this narrative because it serves a dual purpose: it paints them as victims of a foreign power, and it justifies increased surveillance and censorship budgets.
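The scale gap above is easy to make concrete. A minimal back-of-the-envelope sketch, using the article's $14 billion domestic figure and a hypothetical $5 million stand-in for "estimated in the low millions" (the exact foreign figure is not given):

```python
# Rough scale comparison: domestic vs. foreign election-related spending.
# The $14B domestic figure comes from the article; the $5M foreign figure
# is a hypothetical stand-in for "estimated in the low millions".
domestic_spend = 14_000_000_000  # 2020 cycle: campaigns, PACs, special interests
foreign_spend = 5_000_000        # hypothetical: "low millions"

share = foreign_spend / domestic_spend
print(f"Foreign spend is {share:.4%} of domestic spend")
```

Even if the foreign figure were off by an order of magnitude in either direction, it remains a rounding error next to the domestic total.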

If we want to talk about "disinformation," let’s talk about the 24-hour news cycle. Let’s talk about the algorithmic amplification of outrage by Silicon Valley platforms that are based right here in the U.S. These systems are infinitely more "pivotal" (another word for the bin) than anything Iran is cooking up. The algorithm doesn't care if a post is written by a human in Tehran or a teenager in Macedonia or a staffer in D.C. It only cares if the post keeps you scrolling.

The Sovereignty of the Mind

The "AI Disinformation" panic treats the American citizen as a passive, mindless vessel—a "tabula rasa" that is programmed by whatever it sees on a screen. This is a fundamentally elitist view of the world. It assumes that "the masses" are too stupid to discern truth from fiction, and therefore need a digital nanny to protect them from "foreign influence."

The reality? Most people don't believe things because they saw them on a bot's Twitter feed. They believe them because they want to believe them. Confirmation bias is a more powerful technology than any neural network ever devised.

Consider a thought experiment: Imagine a world where every foreign bot is magically deleted overnight. Would American political discourse become civil? Would the polarization vanish? Would the distrust in institutions evaporate? Of course not. The tension is baked into the geography, the economy, and the history. The AI is just a mirror.

Precision vs. Volume: Why GenAI is a Weak Weapon

The fear-mongers argue that Generative AI allows for "industrial-scale" disinformation. This is true, but volume is not the same as influence.

In marketing, we know that increasing the volume of ads without increasing the relevance leads to "ad blindness." The same applies to political propaganda. When the internet is flooded with AI-generated sludge, users don't become more radicalized; they become more skeptical of everything. We are entering an era of "post-truth" not because the lies are so good, but because the truth is so hard to find in the noise.

When everything is potentially fake, the only thing people trust is their "tribe." This pushes voters deeper into their silos, but it’s a process driven by a survival instinct, not by an Iranian algorithm.

Why the "Expert" Consensus is Laziness

If you read the reports from the usual think tanks, they all follow the same script:

  1. Identify a foreign adversary.
  2. Mention "Generative AI" and "Deepfakes."
  3. Demand more funding for "fact-checking" and "content moderation."
  4. Warn of an "existential threat" to democracy.

This is a self-sustaining industry. These experts aren't incentivized to tell you that the threat is minimal. They are incentivized to keep the threat level at "Code Red" so the grants keep flowing. They are "demystifying" (burn that word) a process that is actually quite simple: people like to argue, and the internet makes it easy.

The Real Threat: The "Truth" Arbiters

The danger isn't that Iran will use AI to lie to us. The danger is that we will use the fear of Iran’s AI to justify the creation of "Ministries of Truth."

When Trump or any other politician rails against foreign AI disinformation, the proposed solution is always more control over the digital town square. We see calls for AI-detection watermarks (which are easily stripped), mandatory ID for social media (which kills anonymity for dissidents), and aggressive de-platforming.

These "solutions" do more damage to a free society than a million Persian bots ever could. We are effectively saying: "To save democracy from AI lies, we must destroy the freedom of speech that allows people to be wrong."

Stop Fixing the Bots; Fix the Reality

If you want to neutralize the threat of foreign AI disinformation, you don't do it by building better filters. You do it by building a better country.

Disinformation thrives in the cracks of a crumbling society. It grows in the vacuum left by the death of local news, the decline of the middle class, and the erosion of the social contract. When people feel that their lives are getting worse and their leaders don't care, they become fertile ground for "alternative" narratives.

Instead of obsessing over whether a meme was generated by a GPT-4 instance in Tehran, we should be asking why that meme resonates with a 45-year-old father in the Rust Belt.

  • The Problem: Distrust in the electoral process.

  • The Fake Solution: Banning foreign AI bots.

  • The Real Solution: Transparent, verifiable, and consistent voting standards that leave no room for doubt.

  • The Problem: Viral medical misinformation.

  • The Fake Solution: Algorithmic censorship of "foreign" accounts.

  • The Real Solution: Rebuilding the credibility of public health institutions by admitting when they were wrong in the past.

The Insider's Truth

I’ve seen the "war rooms" where this stuff is tracked. It’s a lot of people staring at dashboards, flagging accounts with eight followers, and calling it a "victory for democracy." It’s theater. It’s a way for the administrative state to look busy while the foundation of the country continues to crack.

The "Foreign AI" narrative is a gift to the ruling class. It’s an external enemy they can point to whenever things get messy at home. It’s the 21st-century version of "The Red Scare," updated with more interesting jargon.

Don't buy the hype. Don't let them convince you that an LLM is the reason you can't talk to your uncle at Thanksgiving. The AI is a tool, and right now, the most effective way it’s being used is as a distraction from the uncomfortable truth:

We are doing this to ourselves.

Turn off the screen. Look at the person across the street. That’s where the "disinformation" ends—not in a data center, and certainly not because a politician told you he’s protecting you from the Ayatollah’s laptop.

Kenji Flores

Kenji Flores has built a reputation for clear, engaging writing that transforms complex subjects into stories readers can connect with and understand.