It’s Not the Algorithm. It’s the Strategist. AI, Deepfakes, and the Crisis of a Rusting Democracy

May 8, 2025 · AI

Minnesota has just launched a legal missile into the heart of one of the most urgent debates of our time: How do we protect democracy when reality itself is no longer verifiable?

The new state law prohibits the distribution of AI-generated images, audio, or video that appear real, are made without consent, and are intended to influence voters or harm a candidate—especially in the 90 days leading up to an election. Penalties include fines, imprisonment, and judicial orders to stop the spread of such content.

The backlash was immediate. X Corp, owned by Elon Musk, sued the state, arguing that the law violates the First Amendment and creates a dangerous precedent for censorship. The lawsuit claims the law is so vague that even memes, satire, and political parody could get swept up in the dragnet. Most dangerously for Musk: it sets a template other states—and even countries—could replicate.

But beyond the legal dispute, this case forces a much deeper question: AI isn’t inventing political deceit. It’s just scaling it.

Should we regulate the technology… or the ones misusing it?

We live in a time when many want to regulate what AI can or cannot do, as if it were an autonomous agent capable of manipulating public opinion by its own will. But we forget something fundamental: AI has no intention. The intent is always human.

The rationale behind this law is understandable—even admirable. But let’s zoom in on the logic: the law punishes the use of a tool based on its potential to deceive, while real-life politicians continue to manipulate, distort, and mislead daily—without consequence.

Where’s the law that penalizes emotional manipulation in debates? Where’s the fine for speeches filled with half-truths and hollow promises?

If an AI generates a fake video, the person who shares it could go to prison. But if a politician twists the truth and wins an election? They get a paycheck and a diplomatic passport.

The real threat is not artificial intelligence. It’s artificial ethics. And not the machine’s—ours.

Burning the Couch Doesn’t Fix the Infidelity

Regulating AI while ignoring unethical political strategy is like blaming a scalpel for murder—or burning your couch because your partner cheated on it.

We are obsessed with the tool, blind to the hands that wield it. And those hands belong to strategists, advisors, and candidates who push the ethical line “just a little” further every time… because it works. Because in today’s power game, winning trumps doing the right thing.

If Musk Loses, Truth Doesn’t Win. If Musk Wins, Truth Still Loses.

Here’s the paradox: no matter who wins the lawsuit, we all still lose.

If Musk loses, many will cheer, believing disinformation has been curbed. But manipulation will continue—just more analog, more emotional, and harder to trace. If Musk wins, it could open the floodgates to campaigns where truth becomes optional and lies are tactical.

But here’s the real problem: we’re already there.

No AI is needed to distort reality. Just a hungry candidate with a clever team and no moral compass.

So What Now?

Neither blanket censorship nor digital anarchy will save us. This isn’t a war between humans and artificial intelligence. It’s a collapse of political conscience.

We need to rebuild the architecture of trust with three urgent actions:

  1. Mandatory transparency in AI use for campaigns. Not a ban—just a label. If something was AI-generated, say it. Just like we disclose filters in advertising.
  2. Massive civic education in digital discernment. The best defense against manipulation isn’t law—it’s a lucid voter.
  3. Radical ethics in political strategy. Campaign advisors must stop being perception architects and return to what they should be: curators of truth.

The Real Risk Isn’t That We Vote for a Deepfake…

…it’s that we keep electing candidates who exist—yet build their rise to power on distortion, omission, and the normalization of what should be unacceptable.

At this point, the real challenge isn’t keeping AI from deceiving us. It’s daring to look at the humans who already are.

And demanding from them what no language model can fabricate: conscience.
