The AEC wants to stop AI and misinformation. But it’s up against a problem that is deep and dark

From the moment you open your social media feed, you’re stepping into a digital battleground where not all political messages are what they seem.

The upcoming federal election will see an influx of deepfakes, doctored images, and tailored narratives that blur the line between fact and fiction.

Last week, the Australian Electoral Commission (AEC) relaunched its Stop and Consider campaign. The campaign urges voters to pause and reflect, particularly regarding information about how to vote. But its message applies to all forms of misinformation.

[Image: ‘Stop and Consider’ factsheet. Australian Electoral Commission]

AEC Commissioner Jeff Pope warns:

A federal election must be held in the next few months, so now is the perfect time to encourage all Australians to have a healthy degree of scepticism when it comes to what they see, hear or read.

The simple directives outlined in this campaign are designed to slow the spread of misleading information in a digital age where algorithms boost engagement at speed.

So how effective is it likely to be in helping voters sift the real from the fake? While the campaign benefits from the AEC’s credibility and its accessible message, it also faces significant hurdles.

Digital deception in action

In 2024, AI made a notable impact on international political campaigns.

In the US, the Federal Communications Commission fined a political consultant $6 million for orchestrating fake robocalls that featured an AI-generated deepfake of President Joe Biden’s voice.

During India’s 2024 election, Meta (which owns Facebook) approved AI-manipulated ads spreading disinformation and hate, exacerbating divisive narratives and failing to regulate harmful content.

Meanwhile, the Australian Labor Party deployed an AI-generated video of opposition leader Peter Dutton as part of its online efforts.

Additionally, the Liberal Party has again engaged duo Topham Guerin, who are known for their use of AI and controversial political tactics.

Political leaders are increasingly turning to platforms like TikTok to attract votes. But TikTok’s design encourages endless scrolling, which can lead users to miss subtle inaccuracies in the content they consume.

Adding to these concerns is a recent scam in which doctored images and fabricated celebrity headlines were circulated, creating the illusion of legitimacy and defrauding many Australians of their money.

These incidents are a stark reminder of how quickly digital manipulation can mislead, whether in commercial scams or political messaging.

[Image: Sophie Monk was one of the celebrities featured in a recent online scam. Daily Mail]

But are we taking it seriously?

South Korea has taken a decisive stance against AI-generated deepfakes in political campaigns by banning them outright. Penalties include up to seven years in prison or fines of 50 million won (A$55,400). This measure forms part of a broader legal framework designed to enforce transparency, accountability, and ethical AI use.

In Australia, teal independents are calling for stricter truth in political advertising laws. The proposed laws aim to impose civil penalties for misleading political ads, including disinformation and hate speech.

However, combating misinformation created by anonymous or unknown parties, such as AI-generated deepfakes, remains a challenge that may require further regulatory measures and technological solutions.

All of this is unfolding at a time when the approach to fact-checking is itself in flux. In January, Meta made headlines by scrapping its third-party fact-checking program in the US. This was done in favour of a “community notes” system. The change was championed by CEO Mark Zuckerberg as a way to reduce censorship and protect free expression.

However, critics warn that without independent oversight, misinformation could spread more easily, potentially leading to a surge in hate speech and harmful rhetoric. These shifts in digital policy only add to the challenge of ensuring that voters receive reliable information.

So, will the AEC’s campaign have any effect?

Amid these challenges, the “Stop and Consider” campaign arrives at a critical moment. Yet despite scholars’ repeated calls to embed digital literacy in school curriculums and community programs, these recommendations often go unheard.

The campaign is a positive step, offering guidance in an era of rapid digital manipulation. Its simple message, to pause and verify political content, can help foster a more discerning electorate.

However, given the volume of misinformation and sophisticated targeting techniques, the campaign alone is unlikely to be a silver bullet. Political campaigns are growing ever more sophisticated. With the introduction of anonymous deepfakes, voters, educators, regulators, and platforms must work together to ensure the truth isn’t lost in digital noise.

A robust foundation in digital literacy is vital, not only for this campaign to work, but also to help society distinguish credible sources from deceptive content. We must empower future voters to navigate the complexities of our digital world and engage more fully in democracy.

Globally, diverse strategies provide valuable insights.

While Australia’s “Stop and Consider” campaign takes a reflective approach, Sweden’s “Bli inte lurad” initiative is refreshingly direct. It warns citizens: “Don’t be fooled.”

By delivering clear, actionable tips to spot scams and misleading content, the Swedish model leverages its strong tradition of public education and consumer protection.

This no-nonsense strategy reinforces digital literacy efforts. It also highlights that safeguarding the public from digital manipulation requires both proactive education and robust regulatory measures.

It may be time for Australian regulators to act decisively to protect the integrity of democracy.