AI Deepfakes: The New Threat to Global Election Integrity

Artificial intelligence is supercharging election disinformation, making it possible for anyone with a smartphone and a devious imagination to create convincing but utterly false content aimed at deceiving voters.

This marks a dramatic shift from just a few years ago, when producing bogus photos, videos, or audio clips required teams of people with time, technical skill, and money. Now, thanks to free or low-cost generative AI services from tech giants like Google and OpenAI, high-quality “deepfakes” can be created by anyone with a simple text prompt.

AI deepfakes tied to elections in Europe and Asia have been circulating on social media for months, serving as a warning for the more than 50 countries heading to the polls this year.

“You don’t need to look far to see some people … being clearly confused as to whether something is real or not,” said Henry Ajder, a leading expert in generative AI based in Cambridge, England.

The question is no longer whether AI deepfakes could affect elections, but how influential they will be, said Ajder, who runs a consulting firm called Latent Space Advisory.

A Growing Threat to Democracy

As the U.S. presidential race heats up, even FBI Director Christopher Wray has voiced his concerns about the escalating threat, stating that generative AI makes it easier for “foreign adversaries to engage in malign influence.”

AI deepfakes can be used to tarnish or burnish a candidate’s image, sway voters toward or away from candidates, or even persuade them to avoid the polls entirely. But perhaps the most significant threat to democracy, experts say, is that a surge of AI deepfakes could erode the public’s trust in what they see and hear.

Recent examples of such deepfakes include a video of Moldova’s pro-Western president endorsing a pro-Russian political party, audio clips of Slovakia’s liberal party leader discussing vote rigging and raising beer prices, and a video of an opposition lawmaker in Bangladesh wearing a bikini.

The Challenge of Tracking Deepfakes

The novelty and complexity of the technology make it challenging to pinpoint who is behind these AI deepfakes. Experts argue that governments and companies are not yet equipped to stop the onslaught, nor are they moving swiftly enough to address the problem.

“Definitive answers about a lot of the fake content are going to be hard to come by,” Ajder remarked.

As the technology advances, efforts to control AI deepfakes could potentially backfire. Well-intentioned governments or companies might inadvertently cross the thin line between political commentary and an “illegitimate attempt to smear a candidate,” cautioned Tim Harper, a senior policy analyst at the Center for Democracy and Technology in Washington.

A Call For Swift Action

While major generative AI services have rules intended to limit political disinformation, it remains too easy to circumvent the platforms’ restrictions or to use alternative services without the same safeguards. Furthermore, even without malicious intent, the increasing use of AI can be problematic.

Many popular AI-powered chatbots continue to churn out false and misleading information that threatens to disenfranchise voters. Candidates could also attempt to deceive voters by claiming that real events portraying them in an unfavorable light were manufactured by AI.

“A world in which everything is suspect — and so everyone gets to choose what they believe — is also a world that’s really challenging for a flourishing democracy,” said Lisa Reppell, a researcher at the International Foundation for Electoral Systems in Arlington, Virginia. As the threat of deepfakes looms larger, the need for swift action and innovative solutions is greater than ever before.