The year 2024 was dubbed the "super election year," with more than 4 billion people eligible to vote across dozens of countries. It was also the first global election cycle in which generative AI was widely available. According to research from the Alan Turing Institute and Harvard's Ash Center, more than 80% of countries with elections experienced observable instances of AI-generated content targeting their electoral processes (Ash Center, Harvard, 2024; Turing Institute, 2025).
The scale of AI-generated content was staggering: content creation accounted for 90% of all observed AI use in elections, including synthetic audio, manipulated video, AI-generated text, and fabricated images. In countries as diverse as Bangladesh, France, South Africa, and Taiwan, AI was used to create content designed to mislead voters, defame candidates, or manufacture false endorsements (WEF, 2025).
Researchers nevertheless noted a surprising finding: the "apocalypse that wasn't." Despite widespread fears, less than 1% of all fact-checked misinformation during the 2024 election cycles was AI-generated, according to Meta. The most common use of AI was not sophisticated deepfakes but memes and content shared openly by politicians, with its artificial origins undisguised. Some threat actors did borrow features from verified news sources, such as mimicking CNN formats alongside AI-generated images, to lend credibility to fabricated stories (NPR, 2024).
The deeper threat may be psychological. A US survey found that four in five respondents expressed worry about AI's role in election misinformation. This widespread anxiety creates what researchers call the "liar's dividend": the ability of bad actors to dismiss authentic evidence as AI-generated. When people can't trust what they see and hear, the entire information ecosystem is undermined, regardless of whether specific deepfakes are widespread (Brookings, 2024).
Looking ahead, the Turing Institute warns that while AI-enabled influence operations haven't yet demonstrably changed election outcomes, the technology is improving rapidly. The cost of producing convincing synthetic media is falling, while detection tools lag behind. Without robust media literacy education and technical countermeasures, the threat will only grow (Turing Institute, 2025).
Key Sources
- Harvard Ash Center (2024). The Apocalypse That Wasn't: AI in 2024's Elections.
- World Economic Forum (2025). Deepfakes Are Here to Stay.
- Alan Turing Institute (2025). From Deepfake Scams to Poisoned Chatbots: AI and Election Security.
- Brookings Institution (2024). When it comes to understanding AI's impact on elections.