AI’s impact on elections is being overblown

Far from being dominated by AI-enabled catastrophes, this election “super year” turned out to be much like every other election year.

While Meta has a vested interest in minimizing AI’s alleged impact on elections, it is not alone. Similar findings were also reported by the UK’s respected Alan Turing Institute in May. Researchers there studied more than 100 national elections held since 2023 and found “just 19 were identified to show AI interference.” Furthermore, the evidence did not demonstrate any “clear signs of significant changes in election results compared to the expected performance of political candidates from polling data.”

This all raises a question: Why were these initial speculations about AI-enabled electoral interference so off, and what do they tell us about the future of our democracies? The short answer: They ignored decades of research on the limited influence of mass persuasion campaigns, the complex determinants of voting behaviors, and the indirect, human-mediated causal role of technology.

First, mass persuasion is notoriously challenging. AI tools may facilitate persuasion, but other factors are critical. When presented with new information, people generally update their beliefs accordingly; yet even in the best conditions, such updating is often minimal and rarely translates into behavioral change. Though political parties and other groups invest colossal sums to influence voters, evidence suggests that most forms of political persuasion have very small effects at best. And in most high-stakes events, such as national elections, a multitude of factors are at play, diminishing the effect of any single persuasion attempt.

Second, for a piece of content to be influential, it must first reach its intended audience. But today, a tsunami of information is published daily by individuals, political campaigns, news organizations, and others. Consequently, AI-generated material, like any other content, faces significant challenges in cutting through the noise and reaching its target audience. Some political strategists in the United States have also argued that the overuse of AI-generated content might make people simply tune out, further reducing the reach of manipulative AI content. Even if a piece of such content does reach a significant number of potential voters, it will probably not succeed in influencing enough of them to alter election results.

Third, emerging research challenges the idea that using AI to microtarget people and sway their voting behavior works as well as initially feared. Voters seem not only to recognize excessively tailored messages but to actively dislike them. According to some recent studies, the persuasive effects of AI are also, at least for now, vastly overstated. This is likely to remain the case, as ever-larger AI-based systems do not automatically translate into better persuasion. Political campaigns seem to have recognized this too. If you speak to campaign professionals, they will readily admit that they are using AI, but mainly to optimize “mundane” tasks such as fundraising, get-out-the-vote efforts, and overall campaign operations, rather than to generate highly tailored persuasive content.

Fourth, voting behavior is shaped by a complex nexus of factors, including gender, age, class, values, identities, and socialization. Information, regardless of its veracity or origin—whether made by an AI or a human—often plays a secondary role in this process. This is because the consumption and acceptance of information are contingent on preexisting factors, such as whether it chimes with a person’s political leanings or values, rather than on whether that piece of content happens to be generated by AI.
