
Beyond disinformation and deepfakes

Tracking the broader uses of AI during election campaigns

Bartosz Maj

5 August 2024

Reading time: 14 minutes


In the lead-up to the 2024 UK general election, a doctored video of the then Shadow Health Secretary, Wes Streeting, calling another politician a ‘silly woman’ spread on social media. At the same time, in the Brighton Pavilion constituency, ‘AI Steve’, an AI chatbot run by businessman Steve Endacott, stood as a candidate for MP. Endacott said the chatbot would enable him to receive constant feedback and policy suggestions from prospective constituents, though he would personally attend Parliament for votes.

With almost 50% of the world’s population going to the polls this year, ensuring the integrity and trustworthiness of electoral processes is more important than ever.

Journalists, politicians and civil society organisations have expressed concerns around how the use of AI – which has become both more available and impactful – could negatively affect elections. The World Economic Forum recently stated that ‘misinformation and disinformation may radically disrupt electoral processes in several economies’, a risk which is ‘magnified by the widespread adoption of generative AI’.

However, the prevailing narrative about the impact of AI on elections has so far focused primarily on how AI could enhance specific kinds of disinformation, citing a small number of well-known examples – such as the deepfake of President Biden in the USA. This framing largely ignores the ways AI is already being used for purposes other than fabricating content to discredit individual candidates. For example, in Taiwan, AI has assisted researchers investigating disinformation networks on social media, whilst in Indonesia presidential candidates set up chatbots that attempted to inform users about their policies.

To explore this topic, the Ada Lovelace Institute developed and piloted a tracking tool to examine the use of AI across a small number of elections during the first six months of 2024: the legislative elections in Bangladesh, Pakistan, Portugal and South Africa, and both the legislative and presidential elections in Indonesia and Taiwan. We chose our case studies from a range of geographical regions and focused on the most populous countries.

For each election, we used Python to send a list of search terms to Google via its Custom Search JSON API. We limited each search to a month before and after the election (except for South Africa, as we conducted our search shortly after the election) and retrieved 100 results per country, which we compiled into a spreadsheet. We manually reviewed the spreadsheet to identify relevant cases, which we defined as specific examples of AI use. This produced a list of cases for each election, which we analysed for cross-cutting themes.
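
For illustration, the sketch below shows roughly how such a pipeline can be put together. It is a minimal reconstruction, not our actual script: the API key, search-engine ID, search term and CSV fields are placeholders, and our real term lists varied by election.

```python
# Minimal sketch of the search pipeline. Placeholders throughout; not the
# script used in the project.
import csv
import requests

API_KEY = "YOUR_API_KEY"      # Google Cloud API key (placeholder)
CX = "YOUR_ENGINE_ID"         # Programmable Search Engine ID (placeholder)

def search(term, date_range, pages=10):
    """Collect up to pages * 10 results for one term, restricted to a
    'YYYYMMDD:YYYYMMDD' date range (the API caps each query at 100 results)."""
    results = []
    for start in range(1, pages * 10, 10):  # API pages through results 1, 11, 21, ...
        resp = requests.get(
            "https://www.googleapis.com/customsearch/v1",
            params={
                "key": API_KEY,
                "cx": CX,
                "q": term,
                "sort": f"date:r:{date_range}",  # restrict to the election window
                "start": start,
                "num": 10,  # maximum allowed per request
            },
            timeout=30,
        )
        resp.raise_for_status()
        results.extend(resp.json().get("items", []))
    return results

# Example: Taiwan's election on 13 January 2024, one month either side.
rows = search('"artificial intelligence" election Taiwan', "20231213:20240213")
with open("taiwan_results.csv", "w", newline="", encoding="utf-8") as f:
    writer = csv.DictWriter(f, fieldnames=["title", "link", "snippet"])
    writer.writeheader()
    for item in rows:
        writer.writerow({k: item.get(k, "") for k in writer.fieldnames})
```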

Our study is exploratory rather than exhaustive. We investigated only a limited sample of countries and used news reports as our sources, meaning that our methodology faced limitations and biases. For instance, our search was unlikely to uncover behind-the-scenes use cases and could surface only the topics that journalists deem most newsworthy (we give a more comprehensive account of the project’s limitations at the end of the blog).

This write-up offers an overview of the trends we identified. While more extensive research is needed, we hope it will support policymakers and journalists to broaden the range of examples they consider when examining the role of AI in elections.

What we found

While many of the examples of AI use we came across involved generating content to spread disinformation, we noticed both a variety of kinds of disinformation and a range of purposes beyond it.

We traced five macro-themes, each identifying a distinct aim: disinformation, countering disinformation, voter engagement, voter research and producing campaign materials.

To effectively govern AI and prevent harms, it is important that policymakers pay attention to all these different uses of AI and how each of them may affect electoral processes in specific ways.

Disinformation

Different kinds of AI-enhanced disinformation were present in most of the countries we looked at. The wide range of disinformation types to which AI can contribute matches findings from existing research by organisations like the Alan Turing Institute and Demos, both of which highlight the variety of AI uses that may qualify as disinformation. The examples we identified fit into the following categories.

Disinformation designed exclusively to undermine specific candidates

In Bangladesh, a deepfake showed the leader of the Bangladesh Nationalist Party, Tarique Rahman, saying the party should ‘“keep quiet” about Gaza to not displease the US’. In another example, this time in Indonesia, a deepfake showed presidential candidate Anies Baswedan being criticised by a political supporter. In Taiwan, a deepfake video showed a presidential candidate accusing Lai Ching-te, the eventual winner of the election, of going to the United States for a ‘job interview’ and paying people to attend his welcome party.

AI-powered disinformation discrediting specific politicians also came in the form of personal attacks without clear political content. In Bangladesh, for example, deepfakes of candidates in swimsuits were posted online, whilst in Taiwan allegedly fake pornographic videos of legislators were published on foreign websites.

Misleading content on specific politicians

Examples of this type of disinformation included fake videos of Anies Baswedan and Prabowo Subianto, Indonesian presidential candidates, and the Indonesian President, Joko Widodo, giving speeches in languages they are not fluent in.

Deepfakes of celebrities endorsing or criticising political candidates

Instances of this kind occurred in South Africa, where deepfakes showed popular figures like Donald Trump, Joe Biden and rapper Eminem either endorsing or criticising specific parties.

Attempts at voter suppression

Bangladesh-based fact-checking organisations BOOM and Dismislab identified three cases of this kind of disinformation. The first two were deepfakes, circulated on election night, announcing the withdrawal of specific candidates from the race. The third showed the Dhaka Police Chief calling for an election boycott.

A similar case occurred in Pakistan, where deepfakes depicting prominent members of the Pakistan Movement for Justice party calling for an election boycott were spread online.

Generalised disinformation

Finally, some of the cases of disinformation we identified were organised campaigns that spread multiple pieces of misleading or false content at scale with the purpose of shaping public perception.

Taiwan saw AI-based social media campaigns in which groups uploaded AI-generated disinformation content. Sometimes these videos featured an AI news-anchor avatar, while bot social media accounts displayed AI-generated profile pictures of non-existent people, both intended to add legitimacy. One of these groups was identified as backed by the Chinese government, whilst Taiwan AI Labs found that some of the groups it identified promoted narratives popular amongst media outlets affiliated with the Chinese state.

These strategies were consequential. Taiwan AI Labs, which studied disinformation during the January election, found that organised malign actors gained an advantage through the use of AI-powered tools, as these allowed them to overwhelm fact-checkers and influence public opinion on a large scale.

Countering disinformation

While AI has commonly been used to create or promote disinformation, there are also examples of it being used to identify or counter it.

In Taiwan, Gogolook’s Auntie Meiyu enables users to add an AI-assisted fact-checking bot to a group chat to help identify disinformation. According to Gogolook, by 2022 Auntie Meiyu had over 520,000 users and had been used to fact-check 1.67 million pieces of content.

Cofacts, another Taiwanese fact-checking organisation, has produced a chatbot that allows users to submit potential disinformation and have it fact-checked by volunteers. The platform uses AI to classify submitted content by topic, so that human fact-checkers can pick out the submissions matching their expertise, speeding up the process of allocating volunteers to user submissions.
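
Cofacts’ own implementation is not described in the reporting, but the routing step it performs can be sketched with an off-the-shelf zero-shot classifier (the topic labels below are invented for illustration):

```python
# Illustrative sketch: route a user submission to a topic so volunteers
# with matching expertise can pick it up. Cofacts' real pipeline differs.
from transformers import pipeline

TOPICS = ["public health", "election law", "foreign policy", "economy"]

classifier = pipeline("zero-shot-classification",
                      model="facebook/bart-large-mnli")

def route_submission(text: str) -> str:
    """Return the most likely topic for a submitted claim."""
    result = classifier(text, candidate_labels=TOPICS)
    return result["labels"][0]  # labels come back sorted by score

print(route_submission("The new ballot rules disqualify overseas voters."))
# Most likely output: 'election law'
```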

Separately, Taiwan AI Labs fine-tuned one of Meta’s LLaMA foundation models to extract information from online content, such as the names of political organisations or topics of discussion, and to identify what kind of opinion the author was expressing about them. This allowed researchers to study trends in narratives and identify the kinds of content included in mass disinformation campaigns on social media.
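
Taiwan AI Labs has not published its model or prompts, but the underlying task – structured extraction plus stance detection – can be approximated with any instruction-tuned model. A rough sketch, with the model name as an assumption:

```python
# Illustrative only: Taiwan AI Labs' fine-tuned model, training data and
# prompts are not public. This sketches the shape of the task itself.
from transformers import pipeline

generator = pipeline(
    "text-generation",
    model="meta-llama/Llama-2-7b-chat-hf",  # assumed stand-in, not the actual model
)

PROMPT = """From the social media post below, list:
1. the political organisations mentioned,
2. the topics of discussion,
3. the author's stance towards each organisation (positive/negative/neutral).

Post: {post}

Analysis:"""

def analyse(post: str) -> str:
    out = generator(PROMPT.format(post=post), max_new_tokens=200, do_sample=False)
    # The pipeline echoes the prompt, so keep only the text after 'Analysis:'.
    return out[0]["generated_text"].split("Analysis:")[-1].strip()

print(analyse("Party X's rally was fake news spread by Y-affiliated media."))
```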

Lastly, fact-checking organisation Africa Check created an AI avatar called Zanele, which fronted a number of TikTok videos scrutinising claims made by South African political parties. In this case, AI was used only to convey information – reading out a factual claim made by a party and labelling it as either correct or incorrect – but its use was intended to generate engagement around the idea of fact-checking during elections.

Voter engagement

Different kinds of AI-powered tools have been used to reach out to voters and improve their engagement with specific political parties and candidates.

For instance, platforms enabling voter engagement were popular during the Indonesian presidential elections. Prabowo Subianto, one of the candidates, launched PrabowoGibran.ai, where users could generate AI images of themselves with the candidate and share them on social media. Prabowo also released a chatbot that could answer policy-related questions. However, the chatbot was taken down after it falsely claimed that Indonesia’s state philosophy comprises seven principles instead of five. Another candidate, Anies Baswedan, launched a similar policy-related chatbot.

While media reports claimed that both chatbots were powered by OpenAI’s GPT models – using them for political campaigning would have violated the company’s usage policies – OpenAI denied the claim.

Lastly, an application programmed by Portuguese newspaper Público offered a unique example of voter engagement. Público staff used GPT-4 to extract policy proposals from party manifestos and then used RoBERTa, a foundation model developed by researchers at Facebook AI and the University of Washington, to classify each policy into one of 56 political topics. The policies were then displayed in a dating-app-style interface in which users could swipe right or left to agree or disagree. The software then calculated the extent to which a user agreed with each of the eight parties fielding candidates in the elections.
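
Público has not published how the final party comparison was scored; as a toy reconstruction, a simple percentage-agreement measure per party might look like this:

```python
# Toy reconstruction of the matching step; Público's actual scoring method
# is not documented, so this simply counts swipe/position agreements.
def match_scores(user_votes: dict[int, bool],
                 party_positions: dict[str, dict[int, bool]]) -> dict[str, float]:
    """user_votes maps policy id -> agree/disagree; party_positions maps
    party name -> its stance on each policy. Returns % agreement per party."""
    scores = {}
    for party, positions in party_positions.items():
        shared = [p for p in user_votes if p in positions]
        agree = sum(user_votes[p] == positions[p] for p in shared)
        scores[party] = 100 * agree / len(shared) if shared else 0.0
    return scores

user = {1: True, 2: False, 3: True}
parties = {"Party A": {1: True, 2: True, 3: True},
           "Party B": {1: False, 2: False, 3: False}}
print(match_scores(user, parties))  # roughly {'Party A': 66.7, 'Party B': 33.3}
```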

Voter research

In a limited number of cases, AI technologies were used to collect and summarise information about voters. This offered political candidates real-time, tailored information about constituents, allowing for more targeted campaigning.

Indonesia saw multiple uses of AI-based research platforms. The most significant example was Pemilu.AI, which according to its developer used GPT-4 and GPT-3.5 and was reportedly sold to over 700 local legislative candidates. The platform scrapes the web, collecting data on common discussion points on social media and online news outlets, parties’ previous electoral performances and local economic indexes.

Using this information, the platform reportedly helps candidates ‘identify issues relevant to […] constituents’ and ‘what demographics [a] candidate should be targeting’ when producing their campaign materials.

Similarly, Indonesian candidate Ganjar Pranowo deployed a dashboard using OpenAI models that ‘crawls for online data to predict talking points and offer real-time social-media alerts’ on his campaign.

Lastly, according to a member of the Pakistan Muslim League, the party used ‘AI-based active social listening, across the entirety of the digital media landscape’, which allowed it to better target voters.

Campaign materials

A fifth theme we traced across our searches was the use of AI to generate campaign materials. These ranged from stylised or cartoon images designed to further specific narratives to deepfakes, clearly labelled as such, delivering political messages.

AI-generated campaign material was a common trend across all the countries we considered, but it was most prominent in Indonesia. Prabowo Subianto used Midjourney to generate cartoon avatars of himself and his vice-presidential running mate. The avatars were presented as more approachable to younger audiences and became a key feature of promotional efforts in media outlets and on social media.

In a more unconventional example, a deepfake showed Indonesia’s ex-dictator Suharto being brought back to life to endorse a candidate in the country’s presidential race. The video was labelled as AI-generated, suggesting it may not have been meant to mislead, but rather to appeal to the legitimacy and authority of historical figures, in the hope that the public would associate them with contemporary politicians. This instance nonetheless raises questions about the ethics and politics of making individuals say things they cannot dispute, even when the content is transparently AI-generated. It is also unclear whether labelling content is sufficient when it can be easily reposted without context.

In South Africa, the Referendum Party posted AI-generated images depicting the country as a ‘failed state’ as part of its campaign for an independence referendum for the Western Cape, positioning itself against the national government led by the African National Congress.

Lastly, in Pakistan, AI was used to allow Imran Khan, the de facto leader of the opposition, to communicate with voters despite being imprisoned and barred from the race. Khan’s team used ElevenLabs’ software to turn his notes, smuggled out of prison, into audio speeches for online rallies.

Open questions

The discourse around the use of AI in elections has focused on AI’s ability to fuel disinformation and create an environment in which trust is eroded. While disinformation is a significant part of the equation, our pilot project shows AI is being deployed in a variety of ways and with different aims. These introduce new, distinct implications for democracy and require scrutiny.

Indeed, each of the thematic categories of use we have outlined is likely to yield a different set of answers to questions such as: what is the impact of AI on electoral campaigns? What constitutes appropriate use of AI, and how can inappropriate uses be countered?

For example, the impact of a debunked deepfake might be different to that of a political chatbot which generates an incorrect answer. The latter may be deemed a more legitimate extension of a political campaign and face less scrutiny, but it might have broader and more long-lasting consequences. Either way, each instance is likely to necessitate different safeguards.

Finally, beyond matters of accuracy of information, the key issue remains how AI is shaping and will shape our political norms, leaving us with broad but essential outstanding questions such as:

  • Does a language model that analyses social media change how politicians engage with people?
  • To what extent can a model effectively replace the need for political representatives to engage with their constituents?
  • When the foundations for these novel modes of engagement are provided by technology companies, what kind of power does that give them over our democratic processes?

Methodological limitations

It is important to acknowledge the methodological limitations of our pilot project.

We primarily searched the internet using English keywords, biasing our findings towards cases covered by English-speaking outlets. This means that we may have both missed incidents covered only by local news outlets and presented findings according to the narratives of international organisations, possibly lacking local perspective. For some elections, we translated our keywords using Google Translate and read the results using the translate feature built into Google Chrome, but this did not yield satisfactory results.

Our reliance on news reports further limited the research, as journalists are likely to favour public-facing uses of AI, such as deepfakes, over more ‘subtle’ applications. Additionally, with only a limited understanding of local news outlets, it was difficult to judge the credibility of some reports. We attempted to strike a balance between international news outlets, which we are more familiar with, and local organisations, which have a better grasp of the local context. Reports also varied in the level of detail they included, meaning that we could not cover examples that were credible but reported on only in a cursory way.

Finally, working with a third-party API has implications for research integrity. Google’s search algorithm is proprietary and we cannot scrutinise it, meaning we had less control over the process than we would have had using a custom web-crawling script.

Any further research will have to address these and other potential limitations to answer questions that our work can only begin to formulate.

Further reading

For readers hoping to stay up to date with uses of AI during elections throughout 2024, WIRED and Rest of World have set up trackers of AI use during elections that allow readers to filter by parameters like country, platform or format.

This blog has focused on identifying trends in the use of AI in elections. If you’d like to read more research into this area, other organisations have produced work which takes different perspectives.