Officials piloting software to tackle deepfakes ahead of Scottish and Welsh elections
Election officials in the UK are working with the Home Office to pilot new software designed to detect AI-generated deepfake videos and images ahead of the upcoming Scottish and Welsh elections. The initiative aims to counter the potential misuse of deepfakes to spread misinformation or target candidates during the election campaigns, which begin in late March. The Electoral Commission in Scotland expects the detection tools to be operational before campaigning starts, allowing hoax content to be identified and addressed promptly.
Sarah Mackie, head of the Electoral Commission in Scotland, explained that upon detecting a deepfake, officials would notify the police, the affected candidate, and the public, and would ask social media platforms to remove the misleading content. However, Mackie acknowledged that the software cannot guarantee 100% detection accuracy. Platforms are currently under no legal obligation to take down such content; removal remains voluntary. The commission is therefore urging the UK government to introduce legally enforceable "takedown" powers that would compel platforms to remove hoax material.
While no deepfake incidents have been reported in UK election campaigns to date, their use has increased significantly in elections abroad, driven by the widespread availability of free AI image-generation tools. The UK has faced interference in elections and referendums through fake social media accounts, often state-sponsored, with countries such as Russia, Iran, and North Korea seeking to sow discord and amplify controversy. In addition to combating deepfakes, the Electoral Commission is collaborating with the Scottish Parliament and police on a "safety and confidence" project to support women and minority ethnic candidates who experience abuse or harassment during campaigns. A 2022 study found that about half of women candidates had faced abuse and that many were deterred from standing again, a trend also seen among minority ethnic candidates.
Mackie also raised concerns about the rise of AI-driven pornographic "undressing" technologies, notably those linked to Elon Musk's Grok AI platform. Such content, if used during elections, would be reported to the police. Musk's platforms, including X and Grok, have faced criticism for inadequate removal of fake, pornographic, and harmful content, prompting senior UK politicians to call for intervention by the government and the media regulator Ofcom. There is currently no clear legal framework empowering the Electoral Commission or similar bodies to regulate deepfakes during elections, but the pilot project seeks to explore possible actions and responses.
If successful, the pilot could be expanded to cover all UK elections, marking a proactive step into an area with limited regulation. Mackie described the current regulatory environment as having many rules around the edges but lacking clear guidance on the core issues posed by AI-generated misinformation. The project aims to fill this gap by testing detection tools and sharing insights with relevant stakeholders. The Home Office has been approached for comment on the initiative. The effort reflects growing recognition of the challenges AI technologies pose to democratic processes and the need for updated measures to safeguard election integrity.