Nefarious audio and video content made to trick voters presents a clear danger to free and fair elections. Deepfake technology has advanced rapidly as computer processing has become faster and cheaper and audio and video editing software has become widely available.
Labour has already been the victim of deepfakes. An appearance by Wes Streeting on Politics Live was manipulated to make it look as if he was being rude about Diane Abbott. Luke Akehurst too has seen video manipulated to smear him. But Labour is not alone: John Swinney, recently elected Scottish First Minister, has been targeted as well.
These advances in technology have already been deployed in the US primary elections: a fake robocall using a spoofed voice of Joe Biden was made to confuse voters about the date of the election, all in order to suppress turnout among a targeted group of voters.
With the UK general election having been called, and the US presidential election coming into view, we need to be more aware than ever of the dangers of deepfakes. Political campaigns cannot stop the advancement of technology; instead they should embrace the new reality of modern campaigns and recognise how dirty tricks have evolved alongside it. Nonetheless, there are steps that every political campaign (and concerned citizen) can take to minimise the chances of being thrown off course or duped by a deepfake.
Spotting a deepfake
Deepfakes are realistic-looking content created without consent. They can employ voices, videos and images to create online content designed to deceive people. They can cause significant harm to individuals, being used for blackmail, harassment, fraud, revenge and other purposes.
As AI advances, the quality of the deepfakes increases. It is not that the technology is inherently bad. Some businesses have recently experimented with using AI to send personalised messages to their staff. Similarly, some politicians have used the technology for light-hearted purposes such as creating online games involving the candidates.
But we have already seen examples of deepfakes being used in elections to try to trick voters. Joe Biden, Keir Starmer and Sadiq Khan have already been victims. Examples are increasing in frequency across the world, from Indonesia to France.
It is not just about video but audio as well. What could be better than a slightly poor-quality ‘illicitly recorded’ phone conversation, or comments from an event where a candidate says something outrageous? The more amateur the sound quality, the more damage it may do.
And with elections across the world, not least the US, EU, and UK, taking place this year, there is a focus on what can be done about the danger. According to a new survey from BCS, The Chartered Institute for IT in the UK, the influence of deepfakes on the UK general election is a concern for most tech experts: 65% of IT professionals polled said they feared AI-generated fakes would affect the result.
But they also think the parties themselves will be involved: 92% of technologists said political parties should agree to publicise when and how they are using AI in their campaigns.
This suggests that they do not entirely trust politicians either…
What can be done
According to another survey, 70% of UK MPs fear deepfakes. There are regulatory and legislative solutions being suggested to deal with deepfakes but here and now there are actions that campaigns can take.
1) Avoid the void – problems arise when there is a space to fill. The more content that a campaign has, the more it can cover a wide range of topics, the less space there is for a deepfake to fill a void.
2) Deal with controversy – rather than failing to have a position on a difficult issue of the day, a campaign needs to tackle it. Again, this prevents a deepfake from being able to exploit an issue where there are firm views but political silence.
3) Consistency of approach – moving around too much on an issue opens space for deepfakes to exploit. The more an announcement looks out of the ordinary, away from the usual, the easier it will be to expose and challenge a deepfake.
4) Establish a dedicated unit for rapid response – all campaigns should have a team responsible for monitoring and correcting false and misleading information, whether it comes from an AI-created deepfake or simply a misquoted statement. The more that responsibility is vague or unattributed, the less coherent and speedy the necessary response will be.
5) Call it out as soon as possible – the dedicated unit needs to have access to the latest detection software and be staffed by a team of experts. Critically, the deepfake needs to be challenged as soon as possible to prevent it from gaining traction. Sometimes, media relations advice will say do not publicise or give airtime to an opponent’s argument, as doing so only raises its profile. But deepfakes are different: they need to be warned against.
6) Cross candidate / party consensus – as much as possible there should be a commonality of approach on deepfakes. All candidates and campaigns have an interest in tackling deepfakes. The more that some think they will gain through their distribution, the more likely they are to have an impact.
7) All candidates should take responsibility – dealing with deepfakes should not be seen as just the responsibility of a central campaign team. Every candidate runs a risk so there needs to be a local as well as a national focus.
8) Inform the media – journalists are aware of their responsibilities when dealing with deepfakes and will welcome knowing when examples are found.
9) Work with social media channels – campaigns should set up discussions with them in advance so that action can be immediate if examples are found. Establishing working protocols will help with speed.
10) Candidates must control their search results, as well as the narrative around fakes and rumours. Remember, today’s deepfakes and smear campaigns are known for dropping false and misleading information late in the campaign cycle, often close to election day. A team needs to think about whether it can get a credible newspaper to run an article setting the record straight. How fast can the campaign issue a statement and post it on its website? Will enough voters even see the response? Campaigns need to be prepared to run search ads to direct curious citizens, as well as contextual ads based on keywords around the deepfake, to warn people to “be aware”.
Facing down deepfakes
The internet has taught us to always run into the fire instead of away from it. These attacks are salacious enough to generate news stories about the tactics themselves, as well as to spread organically by old-fashioned word of mouth. People will search for the gossiped rumour on their phones to learn more.
You must, therefore, think about what the search results look like. Search engine optimisation (SEO) matters for knocking down misleading information. Is there a credible outlet that will be covering the campaign’s late-breaking rumours? If there is not, you may need to create your own campaign website, similar to Snopes, FactCheck.org, PolitiFact, etc.
When no such services existed in Ukraine to quickly dismiss rumours and misleading propaganda, young students created their own website, StopFake.org, which became a transparent hub debunking the flurry of rumours and misinformation with credible, hyperlinked facts. This concept is not new, either. In 2008, Barack Obama’s campaign created FighttheSmears.com, a website to address all of the rumours. This fact-based, credible website was controlled by the campaign and indexed by Google and Yahoo on the first page of search results.
Unfortunately, the spreading of lies has become more advanced, using technology that morphs a candidate’s voice and facial expressions. Deepfakes are the new reality, and their impact could bring major political harm and undermine democracy.
Campaigns must not bury their heads in the sand or pretend this technology does not exist. All of us have a responsibility to take action against false and misleading fake advertisements from nefarious operators trying to cause chaos, sow discontent and confusion, or suppress voters. If that responsibility is not embraced, then we will all suffer the consequences.