Can we protect elections from artificial intelligence?
Of course we can, but we should focus on elections, not AI.
Yesterday morning we nearly choked on our Weetabix (cold milk, sprinkle of sugar) when Politico’s London Playbook landed in our inbox proclaiming “the government has published a trio of documents saying artificial intelligence will take us to hell in a handcart with sabotaged elections” (that being just one of a laundry list of future AI risks which may or may not come to pass in our lifetimes).
Ahead of next week’s global AI Safety Summit at Bletchley Park, this sort of breathless rhetoric has been par for the course. The best minds of our AI generation have been ordered to Buckinghamshire to wargame “frontier AI risk” - the term the government uses to describe the existential ‘rise of the machines’ scenarios that seem to go hand in hand with the still-vague ‘transformative benefits’ of this new technology.
So (being us) we reviewed these new documents to see what they actually had to say about elections and data. The papers noted that “frontier AI can be misused to deliberately spread false information to create disruption, persuade people on political issues, or cause other forms of harm or damage”, and that “generative AI tools have already been shown capable of persuading humans on political issues and can be used to increase the scale, persuasiveness and frequency of disinformation and misinformation. More generally, generative AI can generate hyper-targeted content with unprecedented scale and sophistication.”
Yes, well, while that’s all *possible*, recent history has shown us you don’t need frontier AI, or indeed any AI, to bring some or all of those harms into play during an election campaign. AI could act as a force multiplier for the creation and dissemination of misinformation and disinformation, the manipulation of public opinion, the targeting of voters with personalised content created by intrusive exploitation of personal data, the hacking of election systems, and the disruption of electoral officials and monitoring organisations. So the threats of generative AI are certainly worth thorough consideration. But to frame these electoral threats as AI issues, which can be met with AI solutions (conveniently provided by the vendors who have taken out stands in the summit exhibition hall), is to miss the point.
Next year will see at least 58 elections in 54 countries - in the US, India and Russia, across Europe, and almost certainly in the UK. Over two billion people will cast their votes. Like the orbiting of some great democratic comet, we won’t see another year with as many elections until 2048.
Each and every one of these elections has the potential to suffer the worst of what we’ve seen in politics and elections in recent years. AI could automate and amplify some of these problems. But it won’t create them from scratch. The disinformation and disruption machines aren’t yet pushing their own buttons - people are, including many electoral campaigns and, as we know all too well, many electoral candidates themselves. We shouldn’t forget that we already have a powerful tool to fight back against this: trying to ensure that campaigns which mislead us pay as heavy an electoral price as possible.
Beyond our own agency, there’s another layer that separates us, and can protect us, from bad actors. Disruptive, undemocratic content can’t travel alone. It needs to be chaperoned through the social networks and platforms we all use if we’re to see it, believe it, and be fooled by it.
If our social networks become flooded with this material will anyone want to use them any more? If your phone constantly rings with scams, is it sensible to answer it? Gmail became successful not because it stopped spam outright, but because it stopped it from hitting your inbox. If misleading generative AI content starts to circulate at scale, social networks are going to have to solve the problem, or they simply won’t function any more. It’s in the interests of platforms to try to stop an AI trickle turning into an AI flood. If they can’t, it’ll cost them their businesses.
We worry that the focus on frontier AI’s impact on democracy distracts us from the real preparatory work necessary to better secure forthcoming elections. Those preparations need to be technical, political, structural and legal. They need to look back to recent lessons learnt in the real world, rather than looking too far ahead. And those preparations need to remember that you don’t need a million AI bots to attempt to overthrow an election: you just need the right moment, and the wrong person.
Many of the ideas we need to ‘protect’ us from marauding AI threats are the same as the ones we’ve been calling for over the last few years - transparency, accountability, trying to avoid excessive targeting, personalisation and information overload. If we implement them now, we will erect some significant barriers against these threats.
We’re already preparing for next year’s run of elections (just like we have for the last six and a half years), backing up ideas about how to ensure the integrity of elections with data showing what politicians and parties actually do. It’s because of that experience that we’re not exactly losing sleep about AI, frontier or otherwise.
But we do worry that in the breathless political environment and hype around AI, our not being too worried about AI makes our argument too… boring to be heard.
And for us that’s a consequence of frontier AI that we will lose some sleep over.
Here’s a reminder that you can track what politicians and parties are actually doing with their online ad campaigns using our Trends tool, which tracks nearly 60,000 political advertisers in over 50 countries.
Thanks for reading. More from us soon.
Team Full Disclosure @ Who Targets Me