Spot the deepfake: The AI tools undermining our own eyes and ears – POLITICO Europe

Affordable and accessible generative AI tools have led to a rise in false video, images and audio on social media. Can you spot the difference?
This article is part of a series, Bots and ballots: How artificial intelligence is reshaping elections worldwide, presented by Luminate.
Have you ever seen a deepfake? More importantly, can you spot the difference between these AI-generated images, audio clips and videos and the real thing?
As more than 2 billion voters across 50 countries prepare for national elections in 2024, that question — and the ability of such deepfakes to skew potential voters’ decisions at the polls — has never been more critical. 
Case in point: In recent months, people have increasingly flagged alleged AI-powered deepfake images, audio and videos on X (formerly Twitter), according to a Brookings Institution review of the platform’s so-called community notes, a crowdsourced fact-checking initiative on the platform.
POLITICO decided to put you to the test. 
Using Midjourney — an AI research lab whose technology creates lifelike images from simple text prompts typed into the company’s online platform — POLITICO assembled a series of real images alongside AI-generated ones. Repeated global trials have shown that, on average, people can distinguish digital forgeries from legitimate images about 60 percent of the time, according to tech company officials with whom POLITICO spoke.
While the technology is still a work in progress, the ability of anyone — including POLITICO reporters — to create such realistic images with a few clicks on a keyboard has politicians, policymakers and disinformation experts worried. 
If AI puts such power in the hands of anyone with a laptop, internet connection and $50 to access these powerful tools, such deepfake political content may flood people’s social media feeds in the months ahead.
How well will you do? Take the quiz before you read the rest of the story. (Spoilers below!)

Of the potential deepfake threats this year, cybersecurity and disinformation experts are most worried about audio.
So far, almost all contentious AI-generated images have been debunked within hours, mostly because of the power of social media to quickly crowdsource errors in these photos that are often otherwise imperceptible. Big Tech companies and independent fact-checkers, too, have prioritized finding and removing such harmful politically motivated falsehoods.
But audio — especially the grainy, AI-generated clips that were unsuccessfully used to smear British Labour Party Leader Keir Starmer — remains uncharted territory. The disconnect between what people hear and what they see can often hoodwink individuals into believing that an inflammatory deepfake audio clip is legitimate.
To test that theory, POLITICO used off-the-shelf technologies — costing less than $50 in total to purchase — to see how easy it was to generate a deepfake audio clip. Initially, we were going to clone actual politicians. But as such falsehoods are both legally dubious and represent a direct threat to this year’s election cycle, we decided instead to mimic the voices of POLITICO reporters.
You can judge for yourself whether these AI-generated clips are good enough to fool you. 
The next frontier of AI deepfakes is video — especially content that can interact with humans in real time.
And, when it comes to politically motivated AI-powered photos, a Soviet-era office block near the German-Polish border has become ground zero in demonstrating how that technology is evolving. 
There, researchers from an activist collective known as the Singularity Group have created an ongoing, real-time online video debate between an AI-generated Joe Biden and an AI-generated Donald Trump.
The project, which has been running for almost nine months, uses so-called open-source technology, or AI models freely accessible to the public. It allows anyone to type in a debate question — via the Amazon-owned streaming service Twitch — after which the Biden and Trump bots power up, compute an answer through Singularity’s AI systems, and deliver it while mimicking the politicians’ voices and likenesses.
“Deepfakes are a real concern,” said Reese Leysen, one of the activists behind the project that — importantly — is labeled as a parody on Twitch. “We wanted to focus on politicians to make people take notice.”
POLITICO asked multiple real-world debate questions to the fake Biden and Trump. Most were either too racy or too filled with profanity to publish — not surprising, given that this AI system has been trained on random people asking it questions on the internet for almost a year.
But below are the two least-graphic videos. Is the technology perfect? Definitely not. But it’s a snapshot of where things are headed.
We asked fake Donald Trump and Joe Biden a few real-world debate questions. Here’s how they answered.
Prompt question No. 1: Which Disney character best represents your political opponent, and why?
Prompt question No. 2: If you were to win the November election, how would you resolve the Russia-Ukraine war?
This article is part of a series, Bots and ballots: How artificial intelligence is reshaping elections worldwide, presented by Luminate. The article is produced with full editorial independence by POLITICO reporters and editors. Learn more about editorial content presented by outside advertisers.

