Audio Deepfakes: The New Frontier In Election Disinformation

Audio deepfakes are becoming a key tool in election disinformation, with AI technologies enabling realistic voice clones.

Are audio deepfakes threatening elections?

Audio deepfakes are increasingly becoming a tool for spreading disinformation in elections worldwide. Recently, New Hampshire's attorney general announced an investigation into a robocall that used an artificial voice resembling President Joe Biden's, urging voters not to participate in the state's presidential primary. Researchers have identified similar incidents involving synthetic audio in the UK, India, Nigeria, Sudan, Ethiopia, and Slovakia, signaling a growing trend in the use of AI-powered voice-cloning tools.


The Emergence Of Sophisticated AI Voice-Cloning Technologies

The proliferation of advanced AI tools from companies like ElevenLabs, Resemble AI, Respeecher, and Replica Studios has made it easier to create convincing audio deepfakes. Microsoft's VALL-E, capable of cloning a voice from just three seconds of recorded audio, exemplifies this technological advancement. Henry Ajder, an AI and deepfake expert, notes that the public is far less attuned to audio manipulation than to manipulated visual content, making audio a particularly vulnerable medium.

The Impact And Challenges Of Audio Deepfakes

The surge in audio deepfakes poses significant challenges for detecting and regulating such content. High-profile cases like TikTok accounts mimicking news outlets with AI-generated voices and a network uncovered by NewsGuard illustrate the reach of these deepfakes. ElevenLabs, founded by former Google and Palantir employees, offers a range of AI audio generation tools, shifting synthetic audio from robotic to natural tones. The market for text-to-speech tools has expanded, with companies like Voice AI and Replica Studios offering services for various applications. However, the lack of regulation and difficulties in detecting the original source of deepfakes raise concerns about their potential misuse.

White House responds to fake ‘Biden’ robocall
STORY: The White House confirmed on Monday (January 22) that the call was not recorded by Biden, and said the incident highlights the challenges emerging technologies present. "The president has been clear that there are risks associated with deepfakes. Fake images and misinformation can be exacerbated by emerging technologies," White House press secretary Karine Jean-Pierre told reporters in Washington. Support for Biden's write-in campaign will be closely watched amid weak polls for the 81-year-old president, although the results have no bearing on the Democratic Party's nominating contest.

The Need For Effective Detection And Regulation

In response to the escalating use of audio deepfakes, companies and platforms are developing technologies to counter disinformation. Microsoft, ElevenLabs, and Resemble are working on detection tools and ethical guidelines, while cybersecurity firms like McAfee have introduced detection technologies like Project Mockingbird. Online platforms such as Meta and TikTok are also investing in labelling and detection capabilities. Policymakers and experts emphasize the urgency of implementing protective measures to address the growing threat of audio deepfakes in political contexts.
