Audio Deepfakes: The New Frontier In Election Disinformation
Audio deepfakes are becoming a key tool in election disinformation, with AI technologies enabling realistic voice clones.
Audio deepfakes are increasingly becoming a tool for spreading disinformation in elections worldwide. Recently, New Hampshire's attorney general announced an investigation into a robocall that used an artificial voice resembling President Joe Biden's to urge voters not to participate in the state's presidential primary. Researchers have identified similar incidents involving synthetic audio in the UK, India, Nigeria, Sudan, Ethiopia, and Slovakia, signalling a growing trend in the use of AI-powered voice-cloning tools.
The Emergence Of Sophisticated AI Voice-Cloning Technologies
The proliferation of advanced AI tools from companies such as ElevenLabs, Resemble AI, Respeecher, and Replica Studios has made it easier than ever to create convincing audio deepfakes. Microsoft's VALL-E, capable of cloning a voice from just three seconds of recorded audio, exemplifies this technological advance. Henry Ajder, an AI and deepfake expert, argues that the public is more vulnerable to audio manipulation than to manipulated visual content, because awareness of audio fakery is far lower.
The Impact And Challenges Of Audio Deepfakes
The surge in audio deepfakes poses significant challenges for detecting and regulating such content. High-profile cases, such as TikTok accounts mimicking news outlets with AI-generated voices and a network uncovered by NewsGuard, illustrate the reach of these deepfakes. ElevenLabs, founded by former Google and Palantir employees, offers a range of AI audio-generation tools that have shifted synthetic audio from robotic monotones to natural-sounding voices. The market for text-to-speech tools has expanded, with companies such as Voice AI and Replica Studios offering services for a variety of applications. However, the lack of regulation, and the difficulty of tracing a deepfake back to its original source, raise concerns about potential misuse.
The Need For Effective Detection And Regulation
In response to the escalating use of audio deepfakes, companies and platforms are developing technologies to counter disinformation. Microsoft, ElevenLabs, and Resemble are working on detection tools and ethical guidelines, while cybersecurity firms such as McAfee have introduced detection technologies like Project Mockingbird. Online platforms including Meta and TikTok are also investing in labelling and detection capabilities. Policymakers and experts stress the urgency of putting protective measures in place to address the growing threat of audio deepfakes in political contexts.