Generative AI has frequently been misused, from fabricating academic papers to mimicking artists. Now, it appears to have played a role in state-backed influence operations.
A recent report from Massachusetts-based threat intelligence firm Recorded Future highlights a Russian-linked campaign dubbed “Operation Undercut,” which allegedly utilised commercial AI voice generation technology, including tools developed by ElevenLabs, a fast-rising AI startup. The operation aimed to erode European support for Ukraine by distributing fake or misleading “news” videos featuring AI-generated voiceovers.
Undermining Ukraine Through AI
These videos, crafted to sway European audiences, accused Ukrainian politicians of corruption and questioned the value of military aid to Ukraine. One such video claimed that “even jammers can’t save American Abrams tanks,” suggesting that advanced US military equipment was ineffective, thereby casting doubt on the utility of aiding Ukraine’s defence efforts.
Recorded Future’s researchers assessed that the videos “very likely” used AI-generated voiceovers and identified ElevenLabs’ technology as a key component. Their analysis involved submitting clips to ElevenLabs’ AI Speech Classifier, a tool that detects whether a piece of audio was generated with the company’s software; the classifier indicated that the clips were likely created with ElevenLabs’ tools. Despite being singled out, ElevenLabs did not respond to requests for comment, and Recorded Future’s report did not name any other AI voice tools that may have been used in the campaign.
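Recorded Future did not publish its exact workflow, but the general shape of such an analysis is simple: upload each suspect clip to a provenance classifier and record the verdict. The Python sketch below illustrates that pattern; the endpoint URL, request format, and response fields are hypothetical placeholders, since the report does not describe a programmatic interface to the classifier.

```python
# Sketch of batch-checking suspect audio against a provenance classifier
# such as ElevenLabs' AI Speech Classifier.
# NOTE: CLASSIFIER_URL, the request shape, and the response fields are
# hypothetical placeholders, not a documented ElevenLabs API.
import pathlib
import requests

CLASSIFIER_URL = "https://example.invalid/ai-speech-classifier"  # hypothetical

def classify_clip(path: pathlib.Path) -> dict:
    """Upload one audio clip and return the classifier's JSON verdict."""
    with path.open("rb") as audio:
        response = requests.post(CLASSIFIER_URL, files={"file": audio}, timeout=30)
    response.raise_for_status()
    return response.json()  # e.g. {"label": "likely_generated", "probability": 0.97}

if __name__ == "__main__":
    for clip in sorted(pathlib.Path("clips").glob("*.mp3")):
        verdict = classify_clip(clip)
        print(f"{clip.name}: {verdict}")
```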
AI Enhancing Credibility
The campaign’s creators inadvertently demonstrated the advantages of AI-generated voices. While some videos featured human voiceovers with noticeable Russian accents, the AI-generated versions were fluent in multiple European languages—including English, French, German, and Polish—and exhibited no discernible foreign accents. This lent a veneer of authenticity to the misleading content.
AI also enabled rapid localisation, allowing the same videos to be pushed out quickly in English, German, French, Polish, and Turkish. ElevenLabs’ platform supports all of these languages, underscoring the potential for such tools to amplify state-backed influence campaigns.
Ties to a Sanctioned Organisation
The report attributed “Operation Undercut” to the Social Design Agency, a Russia-based group sanctioned by the US government earlier this year. According to the US State Department, the agency operated over 60 fake news websites in Europe and used bogus social media accounts to promote its content, all on behalf of the Russian government.
Despite the sophisticated tactics, Recorded Future concluded that the campaign had a minimal impact on public opinion in Europe.
A History of Misuse
This is not the first time ElevenLabs’ technology has been implicated in controversial activities. In January 2024, its voice synthesis tools were reportedly used in a robocall that impersonated US President Joe Biden and discouraged New Hampshire voters from taking part in the state’s Democratic primary. Following the incident, ElevenLabs introduced new safeguards, including the automatic blocking of political figures’ voices.
The company prohibits “unauthorised, harmful, or deceptive impersonation” and claims to enforce these rules through automated systems and human moderation. However, concerns about misuse persist as the technology’s accessibility and capabilities continue to grow.
Rapid Growth Amid Controversy
Founded in 2022, ElevenLabs has seen meteoric growth, increasing its annual recurring revenue (ARR) from $25 million to $80 million in under a year. The company is reportedly nearing a $3 billion valuation, with backing from prominent investors such as Andreessen Horowitz and former GitHub CEO Nat Friedman.
While its achievements are impressive, the company’s tools are increasingly under scrutiny, raising questions about the ethical responsibilities of generative AI developers in curbing misuse. The revelations surrounding “Operation Undercut” underscore the urgent need for robust safeguards as AI voice technology becomes more sophisticated and pervasive.