FBI warns scammers are using AI voices to impersonate US officials

The FBI issued a warning about an ongoing campaign using artificial intelligence (AI) to impersonate senior U.S. officials through voice and text messages. The scheme involves "smishing" (text-based) and "vishing" (voice-based) phishing attacks designed to trick targets into giving up personal information or account access.
According to the FBI's public service announcement, attackers are sending AI-generated voice messages claiming to come from high-ranking government figures. The goal is to build trust with the recipient before asking them to move to another messaging platform, where the attackers can send malicious links or solicit sensitive data.
The warning comes as concerns over AI impersonation grow beyond cyberscams. In 2024, the FCC fined a political consultant $6 million for sending AI-generated robocalls that mimicked President Joe Biden’s voice ahead of the New Hampshire primary.
Who are the targets?
The FBI said the campaign primarily targets current and former U.S. federal and state government officials, as well as their contacts. Once attackers gain access to a victim's information, they can impersonate that person to reach additional officials or acquaintances, expanding the scheme's reach. The bureau warned that contact details obtained this way could also be used to extract money or personally identifiable information from new targets.
Why is this a growing threat?
AI-generated audio has advanced to the point where voice clones are often indistinguishable from real human speech. Malicious actors can create convincing voice messages that mimic public figures with only a few seconds of audio.
The tools used to build those voice clones are more available than ever. A report from venture capital firm Andreessen Horowitz said the global AI voice market reached $5.4 billion in 2024 and continues to grow, driven by consumer adoption and advances in language processing. By 2026, that market is projected to reach nearly $9 billion.
Those tools now power everything from customer service bots to hands-free driving assistants.
How can people protect themselves?
The FBI advised individuals to verify the identity of anyone requesting sensitive information, especially if the request comes via a new phone number or an unfamiliar platform. Officials recommend checking for misspellings in contact details, examining URLs for irregularities, and listening closely for unnatural voice patterns.
If in doubt, the FBI urged people to contact their organization's security office or a local FBI field office. Victims are also encouraged to file reports with the Internet Crime Complaint Center at www.ic3.gov.
“AI-generated content has advanced to the point that it is often difficult to identify,” the FBI said. “When in doubt about the authenticity of someone wishing to communicate with you, contact your relevant security officials or the FBI for help.”