FBI Warns of AI Voice Scam: Smishing & Vishing Campaign Targets US Officials

The FBI has released a public service announcement warning individuals about a growing cyber threat involving audio and text message scams.

In 2025, malicious actors have been impersonating senior U.S. government officials to target individuals, especially current or former senior federal and state officials, as well as their contacts.

The FBI is urging the public to remain vigilant and take steps to protect themselves from these schemes.

So what exactly is happening?

The FBI has disclosed a coordinated campaign involving smishing and vishing—two cyber techniques used to deceive people into revealing sensitive information or giving unauthorized access to their personal accounts.

  • Smishing involves sending malicious text messages (via SMS or MMS) to lure recipients into clicking a fraudulent link or engaging in conversation.

  • Vishing involves malicious voice messages, often enhanced with AI audio designed to sound like trusted figures, including high-ranking officials.

Scammers aim to build trust with their victims before tricking them into revealing personal data or granting access to sensitive accounts. Once access is gained, the attackers can impersonate the victim to deceive others in their network.

The goal is often to harvest personal information, obtain login credentials, or request money or sensitive information while posing as the compromised official.

  • AI-generated voices make it difficult to distinguish between real and fake voicemails.

  • Attackers use publicly available data, such as photos and job titles, to make their messages more convincing.

  • These tactics take advantage of human trust, making even tech-savvy individuals vulnerable.

The FBI warns that the stolen credentials or information may be used to impersonate more officials, spread disinformation, or commit financial fraud.

FBI Shares Common Signs of a Fake Message

The FBI has shared several tips to help the public identify fake messages or voice calls:

  1. Verify the Sender: Do not trust a message or voice note just because it sounds official. Always look up the contact details from a known and trusted source, and verify the identity through a separate channel.

  2. Examine Details Closely: Look at the phone numbers, URLs, spelling, and message format. Scammers often change a single letter or number to make a message look legitimate.

  3. Check for AI Artifacts: In voice or video messages, watch for subtle flaws like distorted features, weird shadows, unusual voice lag, or strange speech patterns. These could be signs of AI-generated content.

  4. Listen for Tone and Language: Even if the voice sounds familiar, pay attention to word choice or phrases that seem out of character. AI-generated voices might mimic tone but often fail to capture personality or speech quirks accurately.

  5. When in Doubt, Reach Out: If something feels suspicious, contact your organization’s security team or the FBI for verification before taking any action.
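Part of tip 2, spotting a phone number or URL where a single character has been swapped, can be roughly automated. The sketch below is a minimal illustration, not an FBI tool; the trusted contact list, function name, and similarity threshold are all invented for the example. It flags strings that closely resemble, but do not exactly match, a known-good entry:

```python
from difflib import SequenceMatcher

# Hypothetical known-good contacts; in practice, pull these from your
# organization's verified directory, not a hardcoded list.
TRUSTED = {"agency.gov", "+1-202-555-0143"}

def spoof_suspect(value, trusted=TRUSTED, threshold=0.85):
    """Return the trusted entry that `value` nearly matches (a common
    typosquatting sign), or None if it matches exactly or not at all."""
    if value in trusted:
        return None  # exact match with a known-good contact
    for known in trusted:
        # Ratio near 1.0 means the strings differ by only a character or two
        if SequenceMatcher(None, value, known).ratio() >= threshold:
            return known
    return None
```

For instance, `spoof_suspect("agencv.gov")` would flag the lookalike domain as resembling `agency.gov`, while the legitimate domain itself passes through clean. This is only a heuristic: it will not catch lookalike Unicode characters or entirely new domains, so it supplements, rather than replaces, verifying through a separate channel.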
