Reference Number ACG-CSB 031725384

The following information was obtained from various cyber security sources for notification of all parties concerned, pursuant to the mandate of the Philippine National Police Anti-Cybercrime Group (PNP ACG). It is classified as “Restricted” pursuant to PNP Regulation 200-012 on Document Security, with an Impact Rating of High based on the PNP Information and Communications Technology (ICT) Security Manual s.2010-01, pp. 22 and 129.

SUMMARY

An artificial intelligence (AI) voice scam is a sophisticated fraud in which criminals use AI to replicate human voices, often impersonating trusted individuals or organizations, to deceive victims into revealing sensitive information or sending money. This type of AI-powered fraud, known as voice cloning, highlights a new and dangerous dimension of deepfake technology, one that threatens not only individuals but also businesses. Understanding how these scams operate and learning how to protect yourself are critical steps in mitigating this growing and urgent threat.

AI voice cloning technology has made remarkable advances in recent years and can now create realistic-sounding audio from just a few seconds of sample speech. Although it has many legitimate applications, such as audiobooks and marketing materials, the technology can also be exploited for elaborate scams, fraud, and other harmful purposes.

The implications of voice cloning are far-reaching. Scammers employ various tactics to execute AI voice cloning scams. They may impersonate a loved one, such as a family member or a superior at work, and fabricate urgent situations to coerce victims into sending money immediately. These scams play on emotions, instilling fear and panic in their targets to bypass rational thinking. Imagine receiving a voicemail from your boss instructing you to wire a substantial sum of money for a purported urgent project. Believing it to be genuine, you comply, only to realize later that the message was a cleverly crafted fake.

Scammers can collect a recording of your voice in a couple of ways. One is by using automated tools to crawl the internet for recordings of you. The other is by calling you and recording what you say after you pick up the phone. Even a phrase as short as “Hello? Who is this?” is enough for them to capture your voice and impersonate you. This is one reason you should steer clear of answering calls from unknown numbers.

Once they have a recording of your voice, scammers use an AI voice cloning tool to impersonate you when contacting your friends or family, asking for money or for sensitive information they can use to log in to your online accounts. They typically cause alarm by insinuating that you are in trouble and need money urgently. Because the victim believes they are hearing the voice of a family member in distress, they are more likely to send whatever funds are requested without questioning the situation.

With the abundance of voice recordings posted on social media platforms such as Instagram, TikTok, and X, combined with the rise of generative AI tools, adversaries are wreaking havoc via AI voice scams. These scam calls involve manipulated audio clips and may arrive from both known and unknown numbers.

AI voice scams have already made global headlines, causing both emotional and financial distress to countless victims. These scams rely on a form of generative AI that mimics voices captured from phone interactions.

As technology continues to advance, we must adapt quickly to protect ourselves. AI and its benefits can help businesses and individuals enhance their daily routines; however, those benefits come with corresponding risks. As this technology evolves, we must stay informed and vigilant so as not to fall victim to these schemes.

Voice cloning attacks represent a new frontier in cybercrime. With vigilance and preparedness, it is possible to mitigate the risks and protect yourself and your loved ones. Staying protected against AI-powered cybercrime requires a combination of technological solutions and vigilant practices. By implementing strong security measures and staying informed about emerging threats, we can better protect ourselves in our increasingly digital world.

RECOMMENDATION

The public is advised to follow these tips to avoid falling victim to AI voice cloning scams:

  • Do not pick up phone calls from unknown numbers;
  • Refrain from posting public recordings of yourself speaking on the internet;
  • Do not speak first when answering calls from unknown numbers;
  • If you received a suspicious call that sounded like someone you know, call or text that person at their trusted number to confirm it was actually them; and
  • Be cautious of unsolicited calls requesting personal information and do not transfer money through unconventional methods.

For additional information, please refer to the following websites:

  • https://www.zdnet.com/article/ai-voice-cloning-tools-arent-safe-from-scammers-consumer-reports-finds/
  • https://www.websterfirst.com/blog/ai-voice-cloning-scams/
  • https://www.quora.com/Why-is-AI-voice-cloning-outpacing-the-law

POINT OF CONTACT

Please contact PLTCOL JERRY V EMPIZO, Officer-In-Charge, Cyber Security Unit, through e-mail address csradacgroup@gmail.com or by telephone at (632) 723-0401 local 7488 for any inquiries related to this CYBER SECURITY BULLETIN.
