
Inside the $25M Hong Kong Deepfake Scam: A Comprehensive Analysis


In February 2024, a Hong Kong-based multinational firm fell victim to a sophisticated deepfake scam, losing $25 million USD.

The attackers employed AI technology to create convincing video deepfakes of the company’s CFO and other executives, which they used during a video conference to deceive an employee into authorizing large fund transfers.

Drawing on nsKnox’s understanding of how deepfake payment fraud attacks are typically executed, here is our perspective on how the $25 million Hong Kong deepfake scam likely transpired, based on the steps outlined in an interview with nsKnox’s CISO, Yaron Libman (also featured in this newsletter).

1. Targeting and Reconnaissance:

  • Information Gathering: The attackers likely conducted extensive research on the target organization to identify the key decision-makers, particularly those involved in financial approvals, such as the CFO.
  • Data Collection: Harvesting audio and video footage of the company’s CFO and other executives. This could have been sourced from public interviews, webinars, or social media platforms. Such material was critical for training AI models to convincingly replicate their speech patterns and facial expressions.
  • Identifying Vulnerabilities: Through phishing attacks, insider knowledge, or social engineering tactics, the attackers gained information on internal workflows, payment protocols, and approval authority chains.

This reconnaissance gave the attackers the data needed to build convincing deepfake models and craft a legitimate-appearing scenario.

2. Deepfake Creation:

With sufficient raw data collected, the attackers most likely employed advanced deepfake AI technology to:
  • Develop Hyper-Realistic Models: Using state-of-the-art algorithms that are widely available today, they replicated the CFO’s facial features, voice, and micro-expressions, ensuring the deepfake could perform convincingly in real-time video interactions.
  • Enhance Real-Time Adaptability: The deepfake system was probably fine-tuned to respond fluidly during live communication, seamlessly simulating the CFO’s responses to eliminate suspicion.
  • Authenticate the Setup: The attackers likely mirrored legitimate internal video conferencing systems or spoofed official meeting links, further reducing suspicion.

3. The Attack Execution:

The attackers initiated a video call, using the deepfake CFO to issue instructions to a targeted employee who held financial authority.

  • Manipulating Trust: During the call, the deepfake executives instructed the employee to transfer funds to specified bank accounts, framing the matter as strictly confidential.
  • Creating Urgency: The deepfake ‘CFO’ likely framed the request as an urgent, high-stakes transaction, leveraging their perceived authority to discourage hesitation or secondary verification.
  • Exploiting Trust: The employee, trusting the visual and verbal cues of the supposed CFO, authorized the transfer of $25 million to the fraudulent accounts provided during the call.

This stage relied on exploiting technological and psychological vulnerabilities, such as employees’ inherent trust in senior leadership and reluctance to challenge high-level directives.

4. Completion and Concealment

Once the funds were transferred, the attackers likely employed a series of steps to obscure their tracks:
  • Delayed Realization: The fraud was discovered only after the funds had been transferred and subsequent communications raised suspicions, leaving little room for recourse; by then, the funds had already been laundered and were effectively irretrievable.
  • Layered Money Laundering: The funds were likely dispersed through multiple mule accounts and shell companies across numerous jurisdictions, making them difficult to trace or recover.
  • Operational Anonymity: The attackers likely operated through anonymized networks, leaving minimal digital footprints that could tie them to the crime.
  • Investigation: The subsequent investigation revealed that the video conference had been manipulated using deepfake technology, leading to the unauthorized transfer.

How Can Companies Prevent Such Attacks?

In today’s era of synthetic reality, the increasing prevalence of deepfake technology has rendered voice and video verification unreliable, making traditional approaches inadequate. Corporations need a technology-driven solution that can securely and accurately validate bank account details without phone or video callbacks. By adopting a deterministic approach to verifying payee bank account information, businesses can eliminate the need for fraud-prone phone calls and video conferencing.

PaymentKnox™ for Corporates by nsKnox is a comprehensive payment validation platform designed to address the complexities of modern financial fraud. It provides deterministic account validation by cross-referencing transaction details against verified databases and can validate any account anywhere in the world using bank KYC data.

This ensures that payments are routed only to legitimate, pre-approved recipients, significantly reducing the risk of fraud.
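
To make the deterministic principle concrete, here is a minimal Python sketch. Everything in it (the registry, names, and data) is a hypothetical illustration, not nsKnox’s proprietary implementation:

```python
# Minimal sketch of deterministic payee account validation. The registry,
# names, and data are hypothetical; PaymentKnox's internal mechanism is
# proprietary. The principle: a payment either exactly matches a record
# verified in advance (e.g., against bank KYC data) or it is blocked.
# No phone or video callback enters the decision.

VERIFIED_REGISTRY = {
    # (payee_id, bank_code, account_number), each verified out-of-band
    ("ACME-SUPPLIES", "HSBC-HK", "123-456789-001"),
}

def validate_payment(payee_id: str, bank_code: str, account_number: str) -> bool:
    """Deterministic check: exact match against pre-verified records."""
    return (payee_id, bank_code, account_number) in VERIFIED_REGISTRY

# A transfer instruction received during a (possibly deepfaked) call
# fails closed if the destination account was never pre-verified:
if not validate_payment("ACME-SUPPLIES", "HSBC-HK", "999-000000-001"):
    print("BLOCK: account not pre-verified; hold transfer and escalate")
```

The key design property is that the check is binary and data-driven: no human judgment about a caller’s face or voice can override it.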

While video conferencing has become a regular part of our daily workplace routines, it’s important to be aware of potential risks.

Below are steps to help your teams protect themselves from external threats during calls and video conferences. However, it’s crucial to stress that these precautions alone are not sufficient when it comes to transferring funds:

  • Employee Training: Regularly educate employees on the risks of deepfake threats and give them the tools and training to recognize red flags of fraudulent communications. In particular, train them to:
    • Question unusual requests and verify them through alternative channels, even when they appear to come from senior executives.
    • Double-check any instructions that bypass standard protocols, such as changes in payment processes, high-pressure demands, or deviations from typical communication channels.
    • Confirm sensitive instructions through established independent channels, even if the request appears urgent (a simplified sketch of such an out-of-band check follows this list).

  • Voice and Video Verification Systems: Leverage AI-driven tools designed to detect deepfake anomalies. Platforms like Sensity and Microsoft Video Authenticator can identify issues such as unnatural lip-syncing, irregular speech patterns, audio-visual inconsistencies, discrepancies in facial light reflections, or other subtle details indicative of deepfake content (a toy illustration of one such cue also appears below).
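
As a simplified illustration of the out-of-band confirmation rule described above, the following Python sketch flags high-risk payment instructions for independent re-confirmation. The flag names and request structure are hypothetical, and a real workflow would integrate with approval and ticketing systems rather than these placeholders:

```python
# Toy policy check: any red flag forces re-confirmation over a separate,
# pre-established channel (e.g., a phone number already on file), never
# via contact details supplied in the suspicious request itself.
# Flag names and the request format are illustrative assumptions.

HIGH_RISK_FLAGS = {"bypasses_standard_protocol", "urgent_pressure", "new_bank_details"}

def requires_independent_confirmation(request: dict) -> bool:
    """Return True if the instruction must be confirmed out-of-band."""
    return bool(HIGH_RISK_FLAGS & set(request.get("flags", [])))

# An instruction resembling the Hong Kong case: urgent, confidential,
# and pointing at bank accounts never used before.
request = {
    "claimed_sender": "CFO (video call)",
    "amount_usd": 25_000_000,
    "flags": ["urgent_pressure", "new_bank_details"],
}

if requires_independent_confirmation(request):
    print("HOLD: re-confirm via a pre-established independent channel")
```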
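
And as a toy illustration of the audio-visual consistency cue mentioned in the last bullet, the sketch below correlates lip motion with speech energy. The input series are assumed to be pre-extracted, and real detectors combine many learned signals rather than a single raw correlation:

```python
import numpy as np

# Illustration only: one cue detectors use is audio-visual sync.
# We assume two pre-extracted, frame-aligned series (hypothetical inputs):
#   mouth_open[t]   : mouth-opening estimate from facial landmarks
#   audio_energy[t] : speech energy for the same frames
# Commercial tools combine many such cues with trained models; a raw
# correlation like this would be far too crude on its own.

def av_sync_score(mouth_open: np.ndarray, audio_energy: np.ndarray) -> float:
    """Pearson correlation between lip motion and speech energy.
    Genuine footage tends to correlate strongly; poorly synced
    deepfakes often do not."""
    m = (mouth_open - mouth_open.mean()) / (mouth_open.std() + 1e-8)
    a = (audio_energy - audio_energy.mean()) / (audio_energy.std() + 1e-8)
    return float(np.mean(m * a))

# Random stand-in data; in practice these come from the call recording.
score = av_sync_score(np.random.rand(300), np.random.rand(300))
if score < 0.3:  # illustrative threshold, not calibrated
    print(f"Low AV-sync score ({score:.2f}): flag the call for review")
```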

Conclusion

The $25 million deepfake scam in Hong Kong highlights how advanced AI-driven fraud can manipulate trust and authority within financial operations.

As deepfake technology continues to evolve, voice and video verification can no longer be relied upon on their own, and traditional callback-based methods are insufficient. To address this, companies must adopt technology-based solutions that verify bank account details without depending on phone or video callbacks.

Additionally, measures such as employee training and the implementation of voice and video authentication systems can help mitigate vulnerabilities associated with deepfake AI.


Let’s talk to see if our solution meets your needs

It’s powerful and easy, with no effort from IT and no changes to current processes required.