The rise of real-time deepfakes presents a new and transformative threat in fraud. A deepfake is video or audio that has been synthetically generated or altered, typically with AI, to convincingly impersonate someone else, and it is often used maliciously or to spread false information. With recent advances in the technology, criminals are no longer limited to pre-recorded manipulations. They can now deploy deepfakes in live interactions, creating a seamless illusion that challenges financial institutions’ security protocols.
Real-time deepfakes can mimic voices and appearances, enabling fraudsters to deceive during live interactions. This shift from static to dynamic deception poses heightened risks for industries that rely on remote verification, such as banking and financial services. Originally seen as a novelty primarily used for entertainment and disinformation, deepfakes have now evolved. Real-time deepfakes—where live audio or video is manipulated—are creating new opportunities for financial crime.
Organized Crime and Deepfake Technology: A Global Threat
Organized crime has integrated AI-driven deepfakes into its operations. The UNODC warns of a global trend in cyber-enabled fraud where criminals use deepfakes for money laundering, disinformation, and even trafficking (1). These agile networks exploit weaknesses in the financial system. Deepfakes are especially dangerous because they bypass traditional safeguards like facial and voice recognition. Criminals create realistic fake personas that undermine security protocols and erode trust in financial institutions.
UNODC reports that deepfake-related fraud on criminal forums increased by 600% between February and July 2024, as more criminals turned to AI tools to enhance their schemes (1). This dramatic surge indicates that the use of deepfakes in fraud is not just a passing trend—it’s becoming a fixture of organized crime.
Organized crime syndicates have quickly adopted this technology, turning face-swapping from harmless fun into a sophisticated fraud tool. Globally, from Southeast Asia to Europe, criminals are deploying deepfake videos, images, and voices to deceive individuals and organizations, causing tens of millions of dollars in losses. As deepfake technology advances, the risks continue to escalate.
Regional Focus: Deepfake Adoption in Southeast Asia
While deepfake-enabled fraud is a growing global issue, Southeast Asia has emerged as a particularly active hub for these activities. The region's expanding digital economy and regulatory gaps have made it fertile ground for organized crime syndicates to exploit deepfake technology on an international scale. The UNODC reported a 1,530% rise in deepfake-related fraud across the Asia-Pacific in 2022-2023, as syndicates use AI to bypass identity verification systems. For example, in October 2024, Hong Kong police arrested 27 members of a syndicate that defrauded victims of $46 million using real-time deepfakes in romance scams (1,2).
But the Hong Kong case is just one instance in a larger trend. Deepfakes are now integral to organized crime, where cyber-enabled fraud is rapidly becoming the weapon of choice.
High-Profile Cases Illustrating Deepfake Fraud
Several high-profile cases reveal that even sophisticated companies are vulnerable to deepfake fraud. At a British engineering firm's Hong Kong office, an employee was deceived into approving a $25 million transaction during a video call with a deepfake impersonating the CFO (3). This incident showcases the dangerous potential of AI manipulation, where fraudsters exploit both technology and human trust. It underscores how inadequate traditional security measures, such as video or voice verification, have become in the face of these advanced threats.
Palo Alto Networks investigated a campaign promoting the investment scam "Quantum AI" and uncovered several other deepfake scams by the same threat group, each targeting different audiences using public figures like Elon Musk, Tucker Carlson, and Singaporean leaders. These scams often start with legitimate videos modified using AI-generated audio and lip-syncing technology. Victims are lured through social media ads or fake news to scam sites, asked to invest around $250, and directed to use a fake app showing small profits to build trust. Scammers then persuade them to deposit more money, but when they try to withdraw, they're blocked with excuses like withdrawal fees or tax issues, ultimately losing most of their funds (4).
Deepfakes are enhancing old scams like grandparent fraud, where criminals impersonate loved ones in distress. AI-generated voices have made these schemes more convincing, as seen in a Florida case where an 86-year-old woman lost $16,000 to a caller posing as her grandson needing bail money. AI voice cloning allows fraudsters to replicate human voices with frightening accuracy, making it harder for victims to differentiate real from fake. This technology has turned an age-old scam into a highly effective form of financial exploitation, especially against the elderly (5).
Weaknesses in Financial Institutions’ Defenses
These high-profile cases not only demonstrate the audacity of criminals leveraging deepfakes but also expose critical vulnerabilities in the systems that institutions rely on to safeguard assets. The increasing sophistication of AI-driven fraud underscores the weaknesses in biometric security measures, particularly in the banking sector (6).
AI voice cloning has made banks that use voice recognition for verification particularly vulnerable: criminals can clone a customer's voice to defeat authentication and gain account access. This reliance on a single biometric presents a critical challenge (7).
A recent data breach in Australia, where over a million records including facial recognition data were stolen, underscores the urgent need to rethink security protocols to counter AI-driven fraud (8).
Mitigating Deepfake Fraud
To combat real-time deepfakes, various AI-driven detection systems and approaches have been developed to complicate fraud attempts. Some of the most common solutions include:
Detection Algorithms: AI tools detect deepfakes by analyzing inconsistencies in voice modulation, facial movements, and pixel-level anomalies. These tools are integrated into video conferencing and phone systems to flag suspicious activity.
Liveness Detection: Verifies a live human presence by identifying subtle micro-movements and real-time interaction cues that deepfakes struggle to replicate.
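The liveness idea above can be sketched as a challenge-response protocol: because the prompts are chosen at random at session time, a pre-recorded or pre-rendered deepfake cannot anticipate which actions to display. The sketch below is a minimal illustration, assuming a hypothetical vision pipeline (not shown) that reports which action the user performed and how long the response took; the action labels and latency thresholds are illustrative, not a production design.

```python
import secrets

# Action prompts a verification system might ask a live user to perform.
CHALLENGES = ["blink", "turn_left", "turn_right", "nod", "smile"]

def issue_challenge(length: int = 3) -> list[str]:
    """Pick an unpredictable action sequence so a pre-recorded or
    pre-rendered deepfake cannot anticipate what to show."""
    rng = secrets.SystemRandom()
    return [rng.choice(CHALLENGES) for _ in range(length)]

def verify_liveness(challenge, responses, min_latency=0.2, max_latency=5.0):
    """Each response is (observed_action, latency_seconds), as reported by
    a hypothetical vision pipeline. Reject wrong actions, implausibly fast
    replies (suggesting automation or replay), and timeouts."""
    if len(responses) != len(challenge):
        return False
    return all(
        action == expected and min_latency <= latency <= max_latency
        for expected, (action, latency) in zip(challenge, responses)
    )

challenge = issue_challenge()
live_user = [(action, 1.2) for action in challenge]   # plausible human timing
replayed = [(action, 0.01) for action in challenge]   # too fast to be human
print(verify_liveness(challenge, live_user))   # True
print(verify_liveness(challenge, replayed))    # False
```

Real deployments combine such challenges with passive signals (texture, depth, micro-movements), since a sufficiently fast real-time deepfake rig can attempt to follow prompts.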
While these solutions show promise, they must be continually improved to keep pace with evolving threats. A collaborative, industry-wide approach is also essential to effectively combat deepfake fraud (9).
Collaborative Solutions to Combat Deepfake Fraud
Though the threat of deepfakes is daunting, there are solutions that financial institutions and governments can adopt to mitigate the risks. However, the solutions are not simple, and the responsibility is spread across multiple stakeholders.
Legislation and Policies: Governments around the world are beginning to wake up to the threat of deepfakes. Countries like the United States and platforms like Meta and YouTube have started enacting policies to regulate synthetic media and manipulated content. However, legislation is slow-moving, and deepfake technology is evolving faster than the laws designed to curb its misuse. Social media platforms must intensify their efforts to eliminate deceptive deepfakes from advertisements and other content.
AI Detection Tools: Financial institutions have been investing in AI tools to detect deepfakes in real time. These tools analyze digital content for signs of manipulation and must continually evolve to keep pace with advances in generative techniques. Banks and businesses must pair these tools with human oversight to ensure deepfakes don’t slip through the cracks.
Proving Authenticity: Cryptography, digital forensics, and blockchain technology can play a role in ensuring that digital content is authentic and verifiable. Financial institutions need to adopt these technologies on a wider scale to protect their operations from deepfake-driven fraud.
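The cryptographic side of this point can be illustrated with a toy example: an institution tags official media with an authentication code derived from a secret key, so any altered copy fails verification. This is a minimal sketch using an HMAC to stay self-contained; real provenance systems would use asymmetric signatures and standards such as C2PA, and the key handling here is purely illustrative.

```python
import hashlib
import hmac
import secrets

# Placeholder key material; in practice this would live in an HSM or
# key-management service, and verification would use a public key.
SIGNING_KEY = secrets.token_bytes(32)

def sign_content(content: bytes) -> str:
    """Produce an authentication tag over the raw media bytes."""
    return hmac.new(SIGNING_KEY, content, hashlib.sha256).hexdigest()

def verify_content(content: bytes, tag: str) -> bool:
    """Recompute the tag and compare in constant time; any tampering
    with the bytes changes the tag and fails verification."""
    return hmac.compare_digest(sign_content(content), tag)

video = b"frame-data-of-official-announcement"
tag = sign_content(video)
print(verify_content(video, tag))                  # True: authentic copy
print(verify_content(video + b"tampered", tag))    # False: altered copy
```

The design choice worth noting is that authenticity is attached to the content itself rather than its delivery channel, so a deepfake circulated over email or social media simply lacks a valid tag.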
Public Awareness: Raising awareness among customers is crucial. Banks must engage in frequent communication, educating their customers about the risks posed by deepfakes and providing guidelines on how to verify suspicious calls, emails, or requests for information. Push notifications, webinars, and campaigns are all ways banks can foster greater vigilance among their clientele.
Collaboration between Financial Institutions: Banks must work together, sharing insights and strategies to combat the growing threat of deepfakes. Since fraud spans industries and borders, a collaborative approach will help ensure that no institution is left fighting this battle alone (10, 11).
Only by implementing a multifaceted, collaborative approach can stakeholders hope to contain the growing threat posed by deepfakes.
A Unified Approach to Combat AI-Driven Fraud
AI and deepfake technology are reshaping financial fraud, pushing governments, institutions, and consumers to adopt proactive security measures. With Deloitte estimating that deepfake-related fraud could cost up to $40 billion by 2027 (10), collaboration across industries is essential to counter these evolving threats and regain the upper hand. The race to outpace AI-driven fraud is far from over, and the future of financial security lies in staying one step ahead.
Reference Articles:
1. Transnational Organized Crime and the Convergence of Cyber-Enabled Fraud, Underground Banking and Technological Innovation in Southeast Asia: A Shifting Threat Landscape. United Nations Office on Drugs and Crime, October 2024. https://www.unodc.org/roseap/uploads/documents/Publications/2024/TOC_Convergence_Report_2024.pdf
2. “Hong Kong Police Bust Fraud Ring That Used Face-Swapping Tech for Romance Scams.” The Record, October 15, 2024. https://therecord.media/hong-kong-police-bust-romance-scammers-face-swapping-deepfakes?web_view=true
3. “Finance Worker Pays Out $25 Million After Video Call with Deepfake ‘Chief Financial Officer’.” CNN, February 4, 2024. https://www.cnn.com/2024/02/04/asia/deepfake-cfo-scam-hong-kong-intl-hnk/index.html
4. Shehroze Farooqi, Lucas Hu, Nabeel Mohamed, Billy Melicher, Alex Starov, Howard Tong, and Kyle Wilhoit. “The Emerging Dynamics of Deepfake Scam Campaigns on the Web.” Unit 42 (blog), Palo Alto Networks, August 29, 2024. https://unit42.paloaltonetworks.com/dynamics-of-deepfake-scams/
5. “‘Wiped Out Her Savings!’ West Palm Beach Resident Loses $16K in Scam.” CW34, November 21, 2022. https://cw34.com/news/local/grandparents-scam-west-palm-beach-elderly-state-attorney-dave-aronberg-eric-lieberman-november-21-2022
6. “Challenges in Voice Biometrics: Vulnerabilities in the Age of Deepfakes.” ABA Banking Journal (blog), February 15, 2024. https://bankingjournal.aba.com/2024/02/challenges-in-voice-biometrics-vulnerabilities-in-the-age-of-deepfakes/
7. “How Deepfakes Threaten Remote Identity Verification Systems.” iProov, January 11, 2024. https://www.iproov.com/blog/deepfakes-threaten-remote-identity-verification-systems
8. “A Face Recognition Firm That Scans Faces for Bars Got Hacked—and That’s Just the Start.” Wired, May 2, 2024. https://www.wired.com/story/outabox-facial-recognition-breach
9. Innov8tif. “How Does Liveness Detection Counter Deepfake Attacks?” https://innov8tif.com/how-does-liveness-detection-detect-and-prevent-deepfake-attacks/
10. “Generative AI Is Expected to Magnify the Risk of Deepfakes and Other Fraud in Banking.” Deloitte Center for Financial Services, May 29, 2024. https://www2.deloitte.com/us/en/insights/industry/financial-services/financial-services-industry-predictions/2024/deepfake-banking-fraud-risk-on-the-rise.html
11. Increasing Threat of Deepfake Identities. U.S. Department of Homeland Security. https://www.dhs.gov/sites/default/files/publications/increasing_threats_of_deepfake_identities_0.pdf
Meet The Author: Kashif Ghani
Follow Kashif on LinkedIn at: https://www.linkedin.com/in/kashifghani/