
Digital Deception: Unmasking Deepfakes in Cybercrime

Deepfakes are a growing threat for businesses, with blackmail and extortion being used to control employees, information, and transactions. As technology advances, spotting fake content becomes ever more difficult, leaving organizations vulnerable to AI-enabled scams.



Key Points

+ The use of deepfakes in criminal activities, such as producing fake explicit content to extract sensitive information or large sums of money, is on the rise, and victims often endure prolonged manipulation due to fear of retaliation.

+ Deepfake scammers utilize a variety of methods to conduct their illegal activities, posing a massive threat to the public and private sectors, which are often ill-prepared to understand or counter these attacks.

+ Businesses are also targeted by deepfake extortion, with high-profile cases across the world highlighting the financial and reputational consequences. The COVID-19 pandemic has provided scammers with opportunities to exploit vulnerabilities in remote communication and transactions.

+ Proactive measures, such as mandatory training to promote cyber threat awareness among staff and recognition of the "uncanny valley" effect, are essential to protect against deepfake-driven extortion. Incorporating a variety of measures to prevent scams is crucial for businesses to minimize the threat of blackmail or extortion to not only themselves, but also their employees and customers.

Unmasking a New Villain

In the ever-evolving landscape of cybersecurity, a sinister innovation has emerged that threatens businesses with a new breed of cybercrime. Deepfakes, once known primarily for their capacity to deceive on a personal level, have transformed into potent weapons of extortion and scamming, posing grave risks to the public and private sectors. As of 2023, deepfakes remain largely unregulated, owing partly to the speed of their evolution and partly to a limited understanding of how they can affect or harm a variety of industries.

For the past several years, deepfakes have remained relatively benign and have even fulfilled many positive roles. Victims of crime, and others who wish to remain anonymous, such as whistleblowers, have been able to use generative adversarial network (GAN) technology to mask their identities. Movies have leveraged deepfake software to digitally de-age actors or complete filming when a cast member dies. Perhaps the most interesting use has come from investigators digitally "bringing murder victims back to life," using their faces and voices to inspire the public to help solve cases that have gone cold.

Exploitation, the Personal & the Political

As with any invention, deepfakes have been used for malicious purposes. One of the most alarming applications of criminal deepfake exploitation is the production and dissemination of fake, explicit content made with the intent to force an individual to submit to demands, including surrendering sensitive information or personal or corporate funds. This manipulation can take a variety of forms, be it voice impersonation, a fake pre-recorded video of an influential individual, or even a live video using a hyper-realistic, AI-generated filter of another person's face and body. Regardless of the method, the deception and subsequent blackmail are specifically designed to erode the victim's resistance and instill a paralyzing fear of exposure. As with many instances of blackmail, the relationship between the perpetrator and victim may continue over weeks, months, or even years due to fear of punishment or retaliation if demands are unmet.


Beyond individual extortion, the reach of deepfakes extends into the realm of political subversion. High-profile politicians, including figures like Vladimir Putin, Volodymyr Zelenskyy, Kim Jong-un, Donald Trump, and Joe Biden, are popular targets of deepfake impersonations, facilitating the dissemination of disinformation and sowing confusion and dubiety across entire populations. During critical moments such as election cycles, deepfakes generate profound tension and mistrust within the public sphere that leave citizens uncertain about the veracity of political statements, forcing politicians to defend or attempt to disprove claims falsely attributed to them.

Moreover, the potential for deepfakes to incite violence and escalate international conflicts poses a significant threat to national security. In times of geopolitical tension, fabricated videos or speeches threatening violence can inflame hostilities, potentially prompting hasty and unwarranted responses. While governments implement stringent precautions at the national level, the 21st century is marked by the unique peril of fake media triggering domestic or international crises. As deepfakes continue to evolve and spread, addressing the multifaceted challenges they pose to individuals, political stability, and global security remains a paramount concern for the digital age.

Global Extortion

In addition to blackmail, businesses are also suffering the effects of extortion via deepfake. A recent incident in Hong Kong stands out as a harrowing example of the devastating consequences of deepfake-driven deception. In early 2020, a company manager believed he was speaking to his director on the phone when he was ordered to release $35 million to settle a transaction; the "person" on the phone was not a person at all. Beyond its massive financial implications, this kind of extortion brings public embarrassment and reputational damage to organizations that fall victim to such schemes. Successfully extorted entities often suffer a loss of face, struggling to regain trust and credibility.

A similar case occurred a year prior in England, wherein a scammer used deepfake software to impersonate the voice of an executive requesting a financial transfer, costing the company £220,000. Notably, at the time this scam was hailed as "unusual," yet within a year it had become a widespread, multi-million-dollar, internationally recurring pattern.


The timing of these scams also deserves its due attention. The COVID-19 pandemic made individuals and businesses rely more heavily on remote interactions, granting scammers an unprecedented opportunity to practice and perfect their deceptive methods by exploiting the vulnerabilities the pandemic exposed in remote communication and financial transactions.

Another facet to note is the truly global scale of these scams, capable of targeting a single individual or an entire organization. Invariably, they revolve around the exploitation of trust and obedience, manipulating victims into taking actions they would not have considered under ordinary circumstances. The end result is almost always the theft of substantial sums of money.

Perhaps most unsettling is the fact that deepfake scams remain significantly understudied, making it impossible to accurately quantify the extent of their financial impact. As the number of reported cases continues to rise, it is clear that millions of dollars have already been lost or stolen through these deceitful schemes. Further complicating an already murky situation, many institutions may, for the sake of saving face, be reluctant to admit the extent or success of these crimes.

Looking ahead, the potential for these losses to escalate into the billions by the end of the decade is a painfully realistic possibility. This underscores the urgent need for comprehensive research, awareness, and safeguards to protect individuals and businesses from falling prey to the ever-evolving threat of deepfake-driven extortion. A stark reminder came in 2022, when 359,787 fraud reports documented losses of $1,000 or less, highlighting the pervasive and growing nature of this newfound form of digital crime.

Balancing Practice and Precaution

As technology advances and deepfake sophistication increases, the need for proactive measures has become apparent. Companies must adapt to this evolving AI landscape by incorporating deepfake defense into their training and protection mechanisms. Training should become mandatory for larger companies that are prime targets for financial or information theft, alongside broader promotion of cyber threat awareness among staff in all workplaces. Companies like Kaspersky have already put forth preliminary guidance, such as looking for "jerky movement, strange blinking, and lips poorly synched with speech," to aid in spotting deepfake videos.

Additionally, developing an intuitive understanding of the "uncanny valley" effect can serve as a vital defense. If something feels wrong or unusual, it should raise suspicion and prompt further investigation. Companies should also consider implementing authentication measures to prevent scams, such as physical meetings to confirm significant business deals, and using 2FA, YubiKeys, or other common security tools in the digital workplace. Squarely confronting the accelerating trend in AI-enabled criminality is crucial to fortify businesses and ensure their resilience in an increasingly complex digital landscape.
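To make the 2FA suggestion above concrete, the sketch below shows how time-based one-time passwords (TOTP, the mechanism behind most authenticator apps, specified in RFC 6238) can be generated and checked using only Python's standard library. The function names and the clock-drift window are illustrative choices, not a production implementation; real deployments should use a vetted library and secure secret storage.

```python
import hashlib
import hmac
import struct
import time

def totp(secret: bytes, timestamp: float, step: int = 30, digits: int = 6) -> str:
    """Generate a TOTP code (RFC 6238) for the given Unix timestamp."""
    counter = int(timestamp) // step          # number of 30-second steps elapsed
    msg = struct.pack(">Q", counter)          # counter as 8-byte big-endian
    digest = hmac.new(secret, msg, hashlib.sha1).digest()
    offset = digest[-1] & 0x0F                # dynamic truncation (RFC 4226)
    code = struct.unpack(">I", digest[offset:offset + 4])[0] & 0x7FFFFFFF
    return str(code % 10 ** digits).zfill(digits)

def verify_code(secret: bytes, code: str, window: int = 1, step: int = 30) -> bool:
    """Check a submitted code, tolerating one step of clock drift either way."""
    now = time.time()
    return any(
        hmac.compare_digest(totp(secret, now + i * step), code)
        for i in range(-window, window + 1)
    )
```

Checking a code received over the phone or chat against a pre-shared secret in this way costs the legitimate caller a few seconds, but defeats an impersonator who has cloned only a voice or face.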


About the Author

Arianna Brangman is a Risk Intelligence Analyst at Luminint, alongside her work in the corporate risk analysis sector in London. She specializes in intelligence analysis, geopolitical risk, international security, and cross-cultural exchange, and is especially interested in European and Pacific affairs.
