Jul 02, 2024

Top 10 Deepfake Scams Impacting Industries Worldwide

Deepfake technology has emerged as an effective tool for both creative and malicious purposes. It employs AI to generate highly realistic fake videos, audio, and images that are frequently indistinguishable from genuine media.

While deepfakes can be entertaining, they also pose significant risks, allowing deepfake scams to take advantage of individuals’ and organizations’ trust and vulnerabilities.

AI illustration of deepfake scams

Deepfake technology is becoming more accessible, allowing criminals to commit fraud on a large scale. Understanding the methods and consequences of these scams helps you implement stronger defenses and advocate for better safeguards.

This article delves into the top ten deepfake scams that have rocked various industries, demonstrating the diverse and alarming ways in which this technology is misused.

1. ARUP Loses $25 Million in Sophisticated Deepfake Scam

In February 2024, ARUP, a British design and engineering firm, lost $25 million to a deepfake scam. Fraudsters used artificial intelligence to create a convincing imitation of ARUP’s CFO on a video conference call, tricking an employee into transferring funds to fraudulent accounts. The funds were quickly moved offshore, making recovery difficult.

ARUP headquarters (Source: ESG News)

In response to the incident, the company is strengthening its security measures by improving multifactor authentication and deploying AI-based anomaly detection. If you want to learn more about the incident and deepfakes, check out our blog post “CISO Guide to Deepfake Scams”.
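To make “AI-based anomaly detection” a little more concrete, here is a minimal, hypothetical sketch of how a finance team might flag unusual outbound transfers with an unsupervised model (scikit-learn’s IsolationForest). The features, thresholds, and data are illustrative assumptions for this article, not ARUP’s actual controls.

```python
# Hypothetical sketch: flagging unusual outbound payments with an Isolation Forest.
# Features (amount, hour of day, new-beneficiary flag, cross-border flag) are
# assumptions for illustration and would need adapting to real payment data.
import numpy as np
from sklearn.ensemble import IsolationForest

rng = np.random.default_rng(42)

# Simulated history of routine transfers: modest amounts, business hours,
# mostly known beneficiaries, mostly domestic.
history = np.column_stack([
    rng.normal(20_000, 5_000, 1_000),   # amount in USD
    rng.integers(9, 18, 1_000),         # hour of day
    rng.binomial(1, 0.05, 1_000),       # new beneficiary (rare)
    rng.binomial(1, 0.10, 1_000),       # cross-border transfer (rare)
])

model = IsolationForest(contamination=0.01, random_state=0).fit(history)

# A transfer resembling the ARUP scenario: very large, off-hours,
# new beneficiary, offshore destination.
suspicious = np.array([[25_000_000, 22, 1, 1]])
print(model.predict(suspicious))  # -1 means anomaly: hold and route for manual review
```

In practice such a model would only raise an alert for human review; combining it with out-of-band callback verification and multifactor approval for large transfers is what actually stops a deepfake-driven payment.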

2. MrBeast and BBC Presenters Used in Deepfake Scams

In late 2023, deepfake scams exploited the likenesses of popular figures like YouTuber MrBeast and BBC presenters Matthew Amroliwala and Sally Bundock.

Screenshot from the deepfaked MrBeast iPhone 15 video

A TikTok video falsely showed MrBeast offering iPhones for $2, while videos on Facebook used Amroliwala and Bundock to promote a fraudulent investment opportunity involving Elon Musk. These incidents illustrate the increasing sophistication of deepfake technology in perpetrating scams and the need for vigilance and robust content verification measures on social media platforms.

3. WPP CEO Impersonated in Elaborate Deepfake Scam

In May 2024, fraudsters attempted to scam WPP, the world’s largest advertising firm, by impersonating its CEO, Mark Read, using a deepfake voice clone.

WPP CEO Mark Read’s LinkedIn page

The attackers set up a fake WhatsApp account with Read’s image and organized a Microsoft Teams meeting, during which they used AI-generated audio and video footage to mimic Read and another executive. They attempted to deceive an agency leader into setting up a new business venture in order to solicit funds and personal details. Fortunately, vigilant employees thwarted the scam before any damage was done.

4. Ukrainian Influencer Targeted by Deepfake Disinformation Campaign

Olga Loiek’s YouTube channel

In early 2024, Ukrainian influencer Olga Loiek discovered numerous AI-generated deepfakes of herself speaking Chinese on social media. These deepfakes, which falsely portrayed her endorsing closer ties between Russia and China, were likely part of a disinformation campaign. The videos spread across platforms like Xiaohongshu and Bilibili, amassing significant viewership, with one account attracting 300,000 followers. This incident underscores the growing misuse of AI to create convincing yet false media for spreading propaganda and manipulating public opinion.

5. YouTube Deepfake and AI Crypto Scams Steal $600K

AI illustration of a crypto scam on YouTube

In early 2024, cybercriminals used AI-generated deepfakes of public figures and celebrities to perpetrate “Double Your Crypto” scams on YouTube, stealing over $600,000. Hackers hijacked popular YouTube channels, replaced their content with fraudulent streams, and used deepfakes of figures like Elon Musk and Michael Saylor to lure victims. These scams, revealed by Bitdefender, highlight the increasing sophistication of AI-enabled fraud and the urgent need for enhanced cybersecurity measures to combat these evolving threats.

6. Deepfake Ads of Rishi Sunak Used in Facebook Scam

In February 2024, over 100 deepfake video advertisements impersonating UK Prime Minister Rishi Sunak appeared on Facebook, reaching up to 400,000 people. These ads, originating from 23 countries, falsely claimed Sunak was involved in a financial scandal and promoted a scam investment platform supposedly endorsed by Elon Musk.

UK Prime Minister Rishi Sunak

The ads manipulated footage from BBC News and cost over £12,929 to promote, misleading viewers with faked endorsements and fraudulent financial opportunities.

7. Candidate Targeted Via Deepfake in Slovakian Election

In September 2023, days before a crucial parliamentary election in Slovakia, a deepfake audio recording surfaced in which a leading candidate allegedly admitted to rigging the election. Another fake recording had him discussing raising beer prices.

The recordings went viral, damaging his campaign, and he was ultimately defeated by a pro-Russian candidate. This incident exemplifies the growing use of AI-generated deepfakes for political manipulation.

8. CTV Ottawa Deepfake Scam

In January 2024, scammers used deepfake AI to manipulate a CTV Ottawa news story, turning it into a fraudulent video. The original story warned about an online scam, but the deepfake twisted it into a promotion for a fake financial independence program.

CTV Ottawa logo

The altered video featured CTV’s Graham Richardson and Patricia Boal, along with the victims of the scam. This incident underscores the growing threat of deepfake technology in spreading misinformation and highlights the need for robust AI regulations to combat such fraud.

9. Deepfake Robocall Targets New Hampshire Primary

In January 2024, an AI-generated robocall featuring a synthetic voice of President Joe Biden was used to disrupt the New Hampshire primary election. The deepfake urged voters to abstain from voting, falsely claiming their votes wouldn’t be counted. The high-quality audio caused confusion and mistrust among voters.

President Joe Biden

The Federal Communications Commission (FCC) launched an investigation and emphasized the need for stringent AI regulations and advanced countermeasures to protect electoral integrity.

10. Indonesian Election Manipulation

In the run-up to Indonesia’s presidential and parliamentary elections on February 14, 2024, artificial intelligence proved to be both a blessing and a curse. Political campaigns used AI to create engaging content and interactive experiences, such as chatbots and AI-generated images, in order to attract votes.

However, artificial intelligence also fueled disinformation, with deepfake videos and audio clips misrepresenting candidates and spreading false narratives.