
AI scams have become a growing threat to consumers worldwide. With generative AI now widely available to the general public, online fraudsters can produce remarkably convincing fake content capable of deceiving even the most cautious individuals. This represents an unprecedented challenge for digital safety. In the 2024 election cycle, for example, a bad actor in New Hampshire used artificial intelligence to impersonate President Biden in robocalls ahead of the state’s primary elections. Unfortunately, that incident may be only the first of many more to come.

These impersonations, known as “deepfakes,” are particularly dangerous weapons in the spread of fraud and misinformation. These fictitious digital creations use advanced machine learning models to produce hyper-realistic video, audio, and image content that appears to come from real people. Unlike traditional forms of digital fraud, deepfakes can replicate voices, facial expressions, and entire personas with shocking accuracy. With online scammers adopting this technology in droves, we at StreamSafely are here to help you protect yourself by teaching you how to identify these scams and avoid falling for them in the first place.

Real-World Examples of AI and Deepfake Scams

While thought of as theoretical only a short time ago, deepfake technology has quickly transformed into a very real threat across America and throughout the world. Cybercriminals have discovered new and alarming ways to exploit AI-generated content for their fraudulent purposes.

Compromised Business Email and Corporate Deepfakes

In May 2024, news broke that the British engineering firm Arup — which helped engineer the Sydney Opera House — had been hit by an AI-assisted phishing attack that ultimately cost the company $25 million.

The initial contact was a phishing email that arrived in the inbox of a worker at the company’s Hong Kong office. The employee initially suspected the message was fake, but the fraudsters persuaded him to join a video call with people who appeared to be the company’s chief financial officer and other colleagues. The call seemed real, and the employee moved forward with the transfers the email had requested. In reality, everyone else on the video call was an AI-generated deepfake created by the scammers.

Fake Calls and Voice Cloning Scams

Around the same time, fraudsters impersonated Mark Read, the CEO of WPP, the world’s largest advertising company. Although the attack was unsuccessful this time, the scammers used a fake WhatsApp account and photos of Read to set up a fraudulent Microsoft Teams meeting with another WPP executive. During the call, they played YouTube footage of Read and used AI voice-cloning technology to mimic his voice, scripting remarks that were delivered in Read’s voice to the executive on the line.

As mentioned previously, a man in New Hampshire used similar technology to clone President Biden’s voice ahead of the 2024 presidential election. The perpetrator is now facing a $6 million fine.

Scam Endorsements and Deepfake Celebrities

While voice impersonations and AI-generated video are more complex, simple AI tools can create or doctor images of celebrities and influencers, which can then spread on social media and wreak havoc in the real world. In August 2024, fake images of “Swifties for Trump” circulated online, alleging that fans of the pop superstar Taylor Swift supported the Republican candidate, Donald J. Trump, in the 2024 presidential election. The images were convincing enough to be shared by Trump himself, prompting Swift, who is habitually quiet on the topic of politics, to set the record straight and formally endorse Kamala Harris.

These fake celebrity endorsements, however, don’t have to be political. A fake, AI-powered likeness of tech mogul Elon Musk has appeared in thousands of malicious ads, which have collectively contributed to billions of dollars in fraud.

Romance Scams Using AI-Generated People

Dating platforms and social media are now plagued with entirely fictional personas created using AI. Men across Asia lost a collective $46 million to romance scams that used deepfakes and AI to gain trust and spark affection. Similarly, in Spain, two women lost $362,000 after fraudsters posed as Brad Pitt and his associates, wooing them into investments and fake relationships.

How AI and Deepfakes Are Changing Online Fraud

With the integration of AI into fraudulent activities, cybercriminals are fundamentally changing their approach to online scams. Traditional fraud methods relied on the manual creation of fake content and were limited by skill and time constraints. AI has removed these obstacles, creating a reality where sophisticated scams can be generated with minimal skill or effort and at unprecedented speed.

Hyper-Realism in Scams

Advanced machine learning models can now generate content so realistic that traditional verification methods are becoming ineffective. Not long ago, AI images routinely raised red flags by portraying subjects with too many fingers or teeth, but such telltale discrepancies are becoming increasingly rare. Even in AI-generated videos, facial movements, vocal patterns, and background details are crafted with unprecedented precision.

Scalability

Unlike traditional fraud methods, which require a certain amount of manpower, AI-powered scams can be produced rapidly and distributed across multiple platforms simultaneously. There’s no longer a need for call centers or coordination. A single fraudster can now generate hundreds of convincing fake profiles and scenarios instantaneously.

Targeting Vulnerabilities

Cybercriminals can now manipulate their victims like never before by using AI to amass and exploit personal information. By analyzing vast amounts of personal data from social media, online interactions, and public records, AI-powered systems can build remarkably detailed profiles of an individual or group. With that information in hand, scammers can pinpoint emotional triggers, financial pressures, and personal weaknesses, then use these vulnerabilities to their advantage.

Warning Signs: How To Spot Deepfakes and AI-Driven Scams

Recognizing AI-generated content requires careful observation and a level head. Remember, scammers take advantage of their victims’ emotions, so you must first step back from the situation and look at it objectively. The following warning signs can help you identify deepfake content and protect yourself from sophisticated online scams:

  • Unusual Facial Expressions: AI-generated images and videos often struggle to create perfectly natural human expressions. Deepfake technologies frequently produce slight irregularities in facial muscle movements that appear subtly mechanical or unrealistic.
  • Unnatural Body Movements: AI content may have robotic or slightly jarring body movements. These movements often lack the fluidity and nuance of natural human motion and may move more like renderings from old video games.
  • Audio-Visual Inconsistencies: A misalignment between the words being spoken and the movements of the mouth can mean the content is fake. Deepfake technologies sometimes create noticeable discrepancies between audio tracks and visual elements, so be especially wary if a video or stream looks like it’s lagging or dubbed.
  • Lighting or Skin Imperfections: Artificial content often fails to replicate exactly how light and shadows interact with a subject. AI-generated images may display subtle irregularities in how light falls across the skin or leave it looking too smooth and lacking normal imperfections.
  • Unrealistic Offers: Remember that if it seems too good to be true, it probably is. Scams generated by AI are built around scenarios that seem perfect or extraordinarily unlikely, such as great investment opportunities or love at first sight.
  • Urgency and Pressure Tactics: Scammers rely on their victims not thinking clearly, and one of the best ways for them to do that is to create a sense of urgency. This pressure tactic is meant to elicit a reflexive response that the victim may not have made under calmer circumstances. If you feel pressured or worked up, take a step back and reevaluate.

How To Protect Yourself Against AI and Deepfake Scams

The question of online security has become increasingly complex with the rise of AI-powered scams. Cybercriminals are now equipped with easily accessible tools that can create virtually any type of fraudulent content, making many traditional fail-safes obsolete.

Modern consumers face an extreme level of risk, both online and off. The combination of hyper-realistic content generation, massive data analysis, and targeted manipulation techniques means that individuals must become their own first line of defense against these emerging technological threats. We at StreamSafely make it our mission to ensure that you’re as protected as possible and know how to recognize scams before they do any damage.

Here’s how to keep yourself protected against scams using AI and deepfake technology:

Verify Video or Voice Calls With Secondary Channels

In the past, scammers have focused on written communication like emails or texts where they can’t be seen and their voices can’t be heard. Nowadays, AI allows them to craft convincing personas that you can see and interact with, including replicas of family members and loved ones.

If you ever receive an unexpected request — be it through video calls, voice calls, or old-school written methods — confirm who you’re speaking to by disengaging and reaching back out to the person or entity they claim to be on a different platform you trust. Never respond through the original number or account they used to make contact, even after you change platforms. Look up official or trusted contact information and use that instead.

Stay Skeptical of Online Personas and Ads

Social media platforms and digital advertising have become breeding grounds for fake profiles that can appear incredibly convincing. These AI-crafted personas often run advertisements and promote products, using machine learning algorithms to create hyper-realistic images, backstories, and interactions. Before engaging with any online persona or advertisement, take the time to independently verify the source, cross-reference information, and look for multiple independent confirmations of the content’s authenticity.

Enable Multifactor Authentication (MFA)

Multifactor authentication (MFA) is an important defense against AI-powered data harvesting and identity theft. By requiring multiple forms of verification — such as a password, a temporary code sent to a mobile device, or biometric confirmation — MFA creates a substantial barrier for fraudsters attempting to access your personal accounts. Multifactor authentication adds layers of security that make it more challenging for AI-driven scam techniques to succeed, even if one layer of protection is compromised.
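
To make the idea concrete, here is a minimal sketch of how one common second factor, a time-based one-time password (TOTP), works behind the scenes. It uses the open-source pyotp library; the secret would normally be generated once and stored by the service, and the account and service names below are purely illustrative.

```python
# Minimal TOTP sketch using the pyotp library (pip install pyotp).
# The secret, account name, and service name are illustrative only.
import pyotp

# The service generates a shared secret once and stores it for the account.
secret = pyotp.random_base32()
totp = pyotp.TOTP(secret)

# The user's authenticator app is provisioned via a QR code built from this URI.
print(totp.provisioning_uri(name="you@example.com", issuer_name="ExampleService"))

# At login, the app shows a 6-digit code that changes every 30 seconds...
current_code = totp.now()

# ...and the service verifies it alongside the password.
print("Current code accepted:", totp.verify(current_code))  # True
print("Wrong code accepted:", totp.verify("000000"))        # Almost certainly False
```

Because the code is derived from the shared secret and the current time, a stolen password alone is not enough to get in, which is exactly the barrier MFA is meant to create.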

Use Trusted Websites and Search Engines

Make the commitment to only use reputable online platforms. Many illegitimate sites, including known piracy websites, often operate by stealing and selling personal data. Legitimate websites typically display clear contact information, have secure “https://” connections, and provide verifiable business credentials. Be particularly cautious of websites that have unusual domain names, lack professional design, or make extraordinary claims that seem too good to be true.
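
As a concrete illustration of the “secure connection” check, the short sketch below verifies that a link uses HTTPS and that the site presents a certificate that passes standard validation. The URL shown is a placeholder; your browser performs a far more thorough version of this check automatically, so treat this as a teaching aid rather than a security tool.

```python
# Rough illustration of an HTTPS sanity check; browsers do this (and much more) automatically.
import socket
import ssl
from urllib.parse import urlparse

def looks_like_https(url: str) -> bool:
    """Return True if the URL uses HTTPS and the server presents a valid certificate."""
    parsed = urlparse(url)
    if parsed.scheme != "https" or not parsed.hostname:
        return False
    context = ssl.create_default_context()  # validates the certificate chain and hostname
    try:
        with socket.create_connection((parsed.hostname, parsed.port or 443), timeout=5) as sock:
            with context.wrap_socket(sock, server_hostname=parsed.hostname):
                return True  # TLS handshake succeeded with a valid, matching certificate
    except OSError:  # covers connection failures and SSL/certificate errors
        return False

# Placeholder example:
print(looks_like_https("https://www.example.com"))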

Stay Updated with Technology News

Continuously educate yourself about emerging digital threats and technological developments in cybersecurity. Following reputable technology news sources, subscribing to cybersecurity newsletters, and watching webinars can provide critical insights into the latest AI scam techniques and prevention strategies. At StreamSafely, we have the latest cybersecurity tips and a video library full of information designed to help you stay safe.

Tools and Resources To Detect Deepfakes

The rapidly evolving reality of AI-generated content necessitates sophisticated detection tools. Our team at StreamSafely has compiled a list of platforms that can help sniff out fake content and stop scams in their tracks:

  • Deepware Scanner: The Turkish company Deepware offers an online scanner that lets you paste the URL of a video and analyze it for signs of AI manipulation. The tool is especially useful for vetting suspected fake influencers and similar online personas who post video content to YouTube.
  • Microsoft Video Authenticator: Available as part of Microsoft Azure, Video Authenticator was developed to identify manipulated digital content with frame-by-frame precision. Launched in 2020, the tool works by allowing users to add “digital hashes and certificates” to their own content, which serve as a kind of stamp of authenticity. Suspicious content can then be compared against these hashes to see whether and how it was manipulated (see the hashing sketch after this list for a simplified illustration of the underlying idea).
  • Sensity AI: Founded in the Netherlands, Sensity AI is an advanced system that specializes in detecting fake media across multiple digital platforms. The platform has been mentioned in Forbes and Bloomberg and is reportedly supported by the European Commission and NATO Strategic Communications Centre of Excellence.
  • Sentinel AI: Built for democratic governments, defense agencies, and enterprises, Sentinel AI offers a specialized service focusing on deepfake detection. Their technology is reportedly used by the European External Action Service (EEAS).
  • Oz Forensics: Oz Forensics and its service Oz Liveness use facial recognition and authentication software to detect deepfakes and reduce biometric fraud. In 2021, the company reported over 99% accuracy in both the University of Massachusetts’ Labeled Faces in the Wild benchmark and a study by the National Institute of Standards and Technology (NIST).
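
The “digital hash” approach mentioned above for Microsoft’s authenticity tooling boils down to fingerprinting a file so that any later tampering changes the fingerprint. The sketch below is a simplified, generic illustration of that idea using a SHA-256 hash, not the Azure API itself; the file name is hypothetical.

```python
# Generic illustration of content fingerprinting with SHA-256 (not the Azure API).
import hashlib

def file_sha256(path: str, chunk_size: int = 1024 * 1024) -> str:
    """Return the SHA-256 hex digest of a file, read in chunks to handle large videos."""
    digest = hashlib.sha256()
    with open(path, "rb") as f:
        for chunk in iter(lambda: f.read(chunk_size), b""):
            digest.update(chunk)
    return digest.hexdigest()

# Hypothetical usage: a publisher records this value when a video is released.
# If the file is later edited, even slightly, the digest will no longer match.
print(file_sha256("original_statement.mp4"))
```

Real provenance systems add cryptographic signatures and certificates on top of this basic fingerprinting so viewers can also verify who published the original.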

Find More Resources To Protect Yourself With StreamSafely

We at StreamSafely remain committed to providing consumers with the most current information about digital safety. With fraudsters and digital pirates becoming ever more emboldened to lure consumers into scams, it’s important to make sure you’re as informed and protected as possible. Our expert team continuously watches for new and emerging online threats and develops resources to help individuals like you stay out of harm’s way. By being informed, cautious, and proactive, you can reduce your vulnerability to AI-driven scams and deepfakes. Whenever you have doubts or questions, StreamSafely will be here to help!
