
What are deepfake AI scams?

20 July, 2023

In recent years, artificial intelligence (AI) has advanced remarkably to the point where it can answer in a human-like manner, support advanced search functions, and even create astonishingly realistic media.

That latter advancement includes videos designed to deceive viewers into believing something that never actually happened.

These deceptive videos are commonly known as deepfakes, and they have become a significant concern due to their potential to cause harm, spread misinformation, and facilitate various scams.

In this blog post, we will explore what deepfakes are, how they are created, their applications, and most importantly, how organisations can detect them and prevent employees from falling victim.

So without further ado, let’s get to it.

What are deepfakes?

Deepfakes are a form of manipulated media generated by AI using deep learning algorithms.

These algorithms train themselves on extensive datasets to create convincing fake content of real people, such as videos and audio, portraying them saying or doing things they never did in reality.

Unlike simple manipulations like photoshopping or CGI, deepfakes involve minimal human input.

Users only decide whether to accept or reject the AI-generated content after it is created.

This sets deepfakes apart from "shallowfakes", which are real images or videos edited with conventional tools or presented with misleading context, and which require human control throughout the process.

The most common method for creating deepfakes involves deep neural networks and face-swapping techniques.

A target video is chosen as the base, and a collection of video clips featuring the person to be inserted is used.

These clips can be unrelated, such as a Hollywood movie scene and random YouTube videos.

Deep learning algorithms then map the features of the person in the clips onto the target video, making the deepfake appear authentic.

Example of a deepfake scam

A recent troubling incident involved a deepfake video exploiting MoneySavingExpert founder Martin Lewis.

In this fraudulent video, AI was used to mimic both his facial expressions and voice, falsely promoting an app supposedly linked to Elon Musk, the Tesla CEO and owner of Twitter.

The video depicted what seemed to be Martin sitting in his office discussing an investment opportunity named 'Quantum AI,' misleadingly labelled as 'Elon Musk's new project.'

The imitation was strikingly convincing, as the computer-generated version flawlessly replicated Martin's voice, intensifying the deception. The scam even included branding similar to ITV's This Morning, a show Martin frequently appears on.

Scary, right?

How to detect deepfakes

As deepfake technology evolves, detecting them becomes more challenging. However, there are some indicators to look out for:

  1. Blurry details: Deepfakes may have blurry skin, hair, or faces that seem less detailed than their surroundings.
  2. Unnatural lighting: The lighting in deepfakes might not match the lighting of the target video.
  3. Mismatched audio: The audio in deepfakes may not sync perfectly with the person's lip movements.
  4. Source reliability: Verify the credibility of the source and consider performing reverse image searches to validate the content's authenticity. Don’t take action the first time you see something.

Protect your organisation with truly effective training

Join the thousands who've discovered how Bob's Business' security and compliance awareness training reduces risk, demonstrates improvement and builds cultures.

How to prevent deepfakes

The responsibility of detecting deepfakes should not solely fall on individuals.

Organisations like yours can take proactive measures to combat deepfake scams:

Development of detection technology

Tech companies should invest in developing invisible watermarks or digital fingerprints that signal the source of the image or video.
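To make the idea concrete, here is a minimal sketch of an invisible watermark using least-significant-bit steganography on raw pixel values. The tag name and image are illustrative assumptions; production provenance systems (such as C2PA-style cryptographic signing) are far more robust than this toy approach.

```python
# Minimal sketch of an invisible watermark: hide a short provenance tag in the
# least-significant bits of pixel values. Real systems are cryptographically
# robust; this only illustrates the concept.

def embed_watermark(pixels: list[int], tag: str) -> list[int]:
    """Write each bit of `tag` into the LSB of successive pixel values."""
    bits = [int(b) for byte in tag.encode() for b in f"{byte:08b}"]
    if len(bits) > len(pixels):
        raise ValueError("image too small for this tag")
    out = pixels.copy()
    for i, bit in enumerate(bits):
        out[i] = (out[i] & ~1) | bit  # overwrite the lowest bit only
    return out

def extract_watermark(pixels: list[int], tag_len: int) -> str:
    """Read `tag_len` bytes back out of the LSBs."""
    bits = [p & 1 for p in pixels[: tag_len * 8]]
    data = bytes(
        int("".join(map(str, bits[i : i + 8])), 2) for i in range(0, len(bits), 8)
    )
    return data.decode()

# Example: an 8x8 greyscale "image" as a flat list of pixel values.
image = [128] * 64
marked = embed_watermark(image, "CAM-01")
print(extract_watermark(marked, 6))  # -> CAM-01
```

Because only the lowest bit of each pixel changes, the mark is invisible to the eye, yet any verification tool that knows where to look can recover the source tag.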

AI-powered detection platforms

Utilise AI-powered detection platforms like Sensity, which alerts users when they encounter AI-generated media with telltale fingerprints. Be aware, however, that AI detection platforms are still in their infancy and shouldn't be relied on in isolation.

Two-way verification for financial transactions

Implement a robust two-way verification process for financial transactions.

Require a phone call or face-to-face confirmation for significant transactions, especially those involving fund transfers or sensitive financial information.
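As a sketch of how such a policy might be encoded in an approval workflow, the snippet below returns the out-of-band checks required before releasing a payment. The threshold value and channel names are illustrative assumptions, not a prescribed standard.

```python
# Sketch of a two-way verification policy for payment requests. The threshold
# and channel names are illustrative assumptions.

HIGH_VALUE_THRESHOLD = 10_000  # illustrative limit in your base currency

def required_checks(amount: float, request_channel: str) -> list[str]:
    """Return the out-of-band confirmations needed before releasing funds.

    Confirmation must arrive over a *different* channel from the request,
    so a deepfaked video call cannot approve its own transfer.
    """
    checks = ["verify requester identity against internal directory"]
    if amount >= HIGH_VALUE_THRESHOLD:
        checks.append("call back on a known, pre-registered phone number")
        checks.append("obtain face-to-face or second-approver sign-off")
    if request_channel in {"video call", "voicemail"}:
        # Audio/video alone can be synthesised, so never treat it as proof.
        checks.append("confirm in writing via a verified email address")
    return checks

print(required_checks(25_000, "video call"))
```

The key design choice is that the confirmation channel is always independent of the request channel, which breaks the single point of failure a convincing deepfake relies on.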

Invest in education and awareness

In the long term, the most effective approach to combat deepfake scams involves education, awareness, and fostering a critical mindset among the public.

People should be encouraged to verify sources, seek corroborating evidence from reliable sources, and refrain from jumping to conclusions based solely on images or videos.

How Bob’s Business can help your organisation protect against deepfakes and generative AI

At Bob’s Business, we’re always on the front foot when it comes to emerging cybersecurity risks. That’s why we’ve built a brand new AI Safety module to give your employees a comprehensive understanding of modern AI systems and how they function.

From recognising potentially insecure AI interactions to grasping the benefits and potential risks of tools like chatbots, our course will empower your team to confidently navigate the world of AI.

By the end of the course, participants will be able to identify how AI tools function, exercise caution in AI applications and be well-versed in real-life AI threats.

Embrace the future with confidence and let Bob's Business be your trusted partner in understanding and mitigating the risks of AI.


Ready to build your cybersecurity culture?

Whether you’re looking for complete culture change, phishing simulations or compliance training, we have solutions that are tailor-made to fit your organisation.
