OSINT & the Rise of Deep Fakes: “This Person Doesn’t Exist”

Written by Noel Saido

Noel Saido is a pentester by day and a security researcher by night. Passionate about cybersecurity, he enjoys developing offensive tools and sharing his experiences through writing and video content. When not breaking into systems (ethically, of course), he stays active through exercise.

AI | OSINT

May 13, 2025

In OSINT investigations, analysts often encounter faces that yield no results during reverse image searches on platforms like Google, Bing, TinEye, or facial recognition tools like PimEyes. When that happens, there are typically two explanations: either the image is completely new to the internet, or it depicts an artificially generated face, a deep fake. Increasingly, it's the latter.

What Are Deep Fakes?

Deep fakes are synthetic images or videos created using artificial intelligence. They often depict well-known individuals doing or saying things they never actually did, especially in the context of adult content or misinformation. In some cases, the entire face is fictional. The AI-generated image of the woman shown below, for example, depicts someone who doesn’t exist.

The growing sophistication of deep fake technology has raised concerns as fabricated videos of politicians, fake LinkedIn profiles, and nonexistent people on dating platforms become more common. These forgeries can mislead voters, destroy personal relationships, damage reputations, and assist scammers in defrauding victims.

As an illustration of the power of deep fake tools, there are well-known videos online showing former U.S. President Barack Obama delivering speeches he never gave, entirely generated by AI.

How Are Deep Fakes Created?

For years, filmmakers have used CGI to craft imaginary scenes and characters. Today, similar technologies are widely accessible at a low cost, and they’re being used for more controversial or harmful purposes.

Face-swapping, one common method, involves feeding an AI system thousands of images of two individuals, let's call them Person A and Person B. A shared encoder learns the facial features the two have in common and compresses each face into a compact representation; a decoder trained only on Person A then reconstructs Person A's face on top of Person B's expressions and movements.
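
For readers curious what that looks like in practice, here is a minimal PyTorch-style sketch of the shared-encoder, two-decoder setup commonly used for face swapping. The layer sizes, the 64x64 input resolution, and the variable names are illustrative assumptions, not any particular tool's implementation.

```python
# Minimal sketch of the shared-encoder / per-person-decoder idea behind
# face swapping. Sizes and training details are illustrative assumptions.
import torch
import torch.nn as nn

class Encoder(nn.Module):
    def __init__(self):
        super().__init__()
        self.net = nn.Sequential(
            nn.Conv2d(3, 64, 4, stride=2, padding=1), nn.ReLU(),    # 64x64 -> 32x32
            nn.Conv2d(64, 128, 4, stride=2, padding=1), nn.ReLU(),  # 32x32 -> 16x16
            nn.Flatten(),
            nn.Linear(128 * 16 * 16, 256),                           # shared latent code
        )

    def forward(self, x):
        return self.net(x)

class Decoder(nn.Module):
    def __init__(self):
        super().__init__()
        self.fc = nn.Linear(256, 128 * 16 * 16)
        self.net = nn.Sequential(
            nn.ConvTranspose2d(128, 64, 4, stride=2, padding=1), nn.ReLU(),   # 16 -> 32
            nn.ConvTranspose2d(64, 3, 4, stride=2, padding=1), nn.Sigmoid(),  # 32 -> 64
        )

    def forward(self, z):
        return self.net(self.fc(z).view(-1, 128, 16, 16))

encoder = Encoder()                          # shared: learns features common to both faces
decoder_a, decoder_b = Decoder(), Decoder()  # one decoder per person

# Training teaches each decoder to rebuild its own person's face from the shared
# code. The swap happens at inference time: encode a frame of Person B, then
# decode it with Person A's decoder.
frame_of_b = torch.rand(1, 3, 64, 64)        # placeholder input image
swapped = decoder_a(encoder(frame_of_b))     # Person A's face, B's pose and expression
```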

Another technique, called Generative Adversarial Networks (GANs), is used to produce entirely fake faces. Here, a generator turns random data into an image, while a second network, the discriminator, compares it against thousands of real human faces and flags what looks artificial. Through repeated cycles, the generator improves until its output looks convincingly real, even though the person doesn't actually exist.
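
The adversarial loop itself is short enough to sketch. Below is a minimal example of one GAN training step in PyTorch; the network sizes, learning rates, and the assumption that real faces arrive as flattened 64x64 RGB batches are purely illustrative.

```python
# Minimal sketch of GAN training: a generator maps noise to images, a
# discriminator judges real vs. fake, and the two are trained against
# each other. All sizes and hyperparameters are illustrative assumptions.
import torch
import torch.nn as nn

latent_dim = 100

generator = nn.Sequential(
    nn.Linear(latent_dim, 256), nn.ReLU(),
    nn.Linear(256, 3 * 64 * 64), nn.Tanh(),      # outputs a flattened 64x64 RGB image
)
discriminator = nn.Sequential(
    nn.Linear(3 * 64 * 64, 256), nn.LeakyReLU(0.2),
    nn.Linear(256, 1), nn.Sigmoid(),              # probability the image is real
)

loss_fn = nn.BCELoss()
opt_g = torch.optim.Adam(generator.parameters(), lr=2e-4)
opt_d = torch.optim.Adam(discriminator.parameters(), lr=2e-4)

def training_step(real_faces):
    """One adversarial round; real_faces is a batch of flattened real images."""
    batch = real_faces.size(0)
    fake_faces = generator(torch.randn(batch, latent_dim))

    # Discriminator: learn to label real images 1 and generated images 0.
    opt_d.zero_grad()
    d_loss = loss_fn(discriminator(real_faces), torch.ones(batch, 1)) + \
             loss_fn(discriminator(fake_faces.detach()), torch.zeros(batch, 1))
    d_loss.backward()
    opt_d.step()

    # Generator: learn to make the discriminator label its fakes as real.
    opt_g.zero_grad()
    g_loss = loss_fn(discriminator(fake_faces), torch.ones(batch, 1))
    g_loss.backward()
    opt_g.step()
```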

Why Deep Fakes Are Dangerous

Like any powerful tool, deep fake technology can be used constructively or destructively. Sadly, it’s often exploited to superimpose a person’s face into explicit content, sometimes without their consent. Victims of such deep fakes—often women—have had their lives and careers severely impacted.

In other cases, fraudsters use AI-generated faces on social media or dating apps to create believable but fake personas. These “people” manipulate others emotionally and financially, with losses amounting to millions of dollars.

What Can Be Done?

Although deep fake tools are becoming more advanced, they still leave behind subtle digital traces. These indicators can often be identified by forensic analysts.
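
As a small example of such a trace, AI-generated images typically lack the camera EXIF metadata a genuine photograph carries. The snippet below is a simple triage check using Pillow, not a deep fake detector; the file name is a placeholder, and missing metadata is only one weak indicator among many.

```python
# Quick triage check: does the image carry the EXIF metadata a real camera
# would normally embed? Absence proves nothing on its own.
from PIL import Image
from PIL.ExifTags import TAGS

def exif_summary(path):
    exif = Image.open(path).getexif()
    if not exif:
        return "No EXIF metadata found (common for AI-generated or stripped images)."
    # Map numeric EXIF tags to readable names, e.g. 'Make', 'Model', 'DateTime'.
    return {TAGS.get(tag_id, tag_id): value for tag_id, value in exif.items()}

print(exif_summary("suspect_face.jpg"))  # placeholder path
```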

If you believe you’ve been targeted by deep fake images or videos, our professional analysis can help confirm whether the content is fabricated. In cases where it is, you can receive a verified report certifying the material as fake. This type of documentation can be critical for legal purposes, or to restore trust in personal or professional relationships.

For assistance or further information, don’t hesitate to reach out.
