What is a Deepfake?

A deepfake is video or audio generated by artificial intelligence in which a person convincingly appears to say or do something they never actually said or did.

In other words: it’s not just “photo montage” anymore — it’s photo montage that talks, blinks, and convinces you that you were the one who got it wrong.

How it works

Imagine two neighbors:

  • One neighbor constantly produces fake pictures.
  • The other neighbor constantly tries to catch the fakes.

Eventually, the first neighbor gets so good that even the second can no longer tell whether what he's seeing is real or a garage AI creation. That's the essence of the technique, known as a generative adversarial network (GAN): one AI creates, another checks, and each round makes the forgeries harder to tell from the truth.
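The tug-of-war between the two neighbors can be sketched in a few lines. This is a toy illustration, not a real GAN: the names (`forger`, `boundary`), the learning rate, and the idea that "real content" is just the number 5.0 are all made up for the example. What it does show is the key dynamic: the forger never sees the real thing directly, only the detective's verdicts, yet ends up reproducing it.

```python
# Toy sketch of the forger-vs-detective dynamic (illustrative only, not a real GAN).
# Genuine content is represented by a single number; the detective's whole
# "model" is a decision boundary between what looks fake and what looks real.

REAL = 5.0       # where genuine samples live (stand-in for real photos)
forger = 0.0     # the forger's current output, starting far from reality
boundary = 2.5   # the detective's "real vs. fake" decision line
LR = 0.1         # how quickly each neighbor adapts

for _ in range(500):
    # Detective: place the boundary halfway between the latest fake and the real thing.
    boundary += LR * ((forger + REAL) / 2 - boundary)
    # Forger: never sees REAL directly -- only pushes his output toward
    # whatever the detective currently accepts.
    forger += LR * (boundary - forger)

print(round(forger, 3))  # -> 5.0: the fake has converged onto the real
```

The only stable point of this loop is where fake and real coincide: the detective keeps wedging his boundary between them, the forger keeps chasing the boundary, so the gap shrinks every round. That is the "AI trains AI" loop in miniature.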

Just five seconds of your voice…

What once sounded like science fiction is now real: a few seconds of recorded speech are enough to create a convincing copy of a voice. In the time it takes you to say "hey bro," someone could already be using your voice on a phone call, sounding and acting like you, but it's not you.

Examples in the world

These aren't just theories: well-known public examples exist, from a deepfake "Christmas message" by Queen Elizabeth II to fake addresses by Barack Obama. And if it can be done with world leaders, it can be done with your boss, your professor, or any of us.

Fraud as a service

In the past, serious fraud took skill, time, and a team. Today, there are tools that offer it as a ready-made “service.” It’s like ordering food — but instead of pizza, you get a fake identity.

This is especially dangerous when such technologies are used to bypass bank identity checks or commit financial fraud — you think you’re talking to a colleague, but it’s actually a digital actor that learned their movements and voice.

The biggest trick: it’s not technology — it’s trust

The problem with deepfakes isn't just that they can produce fake videos. It's that, over time, we start doubting everything: once we learn deepfakes exist, even genuine footage gets dismissed as "just montage," a dynamic researchers call the "liar's dividend."
The result is a kind of truth fatigue ("everyone lies anyway"), and when people stop believing anything, the loudest voice wins.

Who’s at risk?

It’s not only celebrities. Targets include:

  • companies and finances,
  • elections and social issues,
  • private lives and reputations,
  • banks and identity systems.

Example: someone posts a video message in a residents' group chat in which the building manager announces a new account number for payments. It looks real and sounds real, so people pay, and only later does it emerge that the manager never recorded any such video.

It’s not a hacked system — it’s hacked trust.

How to defend yourself (realistically)

There’s no magic button, but here are good habits:

  • Check sources: if something is shocking, verify it with at least two other outlets.
  • Use a second communication channel: if a director requests a risky transfer, confirm it by calling a number you already have on file, not one provided in the message itself.
  • Pause before reacting: if something pushes you to act immediately, that’s often exactly what the attacker wants.

Technology keeps evolving, but critical thinking remains an antivirus that runs without electricity.