Real or Artificial? The Trust Crisis Created by Deepfakes
(My article, published in Inc. Türkiye)
A phone rings. The voice on the other end is familiar. Maybe it is your boss, maybe your mother, maybe someone from your bank. The tone of voice, the emphasis, the pauses, even the small habits of speech are all exactly in place. You join a video meeting; on the screen are senior executives from your company. The images look real, the voices sound real, the setting feels real. You are asked to take urgent action. A payment, a data share, an approval, a password reset…
A few years ago, such a scene would have belonged only in a science fiction series. Today, we simply call it a deepfake: audio, image, or video content created or altered with artificial intelligence to look convincingly real.
When the concept of deepfake first emerged, it was mostly associated with entertainment, social media humor, and placing celebrities’ faces onto other videos. Recreating a younger version of an actor, making a politician appear to say something they never said, or imitating a friend’s voice seemed interesting, even funny at times. But as the technology advanced, the issue moved far beyond the boundaries of entertainment. Deepfake is no longer just a fake video created for amusement; it has become the source of a much larger problem that forces us to question how trust can be established in the digital world.
Until now, we verified many things in the digital world with our eyes and ears. The phrases “I saw it with my own eyes” and “I heard it with my own ears” were among the strongest foundations of trust. Yet today, our eyes and ears no longer provide evidence as strong as they once did in the face of technology. Does the image we see really belong to that person? Is the voice we hear really their voice? Does a video really show an event that happened, or is it a convincing copy produced with just a few minutes of data? The answers to these questions no longer concern only technology experts. They concern families, companies, politicians, journalists, lawyers, banks, consumers, and essentially everyone who lives in the digital world.
The Age When the Fake Is as Convincing as the Real
Behind deepfake technology are generative artificial intelligence models. These models are trained on large amounts of audio, image, and video data. They learn a person’s facial movements, expressions, speech rhythm, and vocal tone, then use what they have learned to generate new content that resembles that person.
In the past, creating something like this required expensive equipment, special effects expertise, and serious production resources. Today, however, many tools have made the process much easier and cheaper. It is becoming increasingly easy to imitate someone’s voice with a few minutes of audio recording, generate realistic images with a few photos, or use ready-made video templates to place someone inside a conversation they never actually attended.
From the perspective of democratizing technology, this accessibility may seem exciting. For filmmakers, educators, advertisers, content creators, and artists, entirely new creative possibilities are emerging. Documentary scenes can be prepared from archival footage of an artist who has passed away. Language barriers can be overcome; a speaker’s video can be translated into different languages with lip movements synchronized accordingly. Educational videos can be personalized. Faster and lower-cost content can be produced for brand communication. But like every powerful technology, deepfake is a double-edged sword. The same tools can be used not only for creative production, but also for fraud, disinformation, character assassination, and manipulation.
A 25-Million-Dollar Deception in a Video Meeting
One of the most striking examples that turned the danger of deepfakes from an abstract risk into a concrete reality took place in Hong Kong in 2024. An employee of Arup, a UK-based engineering company, carried out a money transfer of approximately 25 million dollars after what they believed was a video conference with senior executives. It was later revealed that the people in the meeting were not real executives, but deepfake images and voices created with artificial intelligence. The World Economic Forum also evaluated this incident as an important case showing how AI-supported cybercrime has entered a new dimension for companies.
This example explains the real danger of deepfakes very clearly. In the past, corporate fraud was dominated by fake emails, spoofed domains, rushed payment requests, or phishing links. Now, attackers do not merely send written messages; they can join a meeting by imitating an executive’s face and voice. Even “face-to-face communication,” one of the strongest trust signals in human psychology, can be manipulated.
This represents an important turning point for the business world. In many organizations, approval mechanisms still depend on human relationships, hierarchy, and habit. Justifications such as “The general manager said so,” “The finance director was in the meeting,” or “They said it was urgent” can override security protocols. In the deepfake era, organizations need to redesign not only their systems, but also their decision-making reflexes.
Rebuilding the Architecture of Trust
The fundamental issue is that we are still trying to establish trust in the digital world through old habits. We try to determine whether an email is real by looking at the sender’s name, whether a voice is real by trusting our ears, and whether a video is accurate by judging its visual quality. Yet in the age of generative artificial intelligence, all of these signals have become imitable. Therefore, the solution against deepfakes cannot consist only of developing “better deepfake detection tools.” Of course, technical detection systems are important. Systems that analyze pixel inconsistencies in videos, unnatural frequencies in audio, blinking patterns, light reflections, or metadata traces are improving. But fake content generation tools are advancing just as quickly. This is an ongoing race.
A more lasting solution is to connect trust not only to images and sound, but to multi-layered verification processes. For example, in companies, high-value money transfers should never be carried out based solely on approval in a video meeting. A second channel, pre-defined verification codes, independent callback procedures, multi-signature processes, and waiting mechanisms for unusual requests should be used. A similar reflex is needed in personal life as well: even if the voice of a relative asking for urgent money sounds familiar, calling back from a different number, verifying with a piece of shared knowledge, or confirming with another family member is no longer paranoia; it is digital hygiene.
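The multi-layered verification described above can be sketched as a simple approval policy. This is a minimal illustration in Python, not a real corporate system; the threshold, the check names, and the function itself are all assumptions made for the example.

```python
# Minimal sketch of a multi-channel payment approval policy.
# All names and thresholds are illustrative assumptions.

HIGH_VALUE_THRESHOLD = 10_000  # amount above which extra checks apply (assumed)

def approve_transfer(amount, video_call_approved, callback_confirmed,
                     second_signature, cooling_off_elapsed):
    """A transfer passes only when independent checks agree.

    A video-call approval alone is never sufficient: high-value requests
    must also be confirmed via an independent callback, carry a second
    signature, and survive a waiting period.
    """
    if amount < HIGH_VALUE_THRESHOLD:
        return video_call_approved and callback_confirmed
    return all([
        video_call_approved,   # what the meeting showed
        callback_confirmed,    # independent call to a pre-registered number
        second_signature,      # a second authorized approver
        cooling_off_elapsed,   # waiting mechanism for unusual requests
    ])

# A convincing deepfake meeting alone does not move money:
print(approve_transfer(25_000_000, True, False, False, False))  # False
print(approve_transfer(25_000_000, True, True, True, True))     # True
```

The point of the design is that no single channel, however convincing, is trusted on its own; an attacker would have to compromise several independent channels at once.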
“Not Believing” Is Also a Problem
Another interesting consequence of the deepfake era is this: as fake content increases, trust in real content also weakens. When a politician genuinely makes an inappropriate statement, they can say, “This is a deepfake.” When a real audio recording of a company executive emerges, the defense can be, “It was produced with artificial intelligence.” Even real footage of an event can be met with suspicion. In other words, as the ability to create fake content increases, denying the truth also becomes easier; legal scholars call this effect the “liar’s dividend.” Society faces not only the risk of believing the fake, but also the risk of not believing the real.
This makes media literacy much more critical. Digital citizenship in the future will require not only access to information, but also the ability to question the source, context, timing, and verifiability of that information. Before sharing a video, we need to check its source, avoid forming an opinion based on a single image, look for confirmation from reliable news organizations, and slow down a little when content pushes us to react emotionally and immediately. Because speed is the environment in which deepfakes thrive. They are built to make people angry, afraid, excited, and quick to share. If a piece of content tells us, “You have to see this now,” “Share this before everyone else,” or “Shocking footage,” perhaps that is exactly the moment when we need to stop and think.
What Should We Do Against Deepfakes?
Deepfake risk is no longer only the responsibility of cybersecurity departments. For finance teams, payment approval processes need to be reviewed. Standard verification procedures should be created for urgent requests from senior executives. Employees should be trained not only on phishing emails, but also on audio and video manipulation. Corporate spokespeople need to be prepared to do more in a crisis than simply say, “This content does not belong to us.”
In the deepfake era, individuals also need to develop some simple habits. It is important not to immediately act on urgent money requests that come from a familiar voice, to verify unusual requests through another channel even during a video call, to wait before sharing highly striking videos on social media, and to inform children and elderly family members about voice-cloning scams.
The purpose here is not to make everyone suspicious and anxious. But if the nature of trust in the digital world is changing, our reflexes need to change as well.
Perhaps in the future, every photo, video, and audio recording will have a kind of digital identity card. Information such as “This content was captured on this device,” “It was edited on this date,” or “These sections were altered using artificial intelligence” will become more visible. But it will take time for such systems to become widespread. And malicious actors will always look for ways to produce content outside such systems.
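A “digital identity card” for media would in practice be a signed provenance record attached to the file; industry efforts such as C2PA content credentials work along these lines. The sketch below is a heavily simplified illustration of the idea: the field names are invented for the example, and the shared HMAC key stands in for the device certificates a real system would use.

```python
import hashlib
import hmac
import json

SIGNING_KEY = b"device-secret-key"  # illustrative; real systems use device certificates

def make_provenance_record(content: bytes, device: str, edits: list) -> dict:
    """Attach a tamper-evident record to a piece of media (simplified sketch)."""
    record = {
        "content_sha256": hashlib.sha256(content).hexdigest(),
        "captured_on": device,
        "edits": edits,  # e.g. "cropped", "AI-altered: background"
    }
    payload = json.dumps(record, sort_keys=True).encode()
    record["signature"] = hmac.new(SIGNING_KEY, payload, hashlib.sha256).hexdigest()
    return record

def verify(content: bytes, record: dict) -> bool:
    """Check both the signature and that the content itself is unchanged."""
    claimed = dict(record)
    signature = claimed.pop("signature")
    payload = json.dumps(claimed, sort_keys=True).encode()
    expected = hmac.new(SIGNING_KEY, payload, hashlib.sha256).hexdigest()
    return (hmac.compare_digest(signature, expected)
            and claimed["content_sha256"] == hashlib.sha256(content).hexdigest())

video = b"...raw video bytes..."
rec = make_provenance_record(video, "Camera X, 2025-01-01", ["color corrected"])
print(verify(video, rec))              # True
print(verify(b"tampered bytes", rec))  # False
```

The record travels with the content; any alteration of either the media or its stated history breaks the verification, which is exactly what makes such a scheme useful as a trust signal.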
That is why technology alone will not be enough. Law, education, media, corporate culture, and individual awareness will have to progress together. Until now, we used to say, “Seeing is believing.” Perhaps from now on, we will need to say, “Verifying is believing.” This may sound cold and mechanical, but it is the new reality of the digital age. Deepfake technology does not have to drag us into pessimism. On the contrary, it is a powerful warning that teaches us to behave more consciously, carefully, and responsibly in the digital world. Just as we developed new habits against spam emails, fake websites, and phishing attacks in the early years of the internet, we will now develop new reflexes against AI-powered deception.
Mustafa İÇİL
