Deepfakes and the death of truth


Imagine waking up to find a video of yourself going viral. Only, it's not really you. It shows your face, your voice, your expressions, but you never said or did what the video claims. You try to explain it's fake, but the damage has already been done. People have seen it. For many women and young people, this isn't just a nightmare; it's already a reality. Deepfakes are a type of synthetic media created using artificial intelligence. They allow people to manipulate videos, audio, or images to make someone appear to say or do things they never did. With just a few photos or a few seconds of video, often taken from public social media profiles, AI can create a digital version of someone that looks and sounds alarmingly real. While the technology itself is fascinating, its misuse is turning it into a serious threat.

At first, deepfakes were mostly used for fun: swapping faces with movie characters, creating celebrity voiceovers, or bringing historical photos to life. But it didn't take long for things to take a darker turn. Today, deepfakes are being used to harass, mislead, scam, and manipulate, and more often than not, the victims are ordinary people.

Women are among the most vulnerable. Across the globe, including in India, women have found their faces inserted into explicit videos without consent. These deepfake pornographic clips are then circulated online, often anonymously. The victims are left to deal with shame, trauma, and judgment, sometimes even from those closest to them. What’s even more heartbreaking is that proving a video is fake doesn’t always fix the damage. Once something is out there, it spreads fast and the Internet rarely forgets.

Teenagers and students have also become easy targets. Some use deepfake apps to “prank” classmates by putting their faces on embarrassing or inappropriate videos. But what starts as a joke often causes lasting emotional harm. For a young person, one fake video can damage friendships, self-esteem, and even mental health.

It doesn't end with individuals. Deepfakes have reached the political world too. Fake videos have been created showing leaders saying inflammatory or false things, with the potential to cause unrest or discredit opponents. In one known case, a European company lost over ₹2 crore because a deepfake voice mimicking its CEO asked an employee to transfer funds. The voice was so convincing that the employee didn't doubt it for a second.

What makes all this worse is how easy it has become to create a deepfake. Just a few years ago, it required technical skill and expensive tools. Today, anyone with a smartphone and an app can make one. The technology is advancing quickly, often faster than our ability to detect or stop deepfakes. The scariest part? Deepfakes make us doubt our own senses. We’ve always trusted what we can see and hear. But now, even that can be faked. This opens the door to something called the “liar’s dividend” — a situation where even real evidence can be dismissed as fake.

So, what can be done? First, we need technology to fight technology. Just as AI creates deepfakes, it can also be used to detect them. Tech companies such as Microsoft and Meta are working on tools that can analyse videos and flag signs of manipulation. These tools should be made available to the public so that people can verify suspicious videos themselves.

Second, we need strong and specific laws. In India, while existing laws on cybercrime, defamation, and privacy may apply in some cases, there is no direct law that deals with deepfakes. We need legislation that criminalises harmful deepfakes, protects victims, ensures quick removal of content, and holds platforms accountable.

Third, awareness is key. Schools, colleges, and communities must include digital safety as part of education. People of all ages should learn to pause and think before believing or sharing what they see online. Young people especially need to understand that misusing this technology, even as a joke, can have serious consequences.

In a world full of digital lies, staying informed and human is our strongest defence.
