Policing the digital economy requires what might seem disproportionate resources, and a recent case in Hyderabad illustrates why. A retired doctor was persuaded to invest more than ₹20 lakh after viewing a video on Instagram in which Union Finance Minister Nirmala Sitharaman appeared to endorse an investment scheme. The video was a deepfake. Similar videos featuring other public figures have been in circulation, lending credibility to fraudulent cryptocurrency platforms.

Such scams exploit the limited technical literacy of the wider population, regulatory gaps in cryptocurrency trading, the use of Artificial Intelligence (AI)-generated deepfakes, and the limited response of social media platforms. Despite wide smartphone penetration, many users are still unable to recognise online manipulation, and are further swayed by the promise of rapid profits and fabricated evidence of gains. Complaints often arise only after attempts to withdraw returns are blocked. Public awareness campaigns remain uneven and often too general, leaving many people vulnerable to increasingly sophisticated forms of deception. Most countries, including India, also do not yet classify crypto assets with the same clarity as conventional securities, creating an environment in which fraudsters operate with impunity. Many fraudulent platforms are hosted abroad, operate through complex chains of wallets, and can disappear overnight. While police units have developed capacity, their reach stops at national borders.
Social media platforms, which serve as the principal channel for these scams, often respond passively. While companies such as Instagram publish advisories on avoiding scams and offer reporting mechanisms, fraudulent videos and accounts remain accessible until they are removed. Platform policies emphasise user self-protection rather than proactive detection, which means that scams circulate long enough to entrap victims before takedown requests are processed. The scale of global content slows manual review, while automated moderation systems remain limited in their ability to detect manipulated videos. As private entities that profit from user engagement, platforms prefer to avoid sustained monitoring that would involve intrusive scrutiny of user uploads. The result is that deepfake scams are treated as individual incidents rather than as systemic vulnerabilities.

Three measures are necessary. First, governments must define standards for the registration and disclosure of cryptocurrency platforms, and for cross-border cooperation, to limit the space in which fraudulent schemes operate. Second, technical literacy must be treated as a public policy priority; awareness efforts should be continuous and supported by educational institutions, rather than limited to periodic campaigns by police units. Third, social media platforms should be required to detect and remove fraudulent content proactively. Without these measures, such scams will continue to exact huge human and material costs.