When Sadhguru Says ‘Invest,’ Bengaluru Lady Takes a ₹3.75 Crore Leap of Faith

Sep 16, 2025
2 min read

A 57-year-old retired woman from Bengaluru lost ₹3.75 crore after being defrauded through an AI-generated deepfake video of spiritual leader Sadhguru Jaggi Vasudev. The video falsely showed Sadhguru endorsing a stock trading platform and promising large profits from a small investment of $250. Believing it to be genuine, the woman invested large sums over several months, realizing it was a scam only when she was unable to withdraw her funds. The case highlights the growing danger posed by sophisticated AI-driven scams that leverage celebrity deepfakes, and raises urgent questions about digital literacy, regulation, and online security.

Between late February and April 2025, the Bengaluru resident, Varsha Gupta (name changed in some reports), came across a YouTube reel showing a convincing AI-generated video of Sadhguru. In the video, the spiritual guru appeared to promote a trading platform called Mirrox, urging viewers to invest just $250 to improve their finances. Alongside the video, a clickable link directed her to register by submitting personal details including name, email, and phone number.

She was soon contacted by a man identifying himself as Waleed B, a purported company representative who gave her trading lessons via Zoom. Another accomplice, Michael C, guided her investments through the Mirrox app. Over several transactions made through bank transfers and credit cards, she gradually moved a total of ₹3.75 crore into the scammers’ accounts.

When the victim attempted to withdraw profits, the site blocked her requests, revealing the fraudulent nature of the scheme. She approached the police five months later to report the matter. Authorities have registered the complaint and are investigating under cybercrime laws.

Deepfake technology uses artificial intelligence and deep learning algorithms to create highly realistic but fabricated videos or audio recordings of real individuals, showing them saying or doing things they never did. In this case, the fraudsters used AI to generate a video of Sadhguru endorsing an investment platform, easily deceiving viewers unfamiliar with how convincing such forgeries can be.

The Delhi High Court, earlier in 2025, granted protection orders for Sadhguru’s personality rights against unauthorized AI-generated content, illustrating how deepfakes have become a prevalent problem for public figures.

This incident is part of a disturbing rise in digital frauds that use AI. Criminals now deploy fake videos and voices to impersonate celebrities, officials, or other trusted figures in order to steal money, identities, or sensitive information. The scams range from bogus investment schemes to impersonations of law-enforcement officers who threaten arrest to extort “fines.”

The Telecom Regulatory Authority of India (TRAI) recently warned of growing cyber frauds, especially cases where fraudsters impersonate officials or use fake legal notices to pressure victims into payments. The "digital arrest" scam, involving fake court hearings and staged calls, is another sophisticated example.

For ordinary citizens, the challenge lies in recognizing such scams and resisting manipulation by highly convincing but false digital media.

  • Awareness of deepfakes: Many victims, like the Bengaluru woman, are unaware of how AI can forge videos or voices. Increasing digital literacy around deepfake technology is crucial.

  • Verification before investing: Always verify the legitimacy of investment platforms and offers, especially those promoted in social media videos or apparently endorsed by celebrities.

  • Cautious sharing of personal data: Providing names, emails, phone numbers, or banking information on unfamiliar sites can lead to identity theft or financial loss.

  • Report early: Victims should report suspicious activities or scams immediately to prevent further losses.

The Bengaluru case underscores how AI-driven frauds built on deepfake videos are evolving into a serious threat to public finances and online trust. The same technology that enables powerful tools also demands heightened vigilance and stronger regulation to protect individuals, especially vulnerable populations. With cases rising, the incident is a clear warning to treat financial offers encountered online with extra caution, even when they appear to come from familiar faces such as celebrities. Educating people about these digital risks must become a top priority for communities and authorities alike if such costly deceptions are to be prevented.

With inputs from agencies


