About a month ago, a deepfake video featuring Sudha Murty, chairperson of the Infosys Foundation, began circulating online, falsely portraying her as endorsing an investment platform named "Quantum AI." The manipulated video claims that Murty is promoting a scheme that enables individuals to earn substantial daily income through an AI trading platform. Investigations have confirmed that the video is fabricated, using AI technology to misrepresent Murty's image and voice.
Recent incidents have highlighted the misuse of deepfake technology to create fraudulent videos featuring prominent Indian figures like Sudha Murty and Mukesh Ambani, falsely portraying them as endorsing investment platforms such as "Quantum AI." These manipulated videos aim to deceive viewers into believing that these respected individuals are promoting schemes promising substantial financial returns.
Similarly, Narayana Murthy, founder of Infosys, has been targeted by deepfake videos falsely suggesting his endorsement of automated trading applications. He has publicly denied any association with such platforms and cautioned the public against falling prey to these fraudulent schemes.
These incidents underscore the growing misuse of deepfake technology to deceive individuals by misrepresenting trusted public figures. The public is advised to exercise caution and verify the authenticity of online content, especially when it involves financial investments and purported endorsements from reputable personalities.
Deepfakes are digitally manipulated media—videos, images, or audio—that use artificial intelligence (AI) to replace or alter someone's face, voice, or actions, making it appear as though they said or did something they never actually did. The term deepfake is derived from deep learning, a type of AI that enables computers to analyze and synthesize realistic human expressions, voices, and movements.
How Do Deepfakes Work?
Deepfakes are typically created using Generative Adversarial Networks (GANs) or other AI techniques that analyze large amounts of real video or audio data and generate hyper-realistic imitations.
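The adversarial idea behind GANs can be shown with a deliberately tiny, noiseless caricature. Real GANs pit two neural networks against each other and train them by gradient descent on sampled media; the sketch below is only an assumption-laden stand-in, where the "generator" has a single tunable number, the "discriminator's" best possible accuracy is computed in closed form, and hill climbing replaces gradient descent. All names and values are illustrative.

```python
import math

REAL_MEAN = 4.0  # illustrative "real data" distribution: N(4, 1)

def discriminator_accuracy(mu):
    # Best-possible accuracy of a threshold discriminator separating
    # real samples N(REAL_MEAN, 1) from generated samples N(mu, 1):
    # Phi(|REAL_MEAN - mu| / 2). An accuracy of 0.5 means the
    # discriminator is reduced to guessing.
    d = abs(REAL_MEAN - mu)
    return 0.5 * (1 + math.erf((d / 2) / math.sqrt(2)))

def train_generator(steps=200, step_size=0.1):
    # Hill-climbing stand-in for gradient descent: the generator keeps
    # any parameter nudge that makes the discriminator less accurate.
    mu = 0.0
    for _ in range(steps):
        for candidate in (mu - step_size, mu + step_size):
            if discriminator_accuracy(candidate) < discriminator_accuracy(mu):
                mu = candidate
    return mu

mu = train_generator()
print(round(mu, 1))                           # → 4.0
print(round(discriminator_accuracy(mu), 2))   # → 0.5
```

The generator's parameter drifts until its output distribution matches the real one, at which point the discriminator can do no better than chance — the same dynamic that, at vastly larger scale, lets GANs produce faces and voices a human cannot distinguish from real footage.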
Types of Deepfakes:
1. Face-Swapping Videos – Replacing one person's face with another's in a video.
2. Synthetic Audio – AI-generated voice imitations that sound like real people.
3. Text-to-Video Deepfakes – AI-generated videos where people appear to say words they never actually spoke.
4. Lip-Sync Deepfakes – AI manipulations that sync a person’s lips to an entirely new dialogue.
Are Deepfakes Dangerous?
Yes, deepfakes can be used for fraud, misinformation, blackmail, and scams. They have been used to:
1. Spread fake news and political misinformation.
2. Commit financial fraud by impersonating business executives.
3. Damage reputations by creating fake celebrity or influencer videos.
How to Detect Deepfakes?
1. Look for unnatural facial movements (blinking, lip-sync issues).
2. Check for inconsistencies in lighting and shadows.
3. Observe unnatural voice modulations or distortions.
4. Use AI detection tools designed to identify manipulated media.
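Checks like these can be partially automated. The following sketch is a crude, hedged illustration of the lighting-consistency idea — not a real deepfake detector: the frames are synthetic, and the brightness threshold is an arbitrary assumption. It simply flags frames whose average brightness jumps sharply from the previous frame.

```python
import random

random.seed(42)

def mean_brightness(frame):
    # frame: flat list of 8-bit grayscale pixel values
    return sum(frame) / len(frame)

def flag_inconsistent_frames(frames, threshold=25.0):
    # Flag frames whose average brightness jumps sharply relative to the
    # previous frame -- a crude proxy for the lighting and shadow
    # inconsistencies that real detection tools look for.
    flagged = []
    prev = mean_brightness(frames[0])
    for i, frame in enumerate(frames[1:], start=1):
        cur = mean_brightness(frame)
        if abs(cur - prev) > threshold:
            flagged.append(i)
        prev = cur
    return flagged

# Synthetic grayscale "video": steady lighting, except frame 3, where a
# spliced-in region brightens the whole frame abruptly.
frames = [[random.randint(90, 109) for _ in range(64 * 64)] for _ in range(6)]
frames[3] = [random.randint(150, 169) for _ in range(64 * 64)]

print(flag_inconsistent_frames(frames))  # → [3, 4]
```

Both the jump into frame 3 and the jump back out at frame 4 are flagged. Production detectors work on far richer signals (facial landmarks, blink rates, audio spectrograms, GAN fingerprints), but the principle is the same: manipulated media tends to break the physical consistency of genuine footage.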
Governments and tech companies are now working on AI detection systems and legal regulations to combat deepfake misuse.
Such fraudulent content exploits the credibility of well-known personalities to promote non-existent investment platforms, leading to financial scams. In response, Narayana Murthy publicly reiterated that he has no association with any automated trading application, emphasized the importance of vigilance, and urged individuals to report such instances to the relevant regulatory authorities.
These incidents highlight the urgent need for increased awareness and skepticism towards online content, especially when it involves financial investments and purported endorsements from public figures. Verifying the authenticity of such information through official channels is crucial to protect oneself from potential scams facilitated by advanced technologies like deepfakes.
The misuse of AI to create fraudulent videos poses a significant threat to public trust and financial security. As deepfake technology advances, staying vigilant and informed is crucial to avoiding scams. Sudha Murty's case serves as yet another reminder that online endorsements, especially those involving financial investments, must be thoroughly verified before taking action.
Authorities continue to work towards stricter regulations and awareness campaigns to curb the spread of deepfake scams. Meanwhile, individuals must remain cautious and skeptical of investment schemes that seem too good to be true.
© Copyright 2024. All Rights Reserved. Powered by Vygr Media.