The Indian government has proposed new rules that would make it mandatory for social media platforms and artificial intelligence (AI) companies to clearly label content created using AI. The move aims to curb the spread of "deepfakes" (realistic but fabricated videos, images, or audio clips) and to reduce the risks of misinformation, harm, and manipulation among India's nearly one billion internet users.
The Ministry of Electronics and Information Technology introduced these draft amendments to the Information Technology Rules, 2021, out of concern that AI tools, while powerful engines of creativity and innovation, can also be misused. Fake AI-generated content can spread false information, damage reputations, influence elections, or enable financial fraud. In India's diverse society, with its many ethnic and religious groups, such misleading content can even incite serious conflict.
Under the proposal, any AI-generated image, video, or audio shared online must carry clear labels or markers informing users that the content is synthetic. For visuals, the label must cover at least 10% of the image or video frame; for audio, a disclosure must play during the first 10% of the clip's duration. Platforms such as Facebook, YouTube, and X (formerly Twitter) will be required to have users declare whether what they share is AI-generated, deploy technical measures to verify these declarations, and prevent the labels from being altered or hidden. Failure to comply could cost platforms the "safe harbour" protections that currently shield them from liability for user-posted content.
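To make those thresholds concrete, the arithmetic can be sketched in a few lines of Python. This is purely illustrative: the draft rules specify no code, and the function names and inputs below are hypothetical.

```python
# Illustrative sketch of the proposed 10% thresholds. The draft rules
# specify no implementation; these names and inputs are hypothetical.

def visual_label_compliant(label_area_px: int, frame_area_px: int) -> bool:
    """Proposed rule: the label must cover at least 10% of the visual's area."""
    return label_area_px >= 0.10 * frame_area_px

def audio_disclosure_compliant(disclosure_end_s: float, clip_duration_s: float) -> bool:
    """Proposed rule: the disclosure must play within the first 10% of the clip."""
    return disclosure_end_s <= 0.10 * clip_duration_s

# Example: a 1920x1080 frame (2,073,600 px) needs a label of at least
# 207,360 px, and a 60-second clip needs its disclosure within the
# first 6 seconds.
print(visual_label_compliant(200_000, 1920 * 1080))  # False: label too small
print(audio_disclosure_compliant(5.0, 60.0))         # True: within first 6 s
```

In practice, platforms would need far more than such a check, including detecting whether content is AI-generated at all, but the sketch shows how mechanical the proposed thresholds themselves are.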
Experts see this as an important step toward ensuring authenticity in digital media. Clear labeling helps users distinguish real content from synthetic and supports the responsible use of AI. However, implementing these rules will require cooperation between the government, technology companies, and the public to create practical standards and frameworks.
Concerns remain about enforcement and about striking a balance between regulation and innovation. Labels will increase transparency, but it is unclear how quickly platforms can adapt and whether users will consistently heed such warnings. The government has invited comments from the public and industry stakeholders until November 6, 2025, before finalizing the rules.
The move reflects a growing global trend: the European Union's AI Act and China's labeling rules for synthetic content likewise rely on transparency to curb the harms associated with AI-generated deepfakes.
In conclusion, India's proposed labeling regime for AI-generated content seeks to protect users from digital deception without stalling technological progress. It is a measured effort to build trust and safety in the rapidly evolving world of AI, a reminder that with new power come new responsibilities. Users should remain cautious about what they see and share online, while platforms must accept greater accountability for the content they host.
With inputs from agencies