Understanding India's New Rules on AI-Generated Content and Deepfakes
On February 10, 2026, the Ministry of Electronics and Information Technology (MeitY) in India introduced significant amendments to the Information Technology (Intermediary Guidelines and Digital Media Ethics Code) Rules, 2021. These changes explicitly extend the law's scope to cover "synthetically generated information," a term that includes AI-generated content and deepfakes. The update reflects the government's proactive approach to the challenges posed by rapidly advancing artificial intelligence technologies and their impact on digital media.
The amended rules impose stricter compliance requirements on online platforms hosting AI-generated or synthetic content. One of the key mandates is the clear labelling of such content to ensure transparency for users. Platforms must also maintain permanent metadata associated with synthetic content, which helps trace its origin and verify its authenticity. These measures aim to combat misinformation, manipulation, and other misuse of AI-generated media that can mislead the public or harm individuals.
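The rules mandate labelling and durable metadata but, as described here, do not prescribe a specific schema. As a rough illustration of what a platform-side provenance record could look like, the sketch below builds a minimal labelled record tied to the content's bytes via a hash; all field names and the `make_provenance_record` helper are hypothetical assumptions, not terms from the rules:

```python
import hashlib
import json
from datetime import datetime, timezone

def make_provenance_record(content: bytes, generator: str) -> dict:
    """Build an illustrative provenance record for a piece of synthetic content.

    The field names are assumptions for illustration only; the amended
    IT Rules require labelling and metadata but do not fix this schema.
    """
    return {
        # Explicit label so downstream consumers can flag the content to users
        "synthetically_generated": True,
        # The tool or model said to have produced the content
        "generator": generator,
        # Timestamp recording when the record was created
        "created_utc": datetime.now(timezone.utc).isoformat(),
        # Hash ties the metadata to these exact bytes, supporting later verification
        "sha256": hashlib.sha256(content).hexdigest(),
    }

record = make_provenance_record(b"example synthetic image bytes", "hypothetical-model-v1")
print(json.dumps(record, indent=2))
```

A real deployment would more likely use an established provenance standard (for example, embedding signed manifests in the media file itself) rather than a loose JSON sidecar, but the core idea of a machine-readable label plus a content-binding hash is the same.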
In addition to the labelling and metadata requirements, the government has introduced faster takedown timelines for AI-generated content that violates laws or guidelines. This accelerated framework compels intermediaries to remove harmful or unlawful synthetic content swiftly, shrinking the window in which it can spread. The rules also emphasize platform accountability, requiring robust mechanisms for monitoring and managing AI-generated information.
These regulatory changes come amid growing global concerns about the misuse of AI in creating deceptive media, including deepfakes that can impersonate individuals or fabricate events. By mandating transparency and quicker enforcement actions, the Indian government seeks to safeguard public discourse and maintain trust in digital platforms. The amendments also encourage platforms to develop better detection and verification technologies to comply with the new norms.
The implications are significant for social media companies, content creators, and users in India. Platforms will need to invest in technical infrastructure and policy frameworks to identify synthetic content and ensure compliance. Content creators using AI tools must follow the labelling requirements to avoid penalties. For users, the rules aim to provide clearer context about the nature of the content they consume, supporting digital literacy and informed decision-making.
Overall, India's updated intermediary guidelines represent a critical step in regulating the evolving digital landscape shaped by AI technologies. By addressing the challenges of synthetic media proactively, the government is setting a precedent for responsible AI governance that balances innovation with public safety and ethical considerations.