Tech Beetle briefing IN

Centre Amends IT Rules to Mandate AI Content Labels and Expedite Deepfake Takedowns

Essential brief

Key facts

The Indian government has amended the IT Rules 2026 to mandate clear labeling of AI-generated content on digital platforms.
The takedown window for deepfake content has been reduced to two hours, requiring faster action from social media companies.
The initial proposal for large, fixed-size watermarks on AI content was dropped in favor of more flexible labeling requirements.
Social media platforms must enhance content moderation and grievance redressal mechanisms to comply with the new rules.
These changes aim to increase transparency, reduce misinformation, and protect users from harmful synthetic media online.

In a significant move to regulate digital content, the Indian government has notified the amended Information Technology (IT) Rules, 2026, aimed at enhancing oversight over social media platforms and online intermediaries. These amendments introduce a stricter compliance framework, particularly focusing on the proliferation of deepfakes and AI-generated content. The revised rules mandate that platforms clearly label AI-generated content to ensure transparency and help users discern authentic information from synthetic media.

One of the key changes in the new IT Rules is the reduction of the takedown window for deepfake content. Previously, platforms were given a longer timeframe to remove such harmful content, but under the amended rules, the deadline has been shortened to just two hours from the time of receiving a complaint. This accelerated response requirement underscores the government's intent to swiftly curb the spread of misleading or manipulated media that can cause significant harm to individuals and society.

The government initially proposed that AI-generated content should carry large, fixed-size watermarks to make such content easily identifiable. However, this requirement has been dropped in the final version of the rules. Instead, the focus is on mandating clear labels that indicate the synthetic nature of the content without imposing rigid watermarking standards. This shift likely reflects a balance between regulatory oversight and the practical challenges of implementing uniform watermarking across diverse platforms and content types.

These amendments also expand the scope of accountability for social media intermediaries, requiring them to implement more robust mechanisms for content moderation and user grievance redressal. Platforms like X (formerly Twitter), Facebook, Instagram, and others will need to enhance their monitoring capabilities to comply with the new timelines and labeling requirements. The rules aim to create a safer digital environment by reducing the circulation of deceptive content, which has been a growing concern amid the rise of AI-generated media.

The implications of these changes are far-reaching. For users, the labeling of AI content promotes greater awareness and critical evaluation of online information. For social media companies, the accelerated takedown timeline and compliance obligations necessitate investments in technology and human resources to detect and act on deepfake content promptly. Moreover, the removal of the fixed watermark mandate may encourage innovation in how platforms identify and disclose synthetic content, possibly leading to more user-friendly and less intrusive methods.

Overall, the amended IT Rules 2026 represent a proactive step by the Indian government to address the challenges posed by emerging technologies like AI in the digital content ecosystem. By enforcing transparency and swift action against harmful synthetic media, the regulations seek to protect users and uphold the integrity of information shared online. As AI-generated content continues to evolve, these rules may serve as a model for other jurisdictions grappling with similar issues.