Thu, Dec 4, 2025, 09:00 AM – 05:15 PM IST
Aman Taneja (@ataneja), Presenter
Submitted Dec 3, 2025
The Indian government has publicly articulated an innovation-first approach to AI governance, favouring light-touch, voluntary frameworks while leveraging amendments to existing laws to address emerging AI risks.
Synthetically generated content and deepfakes, in particular, have emerged as the most visible and politically sensitive manifestations of AI-related harms. From manipulated videos of public figures in compromising situations to their use in electoral contexts, deepfakes have drawn sustained regulatory and public scrutiny.
On 22 October 2025, the Ministry of Electronics and Information Technology (MeitY) published draft amendments to the Information Technology (Intermediary Guidelines and Digital Media Ethics Code) Rules, 2021 in relation to synthetically generated information (“Draft Amendments”). These were open for public consultation until 13 November 2025. However, MeitY has not yet finalised the framework — leaving significant space for further engagement with the government and for grounded stakeholder input.
The Draft Amendments introduce obligations for labelling synthetically generated information (SGI) under the IT Rules, 2021. Key proposals:
Definition of SGI: any information created, altered, or modified by computer or algorithmic means such that it “reasonably appears to be authentic or true.” This broad definition potentially captures routine algorithmic edits and AI-generated text, not only audiovisual deepfakes.
Labelling obligations: intermediaries offering AI-powered content creation or modification tools must prominently label SGI, with the draft prescribing labels that cover at least 10% of the visual display or audio duration (see the sketch after this list for what that threshold implies in practice).
Obligations on large platforms: platforms with 50 lakh+ (5 million+) registered users must obtain user declarations on whether uploaded content is synthetically generated and take reasonable measures to verify those declarations.
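To make the visual-labelling proposal concrete, the sketch below shows one way a platform might size and place a label so that it covers at least 10% of an image's display area. It is a minimal illustration, assuming a full-width banner at the bottom of the frame and the Pillow imaging library; the Draft Amendments do not prescribe any particular placement, wording, or implementation.

```python
# Minimal sketch: overlay a "synthetically generated" label covering
# at least 10% of an image's area, matching the draft's proposed threshold.
# Assumptions (not from the Draft Amendments): a full-width bottom banner
# with plain text is an acceptable label format.
import math

from PIL import Image, ImageDraw

LABEL_TEXT = "Synthetically generated content"
MIN_COVERAGE = 0.10  # 10% of the visual display, as proposed in the draft


def add_sgi_label(in_path: str, out_path: str) -> None:
    img = Image.open(in_path).convert("RGB")
    width, height = img.size

    # A full-width banner must be at least 10% of the image height
    # for its area to reach 10% of the total display area.
    banner_height = max(1, math.ceil(height * MIN_COVERAGE))

    draw = ImageDraw.Draw(img)
    # Draw the banner along the bottom edge.
    draw.rectangle([0, height - banner_height, width, height], fill="black")
    # Place the label text inside the banner (default font, for simplicity).
    draw.text((10, height - banner_height + banner_height // 3),
              LABEL_TEXT, fill="white")

    img.save(out_path)


if __name__ == "__main__":
    add_sgi_label("generated.jpg", "generated_labelled.jpg")
```

Even this simple geometry shows why a fixed percentage is contentious: the same rule applied to audio, vertical video, or very small thumbnails produces very different user-facing results.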
Industry submissions (NASSCOM, BSA, IFF) support regulating harmful deepfakes but call for narrower, risk-based rules focused on high-risk audiovisual content. Key concerns:
The SGI definition is too broad, pulling routine algorithmic edits and text generation into compliance.
Mandatory 10% visual/audio labelling is viewed as impractical, rigid, and out of sync with global standards.
Industry recommends narrowing the definition to high-risk audiovisual content, replacing the fixed 10% threshold with proportionate and flexible labelling norms, and aligning with emerging global standards for content provenance.
Countries such as the EU, Japan, Singapore, and Australia are exploring or implementing open technical frameworks for content authenticity that preserve space for satire, art, journalism, and accessibility use cases.
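For context, content-authenticity frameworks such as C2PA bind provenance information to a file through signed manifests. The sketch below is a heavily simplified, hypothetical stand-in for that idea, not the C2PA specification: it records a SHA-256 hash of a file together with basic provenance claims in a JSON sidecar, and later checks that the file has not changed since the manifest was written. All field names are illustrative.

```python
# Simplified illustration of hash-based content provenance.
# This is NOT the C2PA specification: real frameworks use signed manifests
# embedded in the asset itself; the JSON sidecar and field names here are
# hypothetical, chosen only to show the basic mechanics.
import hashlib
import json
from datetime import datetime, timezone


def _sha256(path: str) -> str:
    h = hashlib.sha256()
    with open(path, "rb") as f:
        for chunk in iter(lambda: f.read(8192), b""):
            h.update(chunk)
    return h.hexdigest()


def write_manifest(content_path: str, generator: str, is_synthetic: bool) -> str:
    """Write a JSON sidecar recording provenance claims for a file."""
    manifest = {
        "content_sha256": _sha256(content_path),
        "generator": generator,                   # e.g. the tool that produced it
        "synthetically_generated": is_synthetic,  # the declaration itself
        "created_utc": datetime.now(timezone.utc).isoformat(),
    }
    manifest_path = content_path + ".provenance.json"
    with open(manifest_path, "w") as f:
        json.dump(manifest, f, indent=2)
    return manifest_path


def verify_manifest(content_path: str, manifest_path: str) -> bool:
    """Check that the file still matches the hash recorded in its manifest."""
    with open(manifest_path) as f:
        manifest = json.load(f)
    return manifest["content_sha256"] == _sha256(content_path)
```

An unsigned sidecar like this only demonstrates the mechanics; production frameworks additionally sign the manifest and embed it in the asset so the provenance claim travels with the content and tampering can be detected.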