
Presenter: Aman Taneja (@ataneja)

Shaping India’s AI regulations: A dialogue on MeitY’s draft IT Rules on Synthetic Content

Submitted Dec 3, 2025

Background note - Shaping India’s AI regulation: a dialogue on MeitY’s draft IT Rules on synthetic content


I. Background

The Indian government has publicly articulated an innovation-first approach to AI governance, favouring light-touch, voluntary frameworks while leveraging amendments to existing laws to address emerging AI risks.

Synthetically generated content and deepfakes, in particular, have emerged as the most visible and politically sensitive manifestations of AI-related harms. From manipulated videos placing public figures in compromising situations to synthetic media deployed in electoral contexts, deepfakes have drawn sustained regulatory and public scrutiny.

On 22 October 2025, the Ministry of Electronics and Information Technology (MeitY) published draft amendments to the Information Technology (Intermediary Guidelines and Digital Media Ethics Code) Rules, 2021 in relation to synthetically generated information ("Draft Amendments"). These were open for public consultation until 13 November 2025. However, MeitY has not yet finalised the framework, leaving significant space for further engagement with the government and for grounded stakeholder input.


II. What the draft amendments propose

The Draft Amendments introduce obligations for labelling synthetically generated information (SGI) under the IT Rules, 2021. Key proposals:

a. Definition of “Synthetically Generated Information” (draft Rule 2(wa))

Covers any information created, altered, or modified by computer or algorithmic means such that it “reasonably appears to be authentic or true.”
This broad definition potentially captures:

  • obviously doctored memes
  • highly realistic AI-generated videos
  • routine edits and transformations

b. Labelling requirements at point of creation (draft Rule 3(3))

Intermediaries offering AI-powered content creation or modification must:

  • ensure such content is prominently labelled or embedded with a permanent unique identifier (metadata)
  • apply prescriptive labelling: a visible label covering at least 10% of the visual area, or an audio disclosure covering at least 10% of the clip's duration
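To illustrate how prescriptive the 10% threshold is, the minimum size of an on-screen label can be computed directly from frame dimensions. This is a toy sketch: the draft rules specify the 10% coverage, but the square shape and pixel-level rounding here are assumptions for illustration.

```python
import math

def min_square_label_side(width_px: int, height_px: int, coverage: float = 0.10) -> int:
    """Side length (px) of the smallest square label covering `coverage`
    of the frame area, rounded up to a whole pixel."""
    return math.ceil(math.sqrt(coverage * width_px * height_px))

def min_audio_disclosure_seconds(duration_s: float, coverage: float = 0.10) -> float:
    """Minimum disclosure length for an audio clip under the same 10% rule."""
    return coverage * duration_s
```

For a 1080p frame (1920x1080), a compliant square label works out to roughly 456x456 pixels, which gives a concrete sense of why industry submissions describe the requirement as visually intrusive.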

c. Enhanced obligations on Significant Social Media Intermediaries (SSMIs) (draft Rule 4(1A))

Platforms with 50 lakh (5 million) or more registered users must:

  1. Require users to declare whether content is SGI
  2. Deploy technical measures to verify such declarations
  3. Ensure SGI content is clearly labelled when published
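The three obligations above amount to a publish-time gate that combines the user's declaration with the platform's own verification. A minimal sketch follows; note that the draft does not specify how the two signals interact, so treating either one as sufficient to trigger labelling is an assumption made here for illustration.

```python
def sgi_publish_action(user_declared_sgi: bool, platform_detected_sgi: bool) -> str:
    """Toy decision gate for draft Rule 4(1A): if either the user's declaration
    or the platform's technical check flags the content as SGI, the content
    must carry a clear SGI label when published."""
    if user_declared_sgi or platform_detected_sgi:
        return "publish_with_sgi_label"
    return "publish_unlabelled"
```

Even this toy version surfaces a real design question for SSMIs: what happens when the user's declaration and the platform's detector disagree, and which signal should prevail.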

III. What the session intends to cover

a. Scope of SGI and everyday AI use

  • Impact of the broad SGI definition on routine AI tasks: editing, filters, translation, accessibility, generative tools.

b. Technical feasibility of mandatory labelling & provenance

  • Practical challenges in implementing visible labels, metadata, and watermarking across tools and platforms.

c. Global standards vs. indigenous provenance systems

  • Comparisons with global standards such as C2PA
  • Whether India should align or build domestic frameworks

d. User awareness & digital literacy

  • Role of interface design and public education in helping users distinguish synthetic from authentic content

e. Long-term implications for India’s AI governance

  • How prescriptive labelling norms may shape future AI regulation
  • Sustainability of these rules as AI capabilities evolve

IV. Industry feedback & concerns raised

Industry submissions (NASSCOM, BSA, IFF) support regulating harmful deepfakes but call for narrower, risk-based rules focused on high-risk audiovisual content. Key concerns:

  • The SGI definition is too broad, pulling routine algorithmic edits and text generation into compliance.

  • Mandatory 10% visual/audio labelling is viewed as impractical, rigid, and out of sync with global standards.

  • Industry recommends:

    • flexible provenance tools
    • invisible watermarking
    • metadata
    • cryptographic signatures
    • alignment with C2PA and international frameworks
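Of the alternatives listed, a metadata-plus-signature scheme is the most straightforward to sketch with standard-library primitives. The scheme below is purely illustrative: it is not anything proposed in the Draft Amendments, and it differs from C2PA, which uses X.509 certificate-based signing rather than a shared HMAC key.

```python
import hashlib
import hmac
import json

SIGNING_KEY = b"demo-key"  # hypothetical; production systems would use managed or HSM-backed keys

def sign_provenance(content: bytes, tool: str) -> dict:
    """Attach provenance metadata and an HMAC-SHA256 tag binding it to the content."""
    record = {"sha256": hashlib.sha256(content).hexdigest(), "tool": tool}
    payload = json.dumps(record, sort_keys=True).encode()
    record["tag"] = hmac.new(SIGNING_KEY, payload, hashlib.sha256).hexdigest()
    return record

def verify_provenance(content: bytes, record: dict) -> bool:
    """Recompute the tag; any edit to the content or its metadata invalidates it."""
    claimed = dict(record)
    tag = claimed.pop("tag", "")
    if claimed.get("sha256") != hashlib.sha256(content).hexdigest():
        return False
    payload = json.dumps(claimed, sort_keys=True).encode()
    expected = hmac.new(SIGNING_KEY, payload, hashlib.sha256).hexdigest()
    return hmac.compare_digest(expected, tag)
```

The point industry submissions make is that machine-verifiable provenance of this kind can travel with the file invisibly, unlike a mandated 10% visual overlay.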

Jurisdictions such as the EU, Japan, Singapore, and Australia are exploring or implementing open technical frameworks for content authenticity that allow space for satire, art, journalism, and accessibility use cases.

