MeitY invites public comments on AI Governance Guidelines; recommends comprehensive framework to tackle cybersecurity, deepfakes, bias and IP challenges in India’s emerging AI ecosystem

The Ministry of Electronics and Information Technology (MeitY) has invited public comments on the Subcommittee’s report on the Development of AI Governance Guidelines. Feedback is requested by 27th January 2025 through the designated link.

Key Highlights:

Gap Analysis:

A gap analysis of existing frameworks is necessary to address key areas such as:

  • Deepfakes/Fake Content/Malicious Content
  • Cybersecurity
  • Intellectual Property Rights
  • AI-Induced Bias and Discrimination

Existing Legal Framework:

  • The report recommends aligning AI governance with existing legal frameworks to mitigate risks related to malicious synthetic media and cybersecurity breaches, including:
  • Information Technology Act, 2000: Prohibits cheating by personation (Section 66D), capturing and publishing private images without consent (Section 66E), and transmitting obscene material (Sections 67A, 67B);
  • Indian Penal Code (“IPC”) and Bharatiya Nyaya Sanhita (“BNS”): Cover identity theft, forgery, and defamation (e.g., IPC Sections 419, 465, and 499);
  • Information Technology (Intermediary Guidelines and Digital Media Ethics Code) Rules, 2021: Prohibits dissemination of harmful content (Rule 3(1)(b)); requires intermediaries to notify users of the consequences of non-compliance (Rule 3(1)(c)); and mandates removal of impersonated or morphed content within 24 hours (Rule 3(2)(b));
  • CERT-In (Section 70B of the IT Act) and NCIIPC (Sections 70, 70A): Handle incident reporting and critical information infrastructure protection;
  • Digital Personal Data Protection Act (“DPDPA”): Mandates robust data security safeguards;
  • Ensure AI systems are “secure by design” and comply with existing cybersecurity norms;
  • Equip corporates with tools to detect, prevent, and report malicious AI-generated media and cybersecurity incidents promptly;
  • Upgrade compliance frameworks and enforcement mechanisms to address AI’s evolving risks;

Proposed AI Governance Principles:

  1. Transparency: AI systems should provide clear information about their development, processes, capabilities, and limitations. They should be interpretable and explainable where appropriate. Users must be informed when interacting with AI.
  2. Accountability: Developers and deployers are responsible for the AI system’s function, outcomes, and user rights. Accountability mechanisms should be in place to clarify responsibility.
  3. Safety, Reliability, and Robustness: AI systems must be developed and used safely, reliably, and robustly, minimizing risks, errors, and misuse. Regular monitoring should ensure systems meet specifications and perform as intended.
  4. Privacy and Security: AI systems should comply with data protection laws and respect user privacy. They should be designed with data quality, integrity, and security in mind.
  5. Fairness and Non-Discrimination: AI systems should be inclusive, fair, and free from bias or discrimination. They should promote equality for all individuals, communities, and groups.
  6. Human-Centred Values and “Do No Harm”: AI systems should be subject to human oversight and intervention where needed, to prevent excessive reliance on AI and to address complex ethical issues. They should align with the rule of law and mitigate negative societal impacts.
  7. Inclusive and Sustainable Innovation: AI should be developed and deployed to ensure equitable distribution of benefits, contributing to positive outcomes and sustainable development goals for all.
  8. Digital Governance: AI governance should utilize digital technologies to improve regulatory systems and ensure compliance with laws and principles through appropriate technological and legal measures.

Key Concepts for Operationalizing AI Governance:

  1. Examining AI systems using a lifecycle approach: A lifecycle approach to AI systems is essential for effectively implementing principles and managing risks, as these risks vary across different stages of a system’s lifecycle. The approach includes three broad stages: development, deployment, and diffusion. The development stage involves the design, training, and testing of the system, while the deployment stage focuses on putting the AI system into operation. The diffusion stage takes a long-term perspective, considering the widespread use and implications of multiple AI systems across various sectors. To ensure comprehensive governance, it is important to consider all stages of the lifecycle when operationalizing AI principles.
  2. Taking an ecosystem-view of AI actors: AI systems involve multiple actors throughout their lifecycle, forming an ecosystem. Key actors include Data Principals, Data Providers, AI Developers, AI Deployers, and End-users. Traditional governance often focuses on individual actors, but a more effective approach considers the entire ecosystem, promoting better outcomes. This holistic view helps clarify the distribution of responsibilities and liabilities among the different actors.
  3. Leveraging technology for governance: The rapidly expanding ecosystem of AI models, systems, and actors presents challenges for governance due to its complexity, speed, and evolving nature. Traditional “command-and-control” governance strategies may not effectively manage this space. A proposed solution is a “techno-legal” approach, integrating legal and regulatory frameworks with technological tools to support governance, scale compliance, and enhance monitoring. This approach would involve technologies to track regulatory obligations and liabilities across the AI value chain, encouraging self-regulation among ecosystem players. As a starting point, there is merit in examining how technology artefacts, similar to the “consent artefacts” already proposed by MeitY in its Electronic Consent Framework, could be used to support AI governance.

Background:

In recognition of the need for an India-specific approach to AI governance, a multistakeholder Advisory Group was constituted to undertake development of an ‘AI for India-Specific Regulatory Framework’. Under the guidance of the Advisory Group, a Subcommittee on ‘AI Governance and Guidelines Development’ was constituted to provide actionable recommendations for AI governance in India. Based on its extensive deliberations, the report outlines a series of recommendations that aim to shape the future of AI governance in India. These recommendations are based on a careful review of the current legal and regulatory context and reflect the Subcommittee’s independent perspective on fostering AI-driven innovation while safeguarding public interests.

The Report is linked below for your ease of reference.

Source: Ministry of Electronics and Information Technology
