MeitY issues India AI Governance Guidelines introducing a coordinated oversight structure

The Ministry of Electronics and Information Technology (“MeitY”) has issued the India AI Governance Guidelines (“Guidelines”), setting out a national framework to govern how Artificial Intelligence (“AI”) systems are developed, deployed and supervised across the country. The framework promotes voluntary compliance as a first step, after which binding legal enforcement may follow where necessary. MeitY may publish a compliance schedule for these measures within the next 9 to 12 months.

Key Highlights:

  1. The Guidelines are structured in four parts:
    1. Part 1 sets out seven sutras grounding India’s AI governance philosophy – Trust, People First, Innovation over Restraint, Fairness & Equity, Accountability, Understandable by Design, and Safety, Resilience & Sustainability. These are technology-agnostic and apply across all sectors.
    2. Part 2 examines key issues and provides recommendations across six pillars: infrastructure, capacity building, policy and regulation, risk mitigation, accountability and institutions.
    3. Part 3 presents a short-, medium- and long-term action plan to operationalise these recommendations through a whole-of-government approach supported by the AI Governance Group, the Technology & Policy Expert Committee, and the AI Safety Institute (“AISI”) for technical validation and safety research.
    4. Part 4 provides practical implementation guidelines for industry and regulators to ensure responsible and consistent adoption of the governance framework.
  2. Earlier, AI governance in India operated in a decentralised and uncoordinated manner. The Guidelines now establish a structured oversight mechanism by creating a central policy body for AI, supported by a technical expert group and a dedicated safety institute, ensuring that policymaking, technical evaluation and safety testing operate in a unified manner rather than in separate silos.
  3. Earlier, accountability for harm caused by AI systems was not structured. The Guidelines now introduce a detailed accountability framework requiring:
    1. Mandatory transparency measures, including:
      • Disclosure of how AI systems make decisions.
      • Risk mitigation steps undertaken.
      • Information on data sources, model behaviour and safety evaluations.
      • Periodic transparency reports to regulators.
    2. Organisational compliance mechanisms, including:
      • Internal policies covering AI use, development and deployment.
      • Self-certifications on adherence to safety and fairness measures.
      • Third-party audits, where required, for high-impact systems.
      • Maintenance of audit trails for decisions influenced by AI (a minimal sketch follows this list).
    3. Grievance redressal obligations, ensuring:
      • A designated channel for reporting AI-related harm.
      • Processing and closure of complaints within defined timelines.
      • Integration of user feedback into risk mitigation actions.
      • Multi-language accessibility for users.
    4. Enforcement through existing laws, with agencies empowered to:
      • Seek information from developers and deployers.
      • Investigate misuse of AI resulting in statutory violations.
      • Take action for non-compliance with safety or reporting obligations.
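To illustrate the audit-trail obligation above, the following is a minimal Python sketch of how a deployer might record decisions influenced by an AI system. The helper, field names and file layout are our own illustrative assumptions; the Guidelines require audit trails but do not prescribe any particular format.

```python
# Illustrative audit-trail writer for AI-influenced decisions.
# Assumed design, not a format mandated by the Guidelines.
import hashlib
import json
from datetime import datetime, timezone

def record_decision(log_path: str, model_id: str, model_version: str,
                    input_payload: dict, output_payload: dict,
                    human_reviewer: str | None = None) -> None:
    """Append one record of an AI-influenced decision to a JSON-lines audit log."""
    entry = {
        "timestamp": datetime.now(timezone.utc).isoformat(),
        "model_id": model_id,
        "model_version": model_version,
        # Hash the input rather than storing it raw, to limit personal data in the log.
        "input_sha256": hashlib.sha256(
            json.dumps(input_payload, sort_keys=True).encode()).hexdigest(),
        "output": output_payload,
        "human_reviewer": human_reviewer,  # None if the decision was fully automated
    }
    with open(log_path, "a", encoding="utf-8") as f:
        f.write(json.dumps(entry) + "\n")

# Hypothetical usage: a deployer logging a credit-screening decision.
record_decision(
    log_path="ai_audit.jsonl",
    model_id="credit-screening",
    model_version="2.3.1",
    input_payload={"applicant_id": "A-1041", "income_band": "B"},
    output_payload={"decision": "refer", "score": 0.62},
    human_reviewer="officer_17",
)
```

An append-only log of this kind gives regulators a verifiable record while keeping raw personal data out of the trail itself.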
  4. Earlier, India had no structured process to capture real-world AI harms. The Guidelines now lay the foundation for a national AI incident reporting system. This will enable companies, regulators and individuals to report incidents such as misinformation, discrimination, system failures, safety lapses or misuse, and will allow government agencies to analyse patterns, identify vulnerabilities and respond to emerging risks in a timely manner.
  5. Earlier, organisations followed inconsistent technical safeguards. The Guidelines now formalise a technical safety regime that includes:
    1. Development of standardised safety tools, including:
      • Watermarking and authenticity markers for AI-generated content.
      • Bias detection tools.
      • Evaluation datasets representing Indian demographics.
      • Privacy-preserving training tools.
    2. Model testing and evaluation protocols, covering:
      • Red teaming against misuse.
      • Stress-testing for fairness and reliability.
      • Impact assessments for high-risk applications.
      • Regular monitoring for drift or unintended behaviour (see the illustrative sketch after this list).
    3. Guidance for compliance-by-design, ensuring:
      • Required safeguards are built directly into technology.
      • Automated methods support regulatory compliance.
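The monitoring and compliance-by-design items above can be made concrete with a small example. The sketch below implements one widely used drift metric, the Population Stability Index (PSI), over a model's score distribution; the ten-bucket layout and the 0.2 alert threshold are conventional rules of thumb we assume for illustration, not values prescribed by the Guidelines.

```python
# Illustrative drift check: Population Stability Index (PSI) between a
# reference distribution of model scores and a live window of scores.
# Bucket count and threshold are assumed conventions, not mandated values.
import math
from typing import Sequence

def psi(reference: Sequence[float], live: Sequence[float], buckets: int = 10) -> float:
    """Population Stability Index over equal-width buckets on scores in [0, 1]."""
    def bucket_shares(scores: Sequence[float]) -> list[float]:
        counts = [0] * buckets
        for s in scores:
            idx = min(int(s * buckets), buckets - 1)  # clamp s == 1.0 into last bucket
            counts[idx] += 1
        total = len(scores)
        # Small floor avoids division by zero / log of zero for empty buckets.
        return [max(c / total, 1e-6) for c in counts]

    ref, cur = bucket_shares(reference), bucket_shares(live)
    return sum((c - r) * math.log(c / r) for r, c in zip(ref, cur))

if __name__ == "__main__":
    # Stand-in data: a uniform baseline and a live window shifted upward.
    reference_scores = [i / 100 for i in range(100)]
    live_scores = [min(i / 100 + 0.15, 0.999) for i in range(100)]
    value = psi(reference_scores, live_scores)
    # PSI > 0.2 is a common (assumed) rule of thumb for significant drift.
    print(f"PSI = {value:.3f} -> {'DRIFT: review model' if value > 0.2 else 'stable'}")
```

A check like this can run on a schedule and raise an internal alert, which is one simple way automated methods can support the "regular monitoring" expectation before any human review.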
  6. Earlier, the roles of developers, deployers and users in the AI value chain were unclear. The Guidelines now define expectations for each function as follows:
    1. Developers must:
      • Ensure baseline safety, fairness and security testing.
      • Disclose known risks and limitations.
      • Provide technical documentation to deployers.
    2. Deployers must:
      • Assess whether the AI system is fit for its intended use.
      • Put human-oversight checks in place where required.
      • Maintain user-facing grievance and communication channels.
    3. Users must:
      • Use AI systems responsibly and within lawful limits.
      • Report misuse or anomalies through designated channels.
  7. Earlier, the legal treatment of AI-related harms relied on general provisions across multiple laws without clarity on their applicability to AI systems. The Guidelines now provide a clearer legal orientation by confirming that existing statutes covering IT governance, data protection, consumer protection, copyright, criminal law and sector-specific regulations continue to apply to AI systems and their deployers. The Guidelines further highlight the need to review and amend certain laws to address gaps relating to intermediary liability, copyright exceptions for model training, protection of vulnerable groups, content authentication, and obligations for high-risk AI use cases.
  8. Earlier, government agencies approached AI issues independently. The Guidelines now reinforce a coordinated “whole-of-government” approach, under which ministries, sector regulators, standards bodies and enforcement agencies are expected to work in alignment with the central AI policy group.
  9. Annexure 5 outlines the range of voluntary frameworks that organisations can adopt to promote responsible AI development and deployment. It covers industry-led principles, technical guidelines, self-assessment templates, transparency practices and options for independent audits. These voluntary measures provide practical tools for improving safety, fairness, and accountability even before mandatory regulations come into force.
  10. Annexure 6 lists existing and emerging standards issued by the national standards body, including those related to AI risk management, system architecture, robustness, explainability, fairness assessment, and big data processing. These standards provide the technical foundation required for evaluating AI systems and ensuring they meet baseline expectations for safety, reliability, and transparency.

For regulatory updates and related services, drop a mail to inquiries@lexplosion.in.

Source: PIB

https://lexplosion.in/

Lexplosion Solutions Private Limited is a pioneering Indian Legal-Tech company that provides legal risk and compliance management solutions through cloud-based software and expert services.

