When “Real” Can Be Manufactured: Why the 2026 Digital Media Ethics Rules Matter

[Image: A woman stands speaking while, behind her, an AI robot mimics her, suggesting the person in front is herself AI-generated.]

Advances in artificial intelligence (AI) have changed how online content can be created. Until recently, if a video looked legitimate or a voice recording sounded familiar, users tended to assume the content was real; there was little reason to suspect it might be artificially created. For most of the internet’s history this was a perfectly rational approach: imitating a person’s voice, appearance or even mannerisms was beyond the capabilities of all but a few with access to specialised technology.

In recent years, however, that assumption has become far less reliable. What once required professional-grade technology can now be achieved with a variety of easily accessible software tools.

As a result, content that mimics real people or events can spread across the internet with remarkable believability, and a viewer encountering such content for the first time will find it difficult to distinguish authentic media from synthetically created media. This growing sophistication has drawn greater scrutiny from the authorities. It is against this broader technological background that the Ministry of Electronics and Information Technology (“MeitY”) has issued the Information Technology (Intermediary Guidelines and Digital Media Ethics Code) Amendment Rules, 2026[1] (“Amendment”). While the Amendment does not overhaul the existing intermediary regulations, it signals a discernible shift in attention: there appears to be a rising consensus that synthetic or AI-generated media represents a distinct area of concern, rather than merely one more form of misleading content online.

For online platforms, the importance of this amendment may not lie so much with the language of the amendment itself, but with what may follow from that language. Once synthetic media is recognised as a new area of concern, online platforms may be expected to engage with what this means for identifying, labelling and managing such content.

Recognising synthetic media as a regulatory concern

One of the notable aspects of the Amendment is the introduction of the term “synthetically generated information.” Manipulated media, such as deepfakes and AI-generated voice recordings, has traditionally fallen within the broader category of misinformation. While that framing addresses the general problem, it does not capture what is distinctive about media that can convincingly replicate a person’s speech, image and movements.

If such media is shared without disclosure, it may lead people to believe that a real person made statements, or was involved in situations, that never took place.

For intermediaries that moderate user-generated content, distinguishing routine digital editing from synthetic media may prove challenging, and may require changes to moderation tools or processes.

The impact of a three-hour takedown timeline

Another widely discussed aspect of the Amendment is the shortened time frame for complying with directions to remove certain information. Under the previous Rules, intermediaries were generally required to comply within 36 hours of receiving a valid removal order. The amended Rules reduce this considerably, requiring compliance within just three hours.

While the change in the law appears simple, its implications could be significant: the shorter window leaves far less time for internal processing.

Larger intermediaries often have formal procedures for handling takedown notices, typically involving multiple teams: one moderating the content, another checking whether the notice satisfies the legal requirements, and a legal team for more complex issues.

Where these processes must be completed within three hours, platforms may need to reassess how the work is organised. Questions likely to become more pertinent include who holds authority to approve urgent removals, how escalation procedures operate, how new directions are prioritised, and how directions received after hours are handled.
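As a purely illustrative sketch of the operational problem, the snippet below computes the compliance deadline from a direction’s receipt time and maps the time remaining to an internal escalation tier. The tier names and thresholds are assumptions invented for this example; the Amendment itself only fixes the three-hour window.

```python
from datetime import datetime, timedelta, timezone

# Only the 3-hour window comes from the amended Rules; everything else
# (tier names, 30-minute and 1-hour thresholds) is hypothetical.
TAKEDOWN_WINDOW = timedelta(hours=3)

def compliance_deadline(received_at: datetime) -> datetime:
    """Latest time by which a takedown direction must be actioned."""
    if received_at.tzinfo is None:
        raise ValueError("receipt timestamp must be timezone-aware")
    return received_at + TAKEDOWN_WINDOW

def escalation_level(received_at: datetime, now: datetime) -> str:
    """Map time remaining to a hypothetical internal escalation tier."""
    remaining = compliance_deadline(received_at) - now
    if remaining <= timedelta(0):
        return "breach"            # deadline missed
    if remaining <= timedelta(minutes=30):
        return "on-call-legal"     # wake the after-hours legal contact
    if remaining <= timedelta(hours=1):
        return "senior-moderator"  # pull forward in the queue
    return "standard-queue"

received = datetime(2026, 3, 1, 22, 0, tzinfo=timezone.utc)
print(compliance_deadline(received).isoformat())  # 2026-03-02T01:00:00+00:00
print(escalation_level(received, datetime(2026, 3, 2, 0, 45, tzinfo=timezone.utc)))
```

The point of the sketch is that a direction received at 10 p.m. expires at 1 a.m., which is why after-hours authority and automated escalation become design questions rather than afterthoughts.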

Safe harbour remains, but scrutiny may increase

It should be noted, however, that the Amendment does not eliminate the safe harbour protections available to intermediaries under the broader legal regime applicable to online platforms. Intermediaries will be able to continue to rely on the protections of the regime if they comply with the due diligence requirements set out in the applicable rules.

Yet the fluid nature of the current regulatory regime means that a platform’s response to a direction from a relevant authority is likely to face closer scrutiny. This may require demonstrating not only that the content was removed within the prescribed period, but also that sound internal procedures were followed in evaluating the direction.

Transparency and labelling of AI-generated content

The Amendment also reflects a broader international discussion around transparency for AI-generated content. Platforms could approach this in several ways. One option is to rely on disclosures provided by users about whether AI was used to generate or modify content. Another is to use technical indicators, such as metadata flags and visible labels, to inform users that media has been generated or significantly modified with the aid of AI.

Implementing such transparency measures is likely to involve collaboration between legal, product and engineering teams within an organisation. Changes to platform design, as well as to content and metadata systems, could all form part of providing users with greater transparency about the content they access.
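To make the metadata-flag idea concrete, here is a minimal sketch of how a platform might combine a user’s self-disclosure with an automated provenance signal to attach a visible label. All field and label names are invented for illustration; they are not drawn from the Rules or from any standard schema.

```python
from dataclasses import dataclass, field

# Hypothetical schema: "synthetic", "declared_by_user", etc. are
# illustrative names, not regulatory terminology.
@dataclass
class MediaMetadata:
    content_id: str
    declared_by_user: bool = False    # uploader self-disclosed AI use
    detected_by_tooling: bool = False # provenance/metadata signal fired
    synthetic: bool = False           # resolved flag shown to downstream systems
    labels: list = field(default_factory=list)

def apply_transparency_label(meta: MediaMetadata) -> MediaMetadata:
    """Attach a visible label whenever any synthetic-media signal is present."""
    if meta.declared_by_user or meta.detected_by_tooling:
        meta.synthetic = True
        if "AI-generated" not in meta.labels:
            meta.labels.append("AI-generated")
    return meta

item = MediaMetadata(content_id="vid-123", declared_by_user=True)
apply_transparency_label(item)
print(item.synthetic, item.labels)  # True ['AI-generated']
```

Treating user disclosure and automated detection as independent signals that feed one resolved flag mirrors the two approaches described above, and keeps the labelling logic in one auditable place.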

Looking ahead

In this sense, the 2026 Amendment can be seen as the latest step in the ongoing regulation of online content. Rather than attempting to tackle AI in general, the Rules address a specific but important aspect: the potential for synthesised media to be mistaken for actual events or statements.

For businesses that operate digital platforms, the immediate concern may be to ensure that their governance and moderation mechanisms can deal with such issues effectively. As the technology continues to develop, the regulatory approach will need to evolve at a similar pace.

To help organisations meet these aggressive timelines without compromising on due diligence, Komrisk, our comprehensive compliance management software, provides the real-time oversight necessary to manage these evolving mandates. By digitising the escalation process and ensuring every directive is tracked from receipt to takedown, Komrisk allows businesses to maintain their safe harbour protections with confidence, even in an era of manufactured reality.


[1] https://www.meity.gov.in/static/uploads/2026/02/f55fe52418b03f58b0669f6a8bc03b6d.pdf

It amends the Information Technology (Intermediary Guidelines and Digital Media Ethics Code) Rules, 2021: https://www.meity.gov.in/static/uploads/2024/02/Information-Technology-Intermediary-Guidelines-and-Digital-Media-Ethics-Code-Rules-2021-updated-06.04.2023-.pdf

Author: Dwaipayan Das

Co-Author: Amiya Mukherjee

Disclaimer

This content is intended for informational purposes only and does not constitute a legal opinion. Readers are encouraged to seek legal counsel prior to acting upon any of the information provided herein. Despite our efforts to maintain accuracy, we do not make representations, warranties or undertakings regarding the quality, completeness or reliability of the content.  This content, including the design, text, graphics, their selection and arrangement, is Copyright 2025, Lexplosion Solutions Private Limited or its licensors. ALL RIGHTS RESERVED, and all moral rights are asserted and reserved.

For any clarifications, please reach out to us at 91-33-40618083 or inquiries@lexplosion.in. Refer to our privacy policy by clicking here.

https://lexplosion.in/

Lexplosion Solutions Private Limited is a pioneering Indian Legal-Tech company that provides legal risk and compliance management solutions through cloud-based software and expert services.
