India IT Rules 2026: Human Rights Watch Warns of Increased Online Censorship and Privacy Risks

India’s proposed amendments to its intermediary liability rules would significantly expand government control over online content, according to Human Rights Watch. The Draft Information Technology (Intermediary Guidelines and Digital Media Ethics Code) Second Amendment Rules, 2026, would condition social media platforms’ legal protections for hosting user-generated content on compliance with executive-issued directives.

India’s Ministry of Electronics and Information Technology released the draft rules on March 30, 2026, and is accepting public comments until April 29, 2026. Human Rights Watch urges the government to withdraw the proposal, arguing it would undermine free expression and privacy by treating ordinary social media users like formal news publishers under a broad “Code of Ethics” framework.

“The Indian government has repeatedly amended information technology rules since 2021, with each amendment giving the authorities increasing control over online content,” said Jayshree Bajoria, associate Asia director at Human Rights Watch. “The government claims these rules have been aimed at ‘fake news’ and hate speech but instead they have been used to target dissent.”

The proposed changes build on a pattern of regulatory shifts that began with the 2021 IT Rules, which expanded intermediary guidelines to include digital news services and video streaming platforms despite lacking explicit authority in the parent IT Act. Those rules also introduced traceability measures that would compromise end-to-end encryption in messaging apps.

Subsequent amendments in 2023 sought to establish a government fact-checking unit to arbitrate truth online, though the Supreme Court stayed implementation in March 2024. Courts also blocked related compliance requirements for digital news media and a three-tier grievance redress mechanism during that period.

In October 2025, the government formalized the Home Ministry’s Sahyog portal—a centralized system allowing multiple agencies to issue takedown notices with limited transparency. The Internet Freedom Foundation described this as becoming “the primary censorship tool” due to its procedural simplicity and lack of independent oversight.

Most recently, in February 2026, the government reduced the timeline for platforms to remove content deemed “unlawful” from 36 hours to just 3 hours through another rule amendment.

Under the 2026 draft, intermediaries would lose protection under Section 79 of the IT Act unless they comply with executive directions, advisories, and standard operating procedures—not just court orders or official government notifications. This shifts compliance from a judicial process to executive discretion.

The rules would also require ordinary users commenting on current affairs to adopt publisher-like self-regulation mechanisms and subject themselves to review by an “Inter-Departmental Committee.” This body could recommend actions ranging from public apologies to content removal based on referrals from the Ministry of Information and Broadcasting.

Human Rights Watch warns that platforms facing loss of market access or legal immunity will likely over-comply, censoring legitimate expression to avoid risk. The organization cites increasing government-directed content removals targeting criticism of Prime Minister Narendra Modi and the Bharatiya Janata Party-led government.

Since February 2026, X (formerly Twitter) has notified scores of Indian users that their posts were blocked, many involving satire or opposition political content. The platform also suspended several accounts for mocking the ruling coalition or Prime Minister Modi.

Meta’s transparency reports show a sharp rise in content restrictions on Instagram and Facebook in India between January 2024 and December 2025 in response to government orders, though the reports do not disclose exact figures.

In its 2015 Shreya Singhal v. Union of India decision, the Supreme Court affirmed procedural safeguards for content blocking: written justifications, orders issued by officials at joint-secretary level or above, notification of users when they are identifiable, and review-committee oversight. Intermediaries were required to act only on takedown notices backed by court orders or official government notifications, not on their own internal assessments.

Human Rights Watch contends that successive amendments have systematically eroded these protections, creating a framework where executive preference overrides judicial process. The group argues this enables censorship of peaceful critics, human rights documentation, and independent journalism under the guise of combating misinformation.

Companies operating in India face a dilemma: comply with expanding executive demands to retain safe harbor protections or risk liability for third-party content. With shortening response timelines and multiplying channels for takedown requests, the burden increasingly falls on platforms to preemptively filter content.

The April 29, 2026 deadline for public comments on the draft rules represents the next formal opportunity for stakeholders to respond before potential finalization. Interested parties can submit feedback through the Ministry of Electronics and Information Technology’s official consultation portal.

For updates on the rulemaking process and related digital rights developments in India, readers may refer to official gazette notifications from the Ministry of Law and Justice or filings with the Supreme Court of India regarding ongoing legal challenges to intermediary regulations.

