UK Government Seeks Powers to Rewrite Online Safety Act

The United Kingdom government is facing scrutiny over efforts to grant ministers expansive new powers to modify the Online Safety Act, a move that experts warn could bypass the traditional democratic process. By introducing amendments tucked into unrelated legislation, the government aims to quickly adapt the law to address the rapidly evolving landscape of artificial intelligence (AI) harms.

This legislative strategy comes as the UK attempts to balance the promotion of AI-driven economic growth with the need to protect citizens from emerging digital threats. While the government argues that flexibility is essential to keep pace with technological breakthroughs, critics suggest that rewriting significant portions of a major act through secondary means undermines parliamentary oversight.

The tension centers on the UK's plans to tackle AI harms and whether the current legislative framework is sufficient to handle the complexities of generative AI and automated content moderation. The Online Safety Act, originally passed in 2023, was designed to hold tech giants accountable, but officials have already signaled that the existing legislative settlement is uneven.

The current administration’s approach is led by the Department for Science, Innovation and Technology (DSIT), which oversees both the promotion of AI opportunities and the regulation of online safety. As the government seeks to maintain its position at the forefront of science and research, the method of updating its safety laws has become a point of contention among legal and digital rights experts.

The Struggle Over the Online Safety Act

The Online Safety Act has been a subject of political friction since its inception. In January 2025, Technology Secretary Peter Kyle described the existing internet safety laws as “very uneven” and “unsatisfactory,” expressing frustration with the legislative landscape he inherited from the previous Conservative government, in comments reported by the BBC.

A primary point of contention has been the treatment of “legal-but-harmful” content. Original plans to compel social media companies to remove such content—including posts promoting eating disorders—were dropped for adult users following concerns over censorship and free speech. While the law still requires companies to protect children from such material, the removal of broader protections for adults left a gap that current ministers are now eager to address.

The government’s current urgency is driven by the rise of AI, which can generate harmful content at a scale and speed previously unseen. To combat this, the UK is seeking wide-ranging powers to rewrite portions of the Act, potentially avoiding the lengthy process of introducing a standalone bill for each specific AI-related safety update.

Leadership and AI Governance in the UK

Responsibility for these initiatives now falls largely under the remit of Kanishka Narayan MP, who was appointed as the Parliamentary Under-Secretary of State (Minister for AI and Online Safety) on September 7, 2025, according to GOV.UK. Narayan is tasked with overseeing several critical portfolios, including:

  • AI Security Institute: Focused on identifying and mitigating risks from frontier AI models.
  • Online Safety: Ensuring the digital environment is secure for users, particularly children.
  • Tech for Growth: Leveraging AI to supercharge economic growth and improve public services.
  • Intellectual Property Office (IPO): Managing the intersection of AI and copyright law.

Minister Narayan has been active in promoting the UK’s AI ambitions, delivering a speech at the Founders Forum on February 12, 2026, and representing the UK at the AI Impact Summit in India on February 16, 2026. The government’s strategy is to position the UK as a global leader in AI breakthroughs, while simultaneously tightening the rules on how these technologies are deployed online.

The Risks of Bypassing Democratic Oversight

The core of the current controversy is not necessarily the goal of tackling AI harms, but the mechanism being used to do so. By attempting to insert amendments into unrelated bills, the government is accused of utilizing a “backdoor” approach to legislation. Experts warn that this limits the ability of Parliament to debate and scrutinize the specific impacts of these changes.

The implications of this approach are significant. If ministers are granted the power to rewrite substantial sections of the Online Safety Act without a dedicated legislative process, it could set a precedent for how other technology-related laws are handled. This raises questions about the balance of power between the executive branch and the legislature in the digital age.

The move comes amid a broader push for international cooperation on AI. On February 19, 2026, the UK announced that OpenAI and Microsoft had joined an international coalition to safeguard AI development, highlighting the government’s preference for collaborative, high-level agreements alongside its domestic regulatory efforts, according to GOV.UK.

Key Takeaways on UK AI Safety Legislation

  • Legislative Method: The government is seeking to amend the Online Safety Act via unrelated bills rather than a standalone AI safety bill.
  • Ministerial Control: Kanishka Narayan MP currently leads the effort to integrate AI security and online safety.
  • Core Conflict: The tension between the need for “agile” regulation of fast-moving AI and the requirement for democratic transparency.
  • Historical Context: The Online Safety Act has been criticized by both current and former ministers for being “unsatisfactory” and “uneven.”

What Happens Next?

The UK government continues to push for a regulatory environment that supports AI innovation while mitigating risks. Recent actions include the announcement on March 4, 2026, of a new lab designed to keep the UK in the “fast lane” for AI breakthroughs, according to GOV.UK.

As the government moves forward with its plans to tackle AI harms, the focus will remain on whether these powers are successfully checked by parliamentary scrutiny or if they create a regulatory framework dominated by ministerial discretion. The next key developments will likely emerge as the unrelated bills containing these amendments move through the legislative process.

World Today Journal encourages readers to share their perspectives on the balance between AI innovation and democratic oversight in the comments section below.
