German Chancellor Merz Pushes to Ease EU AI Regulations for Industrial Productivity

German Chancellor Friedrich Merz has signaled his intention to advocate for reducing regulatory constraints on artificial intelligence within the European Union, with a particular focus on easing rules for industrial AI applications to enhance productivity and competitiveness. Speaking during a public address on Sunday, Merz emphasized that current EU AI regulations may be overly burdensome for manufacturing and engineering sectors, where AI-driven automation could significantly boost output if given more operational flexibility.

His remarks come amid ongoing debates across EU member states about the implementation of the AI Act, the world’s first comprehensive legal framework governing artificial intelligence, which was formally adopted in 2024 and began phased enforcement in early 2025. While the legislation aims to ensure safety, transparency and accountability in AI systems, critics—including industry leaders and several national governments—have argued that its risk-based classification system imposes disproportionate compliance costs on low-risk, high-value industrial uses such as predictive maintenance, quality control, and supply chain optimization.

Merz did not specify exact legislative changes he would pursue but indicated support for creating exemptions or simplified compliance pathways for AI systems deployed in controlled industrial environments, where human oversight remains robust and potential societal risks are comparatively low. He framed the proposal as a necessary step to prevent Europe from falling behind global competitors in the United States and China, where regulatory approaches to AI have been perceived as more innovation-friendly.

The Chancellor’s stance reflects growing concern among German industrialists that rigid AI governance could hinder the country’s Industrie 4.0 initiative, a national strategy aimed at integrating digital technologies into manufacturing. According to the German Mechanical Engineering Industry Association (VDMA), over 60% of its member companies are already piloting or deploying AI in production processes, but many cite regulatory uncertainty as a barrier to wider adoption.

While Merz’s comments have been welcomed by business groups seeking greater agility in adopting AI tools, they have also drawn caution from digital rights advocates and consumer protection organizations, who warn that weakening safeguards—even in industrial contexts—could set a precedent for broader deregulation that might compromise safety or ethical standards. The European Digital Rights group (EDRi) has urged policymakers to maintain a risk-proportionate approach, insisting that any adjustments must be grounded in technical assessment rather than political pressure.

As of now, no formal legislative proposal has been introduced by the German government to amend the EU AI Act. The regulation remains under the joint authority of the European Commission, Parliament, and Council, meaning any changes would require supranational consensus. Still, Germany’s position as the EU’s largest economy and a key voice in technology policy gives its chancellor significant weight in shaping future debates.

Understanding the EU AI Act and Its Industrial Implications

The European Union’s Artificial Intelligence Act, which officially entered into force in August 2024, establishes a tiered regulatory model based on the perceived risk of AI applications. Systems are categorized into four risk levels: unacceptable, high, limited, and minimal. Unacceptable-risk AI—such as social scoring or real-time facial recognition in public spaces—is banned outright. High-risk AI, including systems used in critical infrastructure, education, employment, and certain industrial automation, faces strict requirements for data governance, transparency, human oversight, and conformity assessment before deployment.

For industrial AI, many applications fall into the high-risk category due to their integration with machinery that could pose physical safety hazards if malfunctioning. Examples include AI-guided robotic arms in assembly lines, predictive systems for chemical plant operations, and autonomous logistics vehicles in warehouses. Under the current framework, developers of such systems must undergo third-party audits, maintain detailed technical documentation, and implement post-market monitoring—processes that can extend timelines and increase costs.

Merz’s suggestion to exempt or simplify rules for industrial AI hinges on the argument that these environments often feature controlled access, professional supervision, and well-defined operational parameters, reducing the likelihood of harm to the general public. Proponents contend that applying the same rigor used for consumer-facing or biometric AI to factory-floor automation is disproportionate and stifles innovation in sectors where Europe traditionally excels.

However, legal experts caution that creating carve-outs risks fragmenting the single market and could lead to regulatory arbitrage, where companies seek to classify systems as “industrial” to avoid scrutiny, even if their functions resemble those in higher-risk domains. The European Commission has maintained that the AI Act’s flexibility already allows for sector-specific guidance through harmonized standards, which are being developed by European standardization organizations CEN and CENELEC in support of the regulation.

Reactions from Industry and Civil Society

The call for regulatory relief has found strong backing among Germany’s industrial base. The Federation of German Industries (BDI) welcomed Merz’s comments, stating in a recent position paper that “a balanced approach to AI regulation is essential to preserve Europe’s technological sovereignty.” The group pointed to survey data showing that nearly half of German manufacturers view compliance complexity as a moderate to severe obstacle to scaling AI initiatives, particularly among small and medium-sized enterprises that lack dedicated legal or compliance teams.

Similarly, the German Association for Information Technology, Telecommunications and New Media (BITKOM) has urged the EU to introduce “innovation-friendly provisions” that allow for real-world testing of AI systems under supervised conditions, akin to regulatory sandboxes used in financial technology. BITKOM’s 2024 report on AI adoption noted that while investment in industrial AI grew by 22% year-on-year, deployment rates lagged behind those in the U.S. and South Korea, attributing part of the gap to regulatory hesitation.

On the other side, advocacy groups such as AlgorithmWatch and Access Now have expressed concern that prioritizing industrial efficiency should not come at the expense of worker safety or environmental accountability. They argue that even in controlled settings, AI failures can lead to workplace injuries, ecological damage, or supply chain disruptions with cascading effects. These organizations recommend strengthening enforcement mechanisms and increasing funding for market surveillance authorities rather than diluting requirements.

Labor unions, including IG Metall, have also weighed in, calling for any regulatory adjustments to be paired with stronger worker consultation rights and transparency obligations when AI systems are introduced in the workplace. They emphasize that productivity gains should not be achieved through reduced oversight or diminished employee influence over technological change.

What This Means for the Future of AI Governance in Europe

Merz’s push reflects a broader tension within the EU between upholding precautionary principles and fostering technological competitiveness—a debate that has intensified since the AI Act’s adoption. While the regulation was praised globally for setting a benchmark in responsible AI governance, its real-world impact is now being tested as member states navigate implementation challenges.

Any move to alter the framework would require agreement between the EU’s co-legislators, the European Parliament and the Council of member states, a process known for its deliberative pace. The European Commission has indicated it will review the first wave of implementation experiences by mid-2026, with potential updates to guidance or delegated acts possible thereafter. However, amendments to the core regulation itself would likely require a formal legislative initiative, a process that could span years.

In the interim, national governments like Germany’s may seek to influence outcomes through their roles in the Council of the European Union or by advocating for flexible interpretation during standardization processes. The outcome could shape not only how AI is regulated in factories but also how the EU positions itself in the global race for technological leadership—balancing trust, safety, and the imperative to innovate.

For now, stakeholders across industry, civil society, and government continue to monitor developments closely, recognizing that the decisions made in the coming months will help define the trajectory of AI in Europe for years to come.

As this discussion evolves, readers are encouraged to follow official updates from the European Commission’s Directorate-General for Communications Networks, Content and Technology (DG CONNECT) and the German Federal Ministry for Economic Affairs and Climate Action for the latest information on AI regulation and industrial policy.

