Brussels, February 21, 2026 – European data protection authorities are raising serious concerns about potential loopholes in the upcoming implementation of the European Union’s Artificial Intelligence Act (AI Act) and a parallel overhaul of the General Data Protection Regulation (GDPR), known as the “Digital Omnibus.” These concerns center on the potential erosion of fundamental rights and the challenges companies face in navigating a complex and evolving regulatory landscape. Businesses utilizing AI systems are facing increasing pressure to ensure compliance, with key deadlines looming in 2026.
The core of the debate revolves around the balance between fostering innovation in artificial intelligence and safeguarding individual privacy and data protection. Leading European data protection authorities, including the European Data Protection Supervisor (EDPS) and the European Data Protection Board (EDPB), have warned that proposed changes could weaken existing safeguards and create vulnerabilities in the enforcement of these crucial regulations. This comes at a critical juncture, as the EU aims to position itself as a global leader in responsible AI development and deployment.
The AI Act, which entered into force on August 1, 2024, according to the European Commission, aims to establish a harmonized legal framework for the development, placing on the market, and use of AI systems within the European Union. While the core rules were established in 2024, the regulations will be fully applicable starting in August 2027, as detailed by the Bundesnetzagentur. However, the practical application of the law, set to begin in August 2026, is now under scrutiny.
Centralized Oversight and Potential Gaps in AI Act Enforcement
A key point of contention is the planned centralization of AI oversight within a new EU AI Office in Brussels. The EDPB and EDPS argue that concentrating authority in a single location, without robust collaboration with national data protection authorities, could create significant enforcement deficits. They fear that this centralized approach may not adequately address the nuances of AI applications that involve sensitive data, such as those used in personalized advertising or human resources. The established national data protection authorities, they assert, are “indispensable” for effective oversight.
This concern stems from the understanding that AI systems operate within diverse national contexts, each with its own legal and cultural specificities. A centralized office, lacking the on-the-ground expertise of national authorities, may struggle to effectively assess and address the risks associated with AI deployments in different member states. The potential for inconsistent enforcement and a weakening of data protection standards is a significant worry for these authorities.
The “Digital Omnibus” and Concerns Over Data Protection Fundamentals
Alongside the AI Act, the proposed “Digital Omnibus” package, designed to streamline the GDPR, is also facing criticism. While the goal of reducing bureaucratic burdens is generally supported, data protection authorities are raising red flags about potential compromises to fundamental data protection principles. One key concern involves proposed changes to the definition of “personal data,” which could narrow the range of information protected under the GDPR.
According to a statement released on February 11th, the EDPB and EDPS argue that these changes could significantly hinder efforts to limit online tracking and profiling. A broader definition of personal data is crucial for ensuring that individuals have control over their information and that companies are held accountable for how they collect, use, and share it. Weakening this definition could have far-reaching consequences for online privacy and data security.
Implications for Businesses: Compliance Deadlines and Increased Scrutiny
These developments have significant implications for businesses operating within the EU. The regulations of the AI Act will generally apply directly from August 2, 2026, as outlined by the German Federal Ministry for Digital and Transport. Companies deploying high-risk AI systems will face stricter requirements, including comprehensive documentation, risk management protocols, and transparent communication with users.
Specifically, businesses will be required to clearly inform users about how AI systems process their data, the underlying logic behind those processes, and the rights users have regarding their data. Vague or misleading disclosures will no longer be acceptable. The Data Act, which came into effect in September 2025, introduces new obligations related to data sharing, requiring companies to facilitate access to data for certain purposes. The EU Commission has published draft contract clauses to aid in compliance, but these do not absolve organizations of the need for individual assessment.
The increasing scrutiny from regulatory bodies means that simply having formal documentation in place is no longer sufficient. Authorities are increasingly verifying whether described processes are actually being followed in practice. Companies must demonstrate a commitment to data protection and AI ethics through concrete actions and demonstrable compliance measures.
A Fundamental Conflict: Innovation vs. Fundamental Rights
The current debate highlights a fundamental tension between the desire to promote innovation in AI and the need to protect fundamental rights. The year 2026 marks a turning point, shifting the focus from legislative development to rigorous implementation. The question facing European policymakers is whether to prioritize economic growth and technological advancement at the expense of individual privacy and data protection, or to uphold a strong commitment to fundamental rights even if it means slowing down the pace of innovation.
For businesses, inaction is not an option. The warnings from data protection authorities signal that compliance with AI and data protection regulations will be a key area of focus for enforcement efforts. Organizations should proactively analyze their data flows, revise their privacy policies, and implement robust data governance frameworks. The EDPB has announced that GDPR compliance and cross-border cooperation will be central to its work programme for 2026–2027.
Companies that prioritize transparency and ethical data practices will not only minimize legal risks but also build trust with their customers. A clear and honest privacy policy is becoming a crucial indicator of corporate responsibility and a competitive advantage in the increasingly data-driven economy.
The next key development to watch will be the finalization of the implementing guidelines for the AI Act and the Digital Omnibus, expected in the coming months. These guidelines will provide further clarity on the specific requirements for businesses and the enforcement mechanisms that will be used. Staying informed about these developments and proactively adapting to the changing regulatory landscape will be essential for organizations seeking to navigate the complexities of AI and data protection in the European Union.
Key Takeaways:
- The EU’s AI Act and the Digital Omnibus are facing scrutiny from data protection authorities.
- Concerns center on potential loopholes that could weaken data protection and erode fundamental rights.
- Businesses must prepare for stricter compliance requirements starting in August 2026.
- Prioritizing transparency and ethical data practices is crucial for building trust and minimizing legal risks.
Do you have questions about how these changes will impact your business? Share your thoughts and concerns in the comments below. Don’t forget to share this article with your network to raise awareness about these important developments.