
Europe Realizes That It Is Overregulating AI

Axel Spies
German attorney-at-law (Rechtsanwalt)
Dr. Axel Spies is a German attorney (Rechtsanwalt) in Washington, DC, and co-publisher of the German journals Multi-Media-Recht (MMR) and Zeitschrift für Datenschutz (ZD).
The recent AI summit in Paris (February 10 to 11) was deemed a success by many participants, although U.S. Vice President JD Vance strongly criticized the EU’s “excessive regulation” in his speech, specifically referring to the Digital Services Act and the General Data Protection Regulation (GDPR) of the EU.
The EU’s flagship AI initiative, the AI Act, received less attention. It is a comprehensive but cumbersome regulatory framework. When it was adopted, the EU celebrated it as a major regulatory achievement that would benefit the whole world and keep AI in check. However, the AI Act, spanning more than 100 pages of complex legal text, is not the only EU regulation governing artificial intelligence—other laws such as the GDPR also apply. Certain provisions of the AI Act, such as those on prohibited AI practices and “AI literacy” obligations for the workforce, have already taken effect, but other provisions will only become fully applicable by August 2, 2026—in some cases not until 2027.
While the intent of the EU AI Act is to ensure safety and ethical integrity, especially by imposing stringent requirements on high-risk AI systems, critics argue that the complexity of the regulation may hinder innovation and slow AI development in Europe. For startups, SMEs, and even larger companies, the bureaucratic burden of complying with the AI Act, along with the GDPR, EU copyright, and product liability laws, may be overwhelming. In many EU member states, it is still not clear who will oversee the implementation of the new AI rules.
The EU has long prided itself on setting global regulatory standards—a phenomenon known as the “Brussels Effect.” This concept suggests that EU regulations influence global markets due to the size and importance of the European economy, particularly in data privacy and consumer protection. However, critics argue that the Brussels Effect is merely a pretext for top-down EU measures that stifle innovation. Many jurisdictions do not follow the EU, as the privacy regulation in the United States demonstrates. Countries may deem some of the AI Act’s elements useful and other elements not.
The AI Act was initially expected to generate a similar Brussels Effect, but that outcome now looks increasingly uncertain; some say the effort will backfire on the EU. While the EU aspires to lead the world in ethical AI, its heavy compliance burdens could have unintended consequences, pushing AI development to less restrictive countries such as the United States and China, where regulations favor innovation over precaution.
Can the EU catch up? There are new efforts: At the AI summit in Paris, European Commission President Ursula von der Leyen lauded the launch of InvestAI, an initiative to mobilize €200 billion (approximately $208 billion) for AI investments in the EU, including a new European fund worth €20 billion. Von der Leyen emphasized that this project would “supercharge” the development of AI in the EU and stated that by mobilizing unprecedented capital through the InvestAI initiative, Europe should become an “AI continent.” In practice, however, the reality falls short of this ambition. The EU plans to raise the money mainly through (hitherto uncertain) public-private partnerships. Additionally, French President Emmanuel Macron announced a $112.6 billion AI investment in France, sourced from the United Arab Emirates, U.S. and Canadian investment funds, as well as French companies. This funding is also far from certain. For comparison, in the United States, Apple alone has announced a $500 billion investment, focused mainly on AI servers.
Amid growing concerns about overregulation, the European Commission announced on February 19, 2025, that its laws and regulations in the digital sector will undergo a “fitness check,” which will likely include the practical application of the AI Act. This review so far does not aim for a major overhaul of the AI Act but seeks to simplify certain aspects for SMEs. The EU has a history of maintaining strict regulatory frameworks despite industry pushback, as seen with the Green Deal, which faced criticism from businesses and even governments. There is little indication that the AI Act will be treated any differently. There is concern that the fitness check may be more of a political maneuver to counter criticism from the United States about EU regulation and fines imposed on U.S. tech companies rather than a genuine effort to ease the regulatory burden.
Despite these concerns, some signs suggest the European Commission is reconsidering aspects of its AI strategy. The AI Liability Directive, which aimed to assign blame and financial liability when AI systems cause harm, is likely to be abandoned. Additionally, the new EU Commissioner for Tech Sovereignty, Security, and Democracy, Henna Virkkunen, stated that the EU AI Code of Practice will focus on supporting AI companies rather than restricting them. This could indicate a more lenient approach to AI regulation, potentially lowering the compliance burden in Europe.
While the EU’s commitment to ethical AI development is commendable and shared by many observers outside the EU, its regulatory approach risks hindering innovation. If the Brussels Effect backfires, discouraging AI investment in Europe, the continent may fall behind in the global AI race. No one in Europe wants that, but the stark contrast between AI investments in the United States and the EU underscores the need for a regulatory shift. To remain competitive, the EU must ensure that its AI regulations are both ethical and practical. Two steps could help the EU maintain its relevance in the rapidly evolving AI landscape: simplifying the AI Act’s implementation, with regulators focusing on jointly reached, practical solutions instead of imposing huge fines, and reassessing the GDPR’s impact on AI systems—particularly regarding AI training. Otherwise, Europe may regulate itself out of the AI race while innovation thrives elsewhere.