Artificial Intelligence Governance: A Race Europe Is Already Losing
Across Europe, a clear pattern has emerged: no government appears truly able, or willing, to govern artificial intelligence effectively. The reality is now difficult to deny. AI is advancing far more rapidly than EU lawmakers and their national counterparts can regulate it. At the same time, global AI governance is fragmenting in real time, exposing the limits of both national and supranational institutions.
Artificial intelligence is no longer a peripheral innovation confined to laboratories or start-ups. It is becoming a tool for decision-making, education, infrastructure optimization, and even military targeting. Yet the legal frameworks meant to regulate or contain these developments are either nonexistent, stalled in parliamentary committees, or weakened by criticism and delay.
A Regulatory Vacuum in the United States
In the United States, the federal government has spent the past three years issuing executive orders, policy frameworks, and non-binding guidelines on artificial intelligence. None of these initiatives, however, has been enacted into federal law. The result is an increasingly unstable regulatory landscape in which Washington signals concern without establishing binding rules.
Federal authorities have attempted to impose a degree of national harmonization in AI regulation, largely to avoid a fragmented legal environment that could undermine American competitiveness, particularly in relation to China. This effort focused on issues such as security, energy consumption, and intellectual property. Yet it quickly encountered strong resistance from the states.
Several states have continued to pursue their own agendas. California has moved to reinforce safety standards and data protection obligations. New York has introduced requirements for reporting AI-related incidents. Elsewhere, state legislatures are advancing a growing number of sector-specific proposals aimed at filling the federal vacuum. Despite attempts by the federal executive to discourage or obstruct some of these initiatives, including direct pressure on certain states, the decentralized dynamic persists.
This resistance reveals more than institutional complexity. It reflects deeper political divisions over how AI should be governed, including fractures within the Republican Party itself. In practice, the United States is witnessing a struggle between federal coordination and state-level experimentation, without any clear resolution in sight.
Voluntary Commitments Without Real Constraint
At the same time, the federal administration has sought to shape the sector indirectly by obtaining voluntary commitments from major technology firms, particularly regarding the energy costs associated with data centers. But such measures remain weak by design. Without enforcement mechanisms, they function more as signals than as regulatory instruments.
These soft commitments have not prevented individual states from adopting their own fiscal and energy rules. In other words, while Washington tries to avoid fragmentation, it has so far failed to prevent it. The American model is therefore defined not by coherent governance, but by competing layers of authority and an absence of binding consensus.
Europe’s Ambition Is Beginning to Erode
Europe initially appeared to move in the opposite direction. With the adoption of the AI Act, the European Union positioned itself as the world's most ambitious regulatory power in the field of artificial intelligence. The law was presented as a landmark effort to establish clear rules, classify systems by risk, and impose obligations on providers and deployers of AI systems.
Yet that ambition is now being adjusted downward. Compliance deadlines are being delayed, implementation is being softened, and certain rules—particularly those linked to data protection—are being relaxed under economic and diplomatic pressure. What was once promoted as a decisive framework is gradually being diluted before many of its core provisions have even fully entered into force.
Officially, this recalibration is justified in the name of competitiveness. European policymakers argue that excessive regulatory burdens could penalize domestic firms and leave them vulnerable in a global race dominated by the United States and China. But critics increasingly see this shift as a retreat from Europe’s original promise: to become the first major power capable of imposing a credible and democratic framework on AI development.
That said, the European Union has not entirely abandoned its regulatory instincts. It continues to maintain a hard line on certain uses considered particularly harmful, including the prohibition of systems that generate non-consensual intimate content. On these issues, Brussels still attempts to preserve a normative distinction between legitimate innovation and unacceptable abuse.
A Structural Political Dilemma
What is unfolding on both sides of the Atlantic reflects the same structural problem. Technological capabilities are progressing faster than legal systems can absorb them. AI evolves on an exponential curve; legislation does not. Parliaments deliberate, committees stall, lobbying intensifies, and implementation slips. Meanwhile, the technology keeps moving.
Governments are therefore trapped between two opposing risks. On one side lies the fear of overregulation: imposing rules so rigid that they stifle innovation, deter investment, and weaken strategic competitiveness. On the other lies the danger of underregulation: allowing powerful systems to spread across society without meaningful oversight, despite clear risks to privacy, security, labor, democracy, and public trust.
This is no longer an abstract policy debate. It is a live governance crisis. AI is already entering schools, public administrations, workplaces, media ecosystems, logistics chains, and defense structures. In each of these spaces, it is reshaping decisions and redistributing power, often in the absence of robust democratic control.
Political Power Is Falling Behind Technological Power
The broader lesson is increasingly clear: public authorities are not setting the pace. They are reacting to it. Despite rising public demand for stronger regulation, oversight remains partial, contested, and slow. The political system continues to debate frameworks that are already being overtaken by the next generation of models and applications.
In that sense, the real issue is not simply whether Europe—or America—will regulate AI effectively. It is whether liberal democracies still possess the institutional speed, coherence, and authority required to govern transformative technologies at all.
For now, the answer appears uncertain. What is certain, however, is that AI is not slowing down. It is accelerating into daily life, into markets, into strategic competition, and into the machinery of state power itself. The gap between technological capability and political control is no longer theoretical. It is operational, widening, and increasingly difficult to close.