Since 2023, the global AI regulatory landscape has consolidated around a set of recognisable regulatory philosophies. Each major non-EU jurisdiction has developed a distinct approach to governing artificial intelligence, shaped by constitutional traditions, industrial policy objectives, institutional capabilities, and political economy. This comparative analysis examines five of those approaches and their implications for organisations operating across borders.

The EU AI Act — the reference point against which all other frameworks are increasingly measured — is not the subject of this analysis. Its structure is addressed in detail elsewhere in the Euridium Intelligence blog. The focus here is on the five jurisdictions that have developed the most substantive non-EU frameworks: the United States, the United Kingdom, China, Japan, and Brazil.

United States — Sectoral Pragmatism

The United States has not enacted a comprehensive federal AI law, and there is no immediate prospect of one given the current legislative environment. AI governance in the US is instead conducted through a combination of executive action, sector-specific regulation, and state-level legislation.

Executive Order 14110, signed in October 2023, established a framework for AI safety and security requirements applicable to AI systems posing risks to national security, critical infrastructure, or public health. The Order directed federal agencies to develop sector-specific guidance and imposed reporting requirements on developers of the most powerful AI models. However, executive orders are inherently limited in scope — they bind federal agencies and federal contractors, but do not constitute general-purpose legislation applicable to private actors.

Sector-specific enforcement is conducted by the Federal Trade Commission, which has used its authority under Section 5 of the FTC Act to pursue AI-related deceptive practices, algorithmic discrimination, and misleading capability claims. The FTC's enforcement agenda under the current administration has been aggressive on AI — particularly regarding automated decision-making in employment, consumer credit, and healthcare. The SEC has issued guidance on AI disclosure for publicly listed companies. The FDA continues to develop its framework for AI-enabled medical devices.

"The United States' regulatory approach reflects a deliberate choice: sector-specific enforcement through existing authorities, with Congress largely absent. Whether this constitutes a coherent strategy or an institutional gap depends substantially on one's priors."

At the state level, California, Colorado, Texas, and Illinois have enacted AI-specific legislation addressing algorithmic discrimination, automated employment decisions, and consumer protection. The resulting patchwork creates compliance complexity for organisations operating nationally. California's AI legislation — building on the CCPA/CPRA infrastructure — is the most consequential given the state's market size and its tendency to set de facto national standards.

EAGI score: 78.0 — reflecting high enforcement pressure through existing authorities, significant compliance complexity from state-level fragmentation, and mature institutional infrastructure, offset by the absence of a comprehensive binding framework.

United Kingdom — Pro-Innovation Sectoralism

The United Kingdom's approach to AI regulation is explicitly positioned as an alternative to the EU's comprehensive binding framework. Following Brexit, the UK government has pursued a pro-innovation, principles-based, sector-led model that delegates AI oversight to existing sector regulators (the ICO for data protection, the FCA for financial services, the CMA for competition, and the Medicines and Healthcare products Regulatory Agency for health) rather than creating a new overarching AI regulator.

The UK AI Safety Institute, established in 2023 and renamed the AI Security Institute in early 2025, focuses on frontier AI evaluation and international coordination rather than domestic regulation. It has conducted evaluations of major foundation models and published technical guidance on AI safety, but holds no enforcement powers.

The UK's approach has attracted criticism from those who argue that principles-based sectoralism leaves significant regulatory gaps — particularly for AI applications that do not fall neatly within existing sectoral boundaries. It has attracted support from those who argue that the EU's prescriptive approach risks regulatory rigidity in a rapidly evolving technological landscape.

EAGI score: 72.4 — reflecting a moderately active regulatory environment, significant ICO engagement on AI and data protection, and a developing enforcement architecture, but a score below the EU's, reflecting the deliberate choice not to create a comprehensive binding framework.

China — State-Directed Layered Control

China's AI regulatory framework is characterised by service-specific regulation, mandatory security assessments, and content-focused controls administered primarily by the Cyberspace Administration of China. Unlike the EU's risk-based approach organised around AI system categories, China's framework is organised around specific AI service types and their potential social and political effects.

The Measures for the Management of Generative AI Services, which entered into force in August 2023, require providers of generative AI services to Chinese users to conduct security assessments, register their services with the CAC, and ensure that AI-generated content complies with Chinese law — including requirements that content does not subvert state power, endanger national security, or spread false information. These requirements apply to any organisation offering generative AI services accessible to users in China, regardless of where the provider is located.

The Deep Synthesis Provisions, the Recommendation Algorithm Measures, and the cross-border data transfer rules under the Personal Information Protection Law collectively create a compliance framework that is simultaneously demanding and opaque. Enforcement is conducted administratively and is not always publicly documented, making systematic monitoring difficult.

EAGI score: 79.0 — likely understating actual regulatory pressure given data accessibility limitations discussed in our methodology analysis.

Japan — Adaptive Governance

Japan has adopted what it describes as an agile governance approach to AI regulation — a framework characterised by voluntary guidelines, iterative revision, and close engagement between government, industry, and civil society, without binding legislation as the primary regulatory instrument.

The Ministry of Economy, Trade and Industry published comprehensive AI governance guidelines in 2024, structured around ten principles including human-centricity, safety, fairness, privacy protection, and innovation. These guidelines are voluntary but carry significant weight in the Japanese business context, where conformity with government guidance is treated as a professional and reputational obligation even in the absence of legal compulsion.

Japan established an AI Safety Institute in 2024, modelled in part on the UK's approach, focused on safety evaluation and international standards coordination. Japan has been an active participant in the G7 Hiroshima AI Process, which produced the International Code of Conduct for Advanced AI Systems — a non-binding but politically significant framework for frontier AI governance.

The Japanese approach reflects a deliberate policy choice to maintain flexibility in a rapidly evolving technological environment, and to leverage Japan's existing strengths in technology standards and international cooperation rather than creating a novel binding regulatory architecture. Critics argue that this approach risks leaving significant gaps in consumer protection and liability frameworks as AI deployment accelerates.

EAGI score: 39.0 — reflecting the voluntary nature of Japanese AI governance, lower published enforcement activity, and a regulatory philosophy explicitly oriented toward flexibility and industry co-regulation.

Brazil — Emerging Framework

Brazil is at an earlier stage of AI regulatory development than the other jurisdictions examined here, but its regulatory trajectory is significant. The country has an operational data protection framework in the LGPD (Lei Geral de Proteção de Dados), enforced by the ANPD (Autoridade Nacional de Proteção de Dados), which provides the institutional foundation for AI-related data governance. The ANPD has issued guidance on automated decision-making that draws directly on the LGPD's Article 20 provisions — the closest Brazilian analogue to the GDPR's Article 22 on automated decisions.

Brazil's AI Bill — PL 2338/2023 — is advancing through the Brazilian Senate. The Bill, which draws extensively on the EU AI Act's risk-based architecture, would establish a comprehensive framework for AI governance in Brazil, including high-risk system categories, conformity requirements, and institutional supervision. Its adoption would represent a significant step toward a binding AI framework in one of Latin America's largest economies.

The strategic implication for organisations considering the Brazilian market is significant: entering now, before PL 2338/2023 is enacted, allows organisations to establish market presence under the current lighter-touch framework while preparing for the more demanding compliance environment that the Bill would introduce.

EAGI score: 53.5 — reflecting operational LGPD enforcement, active regulatory development, and a developing institutional framework, with room for significant upward movement if PL 2338/2023 is enacted in substantially its current form.

Comparative Observations

Several structural patterns emerge from this comparative analysis. First, the EU model of comprehensive binding regulation with risk-based categorisation is increasingly the reference point against which other frameworks are developed — whether explicitly (Brazil's AI Bill) or implicitly (UK's deliberate positioning as an alternative). Second, the question of enforcement architecture — whether to create new AI-specific authorities or to use existing sectoral regulators — is the most consequential institutional choice each jurisdiction faces, with significant implications for regulatory consistency and expertise. Third, international coordination through the G7, G20, OECD, and bilateral mechanisms is producing convergence on principles while divergence persists on implementation — creating compliance complexity for organisations operating across multiple frameworks simultaneously.

For organisations with global AI operations, the practical implication is that a compliance programme anchored in AI Act compliance will provide a meaningful foundation for other jurisdictions — but will not substitute for jurisdiction-specific analysis, particularly for the US state-level patchwork, China's content-focused controls, and Brazil's still-evolving framework.
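The gap analysis described above can be sketched programmatically. The following is a hypothetical illustration, not a legal taxonomy: the obligation labels, the baseline set, and the `compliance_gaps` helper are the author's shorthand for the frameworks discussed in this analysis, and any real compliance mapping would require jurisdiction-specific legal review.

```python
# Hypothetical sketch: comparing an EU AI Act-anchored compliance baseline
# against jurisdiction-specific obligations named in this analysis.
# All labels are illustrative shorthand, not statutory terms.

# Obligations assumed to be covered by an AI Act-anchored programme.
EU_BASELINE = {
    "risk_classification",
    "conformity_assessment",
    "transparency_disclosures",
    "data_governance",
}

# Illustrative jurisdiction-specific obligations drawn from the
# frameworks discussed above.
JURISDICTION_OBLIGATIONS = {
    "US (state patchwork)": {
        "transparency_disclosures",
        "automated_employment_decision_rules",
        "state_level_consumer_protection",
    },
    "China": {
        "data_governance",
        "cac_security_assessment",
        "generative_service_registration",
        "content_compliance",
    },
    "Brazil": {
        "data_governance",
        "lgpd_automated_decision_review",
    },
}

def compliance_gaps(baseline, per_jurisdiction):
    """Return, per jurisdiction, the obligations the baseline does not cover."""
    return {
        name: sorted(obligations - baseline)
        for name, obligations in per_jurisdiction.items()
    }

if __name__ == "__main__":
    for name, gaps in compliance_gaps(EU_BASELINE, JURISDICTION_OBLIGATIONS).items():
        print(f"{name}: gaps = {gaps}")
```

The point of the sketch is structural: the baseline covers some obligations everywhere (set intersection) but leaves a distinct residue in each jurisdiction (set difference), which is exactly why an AI Act-anchored programme is a foundation rather than a substitute.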