Three superpowers, three philosophies, zero interoperability. Here's where the world's major AI jurisdictions stand right now.

European Union
Regulate First

World's first comprehensive AI law. Risk-tiered framework. Already in force. Fines can exceed €35M. Non-EU companies that deploy AI affecting EU citizens must comply. [EUAIA]

Live — Penalties since Aug 2025
United States
Innovate First

No federal AI law. 1,000+ bills introduced across states. Trump EO Dec 2025 attempts to preempt state laws. Legal battles ongoing. FTC, FDA & EEOC applying existing statutes. [NCSL]

Fragmented — Federal vacuum
China
Control Outputs

Strict rules on what AI can say and generate. Permissive on training data acquisition. State-aligned deployment. Sector-specific regulations move faster than in the West.

Controlled — Output-first model

The result is a tripartite regulatory landscape with no interoperability. A company headquartered in San Francisco, using a model trained on EU citizen data, deployed to users in China, is technically subject to all three frameworks — and they contradict each other on multiple points.

So what does this mean?

If you're building anything with AI, you're already subject to regulation — whether or not your country has passed a specific AI law. The EU's extraterritorial reach means any product touching EU citizens is in scope.

This isn't a future problem. It's a compliance reality right now, and the three frameworks actively contradict each other. Waiting for clarity is itself a risk.

The EU AI Act is the world's first comprehensive legal framework for AI. It follows a risk-based approach: the higher the risk, the stricter the requirements. It entered into force on 1 August 2024, with provisions applying in phased waves through 2027. [EUAIA]

Feb 2, 2025
Prohibited Practices Banned
Social scoring, emotion recognition at work, predictive crime AI, biometric categorisation — all illegal across the EU [DLA]
Aug 2, 2025
Penalty Regime Live
Fines now enforceable. GPAI providers must publish transparency reports, document training data, assess systemic risks [CRAN]
Aug 2, 2026
Full GPAI Enforcement
GPAI penalty provisions fully operational. High-risk AI data protection impact assessments become mandatory
Aug 2, 2027
Safety-Critical AI
AI in medical devices, transport, critical infrastructure must pass full conformity assessments. AI-generated content labelling mandatory
Prohibited AI practices (social scoring, manipulative AI, illegal biometrics) · up to €35M or 7% of global turnover
High-risk AI obligations (missing documentation, bias controls, human oversight) · up to €15M or 3% of global turnover
Misleading authorities (incorrect or incomplete information to regulators) · up to €7.5M or 1% of global turnover
EU AI Office: For SMEs and startups, fines apply the lower of the percentage or fixed threshold. A €10B-turnover company could face a €700M fine for prohibited AI practices. GPAI providers already on the EU market before August 2025 have until August 2027 to fully comply. [GT]
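The lower-of rule for SMEs versus the standard higher-of rule makes the fine ceilings easy to miscalculate. A minimal sketch, using the prohibited-practices tier above; the €20M SME figure is an illustrative assumption, and none of this is legal advice:

```python
# Illustrative sketch of EU AI Act fine ceilings (not legal advice).
# Larger companies face the HIGHER of the fixed cap or the turnover percentage;
# SMEs benefit from the LOWER of the two.

def fine_ceiling(turnover_eur: float, fixed_cap: float, pct: float, is_sme: bool) -> float:
    """Return the maximum applicable fine for one violation tier."""
    pct_amount = turnover_eur * pct
    return min(fixed_cap, pct_amount) if is_sme else max(fixed_cap, pct_amount)

# Prohibited-practices tier: €35M fixed cap or 7% of global turnover.
big_co = fine_ceiling(10_000_000_000, 35_000_000, 0.07, is_sme=False)
small_co = fine_ceiling(20_000_000, 35_000_000, 0.07, is_sme=True)   # hypothetical SME

print(f"€10B-turnover company ceiling: €{big_co:,.0f}")   # prints €700,000,000
print(f"€20M-turnover SME ceiling: €{small_co:,.0f}")     # prints €1,400,000
```

The same 7% tier yields €700M for a €10B company but only €1.4M for the hypothetical €20M SME under the lower-of rule.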
So what does this mean?

The EU didn't just write rules — they attached teeth. A €700M potential fine for a large company isn't theoretical: the enforcement infrastructure is live, and fines have been enforceable since August 2025.

If you use any AI that touches EU citizens — even from outside Europe — you're in scope. The clock is already running, and the next major enforcement wave hits August 2026.

The United States has no federal AI law. Instead, over 1,000 AI-related bills have been introduced across states in 2024–2025, with 38 states adopting more than 100 AI laws in the first half of 2025 alone. Meanwhile, the Trump administration has moved aggressively to block state-level regulation — triggering a constitutional standoff. [NCSL]

"We MUST have one Federal Standard instead of a patchwork of 50 State Regulatory Regimes. If we don't, then China will easily catch us in the AI race."

— President Donald Trump, December 2025 [WH]
What the White House did
Jan 2025: Revoked Biden's AI EO, replaced with "innovation-first" framework removing regulatory barriers [BENT]
July 2025: Attempted 10-year moratorium on state AI laws — stripped from the "Big Beautiful Bill" after the Senate voted 99–1 against it
Nov 2025: Genesis Mission — federal AI research platform combining DOE supercomputers, national labs, and private sector
Dec 2025: EO directing DOJ to create an AI Litigation Task Force to sue states with "onerous" AI laws and withhold broadband funding [SIDL]
What the states did back
California: SB 53 signed — model for responsible AI safety law; Gov. Newsom called it a "national model"
New York: RAISE Act signed Dec 19, 2025 — companies spending $100M+ on AI training must publish safety protocols
Colorado: Anti-discrimination AI law — cited by Trump as example of "ideological bias" regulation he wants to strike down
40 state AGs signed a letter in May 2025 opposing federal preemption of state AI regulation — a bipartisan coalition [NPR]
What existing law governs now
FTC: Deceptive AI practices illegal under FTC Act — companies cannot mislead consumers using AI regardless of sector
FDA: 1,250+ AI medical devices authorised — but 43% lack clinical validation and enforcement is inconsistent [BPC]
EEOC: Applying existing anti-discrimination law to AI hiring tools — class actions emerging
Take It Down Act (May 2025): One of the few enacted federal AI laws — deepfake exploitation of minors illegal nationwide
Regulatory moat

Large incumbents — OpenAI, Andreessen Horowitz, and other Silicon Valley players — have publicly backed federal preemption and formed AI super PACs to support pro-deregulation candidates. Critics argue this creates a "regulatory moat": complex federal compliance standards are easier for Big Tech to absorb than for smaller challengers. The companies best positioned to shape the rules stand to benefit most from them.

So what does this mean?

The US isn't ungoverned — it's fractured. You're simultaneously subject to federal agency enforcement, state-specific laws, and an active legal battle over who gets to regulate what. Building to the strictest standard isn't paranoid; it's the only defensible strategy.

If you're a smaller company, the regulatory moat should concern you. The rules being written now will disproportionately advantage incumbents who can afford compliance infrastructure. Engaging early is a survival move.

Regulation moves slowly. Court decisions move faster. These four cases set legal precedent that applies right now — regardless of what any legislature passes next.

Case file 001 — Canada, 2024
Moffatt v. Air Canada

Air Canada's chatbot gave a customer incorrect information about bereavement fares. When sued, Air Canada argued the chatbot was a "separate entity" and thus not its responsibility. The British Columbia Civil Resolution Tribunal rejected this entirely — ruling companies are fully liable for AI system outputs, the same as human employee actions. [MOFA]

Precedent: Companies cannot disclaim liability for AI actions
Case file 002 — USA, 2023
Mata v. Avianca: The Ghost Citations

A New York attorney submitted legal briefs containing AI-generated case citations that didn't exist. ChatGPT had fabricated them with complete plausibility — real-sounding case names, courts, and outcomes. The attorney was sanctioned. The case sparked an emergency response across the legal profession and prompted dozens of courts to adopt AI disclosure rules. [MATA]

Precedent: Hallucination is a professional liability, not a tech excuse
Case file 003 — UK, 2025
Getty Images v. Stability AI

Getty Images sued Stability AI for training its image generation model on millions of Getty photos without licence. The UK High Court ruled the case could proceed to trial. Meanwhile, similar suits proceed across multiple jurisdictions with no consistent outcome yet. The training data liability question remains the single biggest unresolved legal issue in AI.

Precedent: Training data copyright is an active litigation frontier
Case file 004 — USA, ongoing
NYT v. OpenAI

The New York Times sued OpenAI and Microsoft for using its journalism to train GPT models. The case is proceeding to trial — one of 51+ active AI copyright lawsuits. The outcome will determine whether AI training on web-scraped copyrighted content constitutes fair use. A verdict could reshape the economics of every major AI model.

Precedent: A verdict against fair use would put current AI training practices in legal jeopardy
So what does this mean?

Courts are setting AI policy faster than legislatures. The Air Canada ruling alone means every company with a customer-facing AI system is already on the hook for whatever that system says — whether you intended it or not.

The training data cases (Getty, NYT v. OpenAI) are existential for the industry. If fair use doesn't hold, the cost basis of every major foundation model changes overnight. Know what's in your data pipeline before the verdicts land.

Across every jurisdiction, the technology is moving faster than the regulation. The gap between what AI can do and what any regulator can verify is widening — creating a period of what scholars call a "governance vacuum."

43%
of FDA-authorised AI medical devices

Lack sufficient clinical validation studies. Devices are approved based on technical function, not proven patient outcomes. [BPC]

1,000+
State AI bills introduced in 2025

Fewer than 15% were enacted. Most die in committee. The legislative churn creates uncertainty without creating protection. [NCSL]

0
Major AI liability verdicts

No major court has yet determined who bears liability when an AI system causes serious harm — doctor, hospital, developer, or deployer?

The black box problem is not a metaphor — it is a legal emergency. If a regulator cannot understand why a high-stakes decision was made, they cannot audit it, challenge it, or hold anyone accountable for it.

— Bipartisan Policy Center, AI Governance Report, 2025 [BPC]

The EU AI Act explicitly requires high-risk AI systems to be explainable and subject to human oversight. But "explainability" is not a solved technical problem. Many of the most powerful AI systems — including large language models — are fundamentally non-interpretable at the level regulators need. The regulation exists. The technical means to comply with it, in some cases, does not yet exist.

So what does this mean?

We're in a window where AI can do things no regulator can verify, no court has fully adjudicated, and no liability framework has been tested. That's not a reason to freeze — it's a reason to build your own governance before someone else imposes it.

The companies that establish internal AI oversight, documentation, and explainability practices now won't just be compliant when the rules arrive — they'll be the ones that helped shape them.

The EU AI Act imposes different burdens depending on company size and AI risk classification. The result: compliance costs fall disproportionately on mid-market companies that lack Big Tech's legal infrastructure but exceed the SME thresholds for reduced fines. [GT]

Big Tech (€1B+ annual turnover) · exposure: up to 7% of global turnover · burden: manageable. Dedicated legal teams, existing compliance infrastructure, lobbying influence to shape the rules themselves.
Mid-Market (€50M–€1B turnover) · exposure: up to 3–7% of turnover · burden: severe. Must hire AI compliance officers, conduct risk assessments, audit third-party AI vendors, register high-risk systems — all without Big Tech resources.
Startups / SMEs (under €50M turnover) · exposure: lower-threshold fines · burden: significant. SME-adjusted fines apply the lower of the fixed or percentage threshold; the EU AI Act's "AI Service Desk" is the first point of contact for SME queries.
GPAI Providers (LLMs, multimodal models) · exposure: up to €15M or 3% of global turnover · burden: severe. Technical documentation, transparency reports, copyright compliance, systemic risk assessment, adversarial testing, serious incident reporting.

Companies substantially modifying existing GPAI models — through fine-tuning or retraining — become providers themselves for regulatory purposes and inherit all GPAI obligations. This has significant implications for any business customising foundation models for deployment. [EUAIA]

So what does this mean?

If you fine-tune a foundation model, you may have just made yourself a GPAI provider under EU law — with all the documentation, testing, and reporting obligations that come with it. Most companies don't know this yet.

The compliance burden hits mid-market companies hardest. They're too big for SME relief, too small for Big Tech's legal armies. If that's you, start building compliance infrastructure now — before the August 2026 enforcement wave.

5 things you can do this week
to stay ahead of AI regulation.
1.

Subscribe to Veltrix Collective. Stay across regulation changes as they happen. We track the EU AI Act timeline, US state bills, and enforcement actions so you don't have to. Free, every Tuesday.

2.

Audit your AI stack using Claude or ChatGPT. Paste your tool list and ask: "Which of these would be classified as high-risk under the EU AI Act?" You'll be surprised how many touch HR, finance, or customer-facing decisions.

3.

Check your vendor contracts for AI liability clauses. After Moffatt v. Air Canada, you can't disclaim what your AI does. Use Claude to review your terms of service and flag any AI disclaimer language that won't hold up.

4.

Map your EU exposure. If any of your users, customers, or data subjects are EU citizens, you're in scope. Use n8n or Make to set up a monitoring workflow that flags EU AI Act enforcement updates from the European AI Office.

5.

Start a training data audit. Before NYT v. OpenAI sets precedent, know what's in your training pipeline. Use Claude Code to scan your data ingestion scripts and flag any web-scraped content that lacks clear licensing.
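As a first pass before any tooling, a plain-Python scan for hard-coded source URLs in your ingestion code can surface what needs licence review. A minimal sketch: the allowlist domains and the example snippet are hypothetical, and a real audit needs your actual vendor agreements behind the list.

```python
# Minimal sketch: flag hard-coded URLs in ingestion scripts whose domains
# aren't on a licence-cleared allowlist. Illustrative only — the allowlist
# below is hypothetical; maintain your own from your licensing agreements.
import re
from urllib.parse import urlparse

CLEARED_DOMAINS = {"example-licensed-partner.com", "data.your-own-site.com"}

URL_RE = re.compile(r"https?://[^\s'\"\)]+")

def flag_unlicensed_urls(source_code: str) -> list[str]:
    """Return URLs whose domain is not on the cleared allowlist."""
    flagged = []
    for url in URL_RE.findall(source_code):
        domain = urlparse(url).netloc.lower()
        if domain and domain not in CLEARED_DOMAINS:
            flagged.append(url)
    return flagged

snippet = 'resp = fetch("https://news-site-you-never-licensed.com/articles")'
print(flag_unlicensed_urls(snippet))
# → ['https://news-site-you-never-licensed.com/articles']
```

Anything flagged goes on the "prove we can use this" pile — the output is an inventory, not a legal determination.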

Three rulebooks, zero consensus, and €35M fines already enforceable. The companies that build governance now won't just survive regulation — they'll shape it.
Sources
EUAIA: EU Artificial Intelligence Act (Regulation (EU) 2024/1689), Official Journal, June 2024
DLA: DLA Piper, "Latest wave of EU AI Act obligations take effect," August 2025
CRAN: Cranium AI, "Navigating the EU AI Act August 2025 Deadline," August 2025
WH: White House EO, "Ensuring a National Policy Framework for Artificial Intelligence," December 11, 2025
NCSL: Built In / NCSL, "As Trump Fights AI Regulation, States Step In," 2025
BENT: Benton Institute, "Trump Executive Orders Shape Federal AI Regulation," December 2025
SIDL: Sidley Austin, "Unpacking the December 2025 Executive Order," December 23, 2025
MOFA: Moffatt v. Air Canada, 2024 BCCRT 149, February 14, 2024
MATA: Mata v. Avianca, S.D.N.Y., 2023 (AI hallucination sanctions)
BPC: Bipartisan Policy Center, AI Governance Report, 2025
NPR: NPR / PBS, Trump AI preemption executive order analysis, December 2025
GT: Greenberg Traurig, "EU AI Act: Key Compliance Considerations," July 2025
Veltrix Collective
Three rulebooks.
One briefing.

AI regulation is moving faster than any single person can track. We synthesise the EU AI Act timeline, US state battles, and global enforcement actions into one data-backed briefing — every Tuesday.
