Three Rulebooks.
Zero Consensus.
The EU AI Act is law. The US has 1,000+ state bills and no federal framework. China regulates outputs, not inputs. Every company building with AI must navigate all three — simultaneously.
00 — The regulatory map
Three superpowers, three philosophies, zero interoperability. Here's where the world's major AI jurisdictions stand right now.
EU — World's first comprehensive AI law. Risk-tiered framework. Already in force. Fines can exceed €35M. Non-EU companies that deploy AI affecting EU citizens must comply. [EUAIA]
Status: Live — Penalties since Aug 2025

United States — No federal AI law. 1,000+ bills introduced across states. Trump EO Dec 2025 attempts to preempt state laws. Legal battles ongoing. FTC, FDA & EEOC applying existing statutes. [NCSL]
Status: Fragmented — Federal vacuum

China — Strict rules on what AI can say and generate. Permissive on training data acquisition. State-aligned deployment. Sector-specific regulations move faster than in the West.
Status: Controlled — Output-first model

The result is a tripartite regulatory landscape with no interoperability. A company headquartered in San Francisco, using a model trained on EU citizen data, deployed to users in China, is technically subject to all three frameworks — and they contradict each other on multiple points.
If you're building anything with AI, you're already subject to regulation — whether or not your country has passed a specific AI law. The EU's extraterritorial reach means any product touching EU citizens is in scope.
This isn't a future problem. It's a compliance reality right now, and the three frameworks actively contradict each other. Waiting for clarity is itself a risk.
01 — The EU AI Act: already law
The EU AI Act is the world's first comprehensive legal framework for AI. It follows a risk-based approach: the higher the risk, the stricter the requirements. It entered into force on 1 August 2024, with provisions applying in phased waves through 2027. [EUAIA]
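To make the tiering concrete, here is a minimal sketch of the Act's four risk tiers. The tier names come from the Act itself; the example systems and obligations are our own shorthand, not a legal classification tool:

```python
# Illustrative sketch of the EU AI Act's four risk tiers. Tier names are
# from the Act; the example systems and duties are simplified shorthand,
# not a legal classification tool.
RISK_TIERS = {
    "unacceptable": "Banned outright (e.g. social scoring, manipulative AI)",
    "high":         "Strict duties: conformity assessment, logging, human oversight",
    "limited":      "Transparency duties (e.g. disclose that users face an AI)",
    "minimal":      "No new obligations (e.g. spam filters, AI in games)",
}

for tier, duty in RISK_TIERS.items():
    print(f"{tier:>12}: {duty}")
```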
Fine structure
The EU didn't just write rules — they attached teeth. A €700M potential fine for a large company isn't theoretical; the enforcement infrastructure is live and penalties are being collected right now.
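The arithmetic behind that number is simple. A minimal sketch, assuming the Act's top penalty tier (up to €35M or 7% of global annual turnover, whichever is higher); the turnover figure below is illustrative, not any real company's:

```python
# Top penalty tier of the EU AI Act (prohibited practices):
# up to EUR 35M or 7% of global annual turnover, whichever is HIGHER.
# The turnover figure below is illustrative only.
def max_fine_prohibited(global_turnover_eur: float) -> float:
    return max(35_000_000, 0.07 * global_turnover_eur)

# A company with EUR 10B in global turnover faces up to EUR 700M:
print(f"EUR {max_fine_prohibited(10_000_000_000):,.0f}")  # EUR 700,000,000
```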
If you use any AI that touches EU citizens — even from outside Europe — you're in scope. The clock is already running, and the next major enforcement wave hits August 2026.
02 — The US: a patchwork at war with itself
The United States has no federal AI law. Instead, over 1,000 AI-related bills have been introduced across states in 2024–2025, with 38 states adopting more than 100 AI laws in the first half of 2025 alone. Meanwhile, the Trump administration has moved aggressively to block state-level regulation — triggering a constitutional standoff. [NCSL]
"We MUST have one Federal Standard instead of a patchwork of 50 State Regulatory Regimes. If we don't, then China will easily catch us in the AI race."
— President Donald Trump, December 2025 [WH]

Large incumbents — OpenAI, Andreessen Horowitz, and other Silicon Valley players — have publicly backed federal preemption and formed AI super PACs to support pro-deregulation candidates. Critics argue this creates a "regulatory moat": complex federal compliance standards are easier for Big Tech to absorb than for smaller challengers. The companies best positioned to shape the rules stand to benefit most from them.
The US isn't ungoverned — it's fractured. You're simultaneously subject to federal agency enforcement, state-specific laws, and an active legal battle over who gets to regulate what. Building to the strictest standard isn't paranoid; it's the only defensible strategy.
If you're a smaller company, the regulatory moat should concern you. The rules being written now will disproportionately advantage incumbents who can afford compliance infrastructure. Engaging early is a survival move.
03 — What's actually being enforced today
Regulation moves slowly. Court decisions move faster. These four cases set legal precedent that applies right now — regardless of what any legislature passes next.
Moffatt v. Air Canada — Air Canada's chatbot gave a customer incorrect information about bereavement fares. When sued, Air Canada argued the chatbot was a "separate entity" and thus not its responsibility. The British Columbia Civil Resolution Tribunal rejected this entirely, ruling that companies are fully liable for AI system outputs, the same as for human employee actions. [MOFA]

Mata v. Avianca — A New York attorney submitted legal briefs containing AI-generated case citations that didn't exist. ChatGPT had fabricated them with complete plausibility: real-sounding case names, courts, and outcomes. The attorney was sanctioned. The case sparked an emergency response across the legal profession and prompted dozens of courts to adopt AI disclosure rules. [MATA]

Getty Images v. Stability AI — Getty Images sued Stability AI for training its image generation model on millions of Getty photos without licence. The UK High Court ruled the case could proceed to trial. Meanwhile, similar suits proceed across multiple jurisdictions with no consistent outcome yet. The training data liability question remains the single biggest unresolved legal issue in AI.

NYT v. OpenAI — The New York Times sued OpenAI and Microsoft for using its journalism to train GPT models. The case is proceeding to trial, one of 51+ active AI copyright lawsuits. The outcome will determine whether AI training on web-scraped copyrighted content constitutes fair use. A verdict could reshape the economics of every major AI model.
Courts are setting AI policy faster than legislatures. The Air Canada ruling alone means every company with a customer-facing AI system is already on the hook for whatever that system says — whether you intended it or not.
The training data cases (Getty, NYT v. OpenAI) are existential for the industry. If fair use doesn't hold, the cost basis of every major foundation model changes overnight. Know what's in your data pipeline before the verdicts land.
04 — The governance gap
Across every jurisdiction, the technology is moving faster than the regulation. The gap between what AI can do and what any regulator can verify is widening, creating what scholars call a "governance vacuum."
AI medical devices often lack sufficient clinical validation studies: devices are approved based on technical function, not proven patient outcomes. [BPC]

Of the 1,000+ state AI bills introduced, fewer than 15% were enacted. Most die in committee. The legislative churn creates uncertainty without creating protection. [NCSL]

No major court has yet determined who bears liability when an AI system causes serious harm: doctor, hospital, developer, or deployer?
The black box problem is not a metaphor — it is a legal emergency. If a regulator cannot understand why a high-stakes decision was made, they cannot audit it, challenge it, or hold anyone accountable for it.
— Bipartisan Policy Center, AI Governance Report, 2025 [BPC]

The EU AI Act explicitly requires high-risk AI systems to be explainable and subject to human oversight. But "explainability" is not a solved technical problem. Many of the most powerful AI systems — including large language models — are fundamentally non-interpretable at the level regulators need. The regulation exists; in some cases, the technical means to comply with it do not.
We're in a window where AI can do things no regulator can verify, no court has fully adjudicated, and no liability framework has been tested. That's not a reason to freeze — it's a reason to build your own governance before someone else imposes it.
The companies that establish internal AI oversight, documentation, and explainability practices now won't just be compliant when the rules arrive — they'll be the ones that helped shape them.
05 — What compliance actually costs
The EU AI Act imposes different burdens depending on company size and AI risk classification. The result: compliance costs fall disproportionately on mid-market companies, who lack Big Tech's legal infrastructure but exceed the SME thresholds for reduced fines. [GT]

Companies substantially modifying existing GPAI models — through fine-tuning or retraining — become providers themselves for regulatory purposes and inherit all GPAI obligations. This has significant implications for any business customising foundation models for deployment. [EUAIA]
If you fine-tune a foundation model, you may have just made yourself a GPAI provider under EU law — with all the documentation, testing, and reporting obligations that come with it. Most companies don't know this yet.
The compliance burden hits mid-market companies hardest. They're too big for SME relief, too small for Big Tech's legal armies. If that's you, start building compliance infrastructure now — before the August 2026 enforcement wave.
Five moves to stay ahead of AI regulation.
Subscribe to Veltrix Collective. Stay across regulation changes as they happen. We track the EU AI Act timeline, US state bills, and enforcement actions so you don't have to. Free, every Tuesday.
Audit your AI stack using Claude or ChatGPT. Paste your tool list and ask: "Which of these would be classified as high-risk under the EU AI Act?" You'll be surprised how many touch HR, finance, or customer-facing decisions.
Check your vendor contracts for AI liability clauses. After Moffatt v. Air Canada, you can't disclaim what your AI does. Use Claude to review your terms of service and flag any AI disclaimer language that won't hold up; a scripted version of this review is the first sketch after this list.
Map your EU exposure. If any of your users, customers, or data subjects are EU citizens, you're in scope. Use n8n or Make to set up a monitoring workflow that flags EU AI Act enforcement updates from the European AI Office, or script it yourself; see the second sketch below.
Start a training data audit. Before NYT v. OpenAI sets precedent, know what's in your training pipeline. Use Claude Code to scan your data ingestion scripts and flag any web-scraped content that lacks clear licensing; a crude standalone heuristic is the third sketch below.
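First, the contract review. A minimal sketch using the Anthropic Python SDK, assuming an ANTHROPIC_API_KEY environment variable and a local terms_of_service.txt file; the model name and prompt wording are placeholders to adapt, and nothing here is legal advice:

```python
# Minimal sketch: flag AI-disclaimer language in your terms of service.
# Assumes the `anthropic` Python SDK and an ANTHROPIC_API_KEY env var.
# The model name and prompt are placeholders to adapt; not legal advice.
import anthropic

client = anthropic.Anthropic()  # reads ANTHROPIC_API_KEY from the environment

with open("terms_of_service.txt") as f:
    tos = f.read()

message = client.messages.create(
    model="claude-sonnet-4-20250514",  # swap in a current model name
    max_tokens=1024,
    messages=[{
        "role": "user",
        "content": (
            "Review these terms of service. Flag any clause that tries to "
            "disclaim liability for AI or chatbot outputs, and explain why "
            "it may not hold up after Moffatt v. Air Canada:\n\n" + tos
        ),
    }],
)
print(message.content[0].text)
```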
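Second, the EU monitoring workflow, scripted rather than built in n8n or Make. A sketch assuming the feedparser package; the feed URL is a placeholder, since we don't hardcode a real European AI Office feed here. Substitute whichever official feed or news source you actually track:

```python
# Minimal sketch: poll an RSS/Atom feed and flag EU AI Act items.
# Assumes the `feedparser` package (pip install feedparser).
# FEED_URL is a placeholder; replace it with a real feed you trust.
import feedparser

FEED_URL = "https://example.org/ai-office-feed.xml"  # placeholder URL
KEYWORDS = ("ai act", "enforcement", "penalty", "gpai")

def flag_entries(url: str):
    """Yield (title, link) for entries mentioning any watched keyword."""
    feed = feedparser.parse(url)
    for entry in feed.entries:
        text = (entry.get("title", "") + " " + entry.get("summary", "")).lower()
        if any(keyword in text for keyword in KEYWORDS):
            yield entry.get("title", ""), entry.get("link", "")

for title, link in flag_entries(FEED_URL):
    print(f"- {title}\n  {link}")
```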
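Third, the training data audit. A crude, standard-library-only heuristic that walks your ingestion scripts and flags hardcoded URLs or scraping calls with no licence annotation. The `# license:` comment convention is a hypothetical example, not a standard; adapt it to however your team actually records provenance:

```python
# Crude first-pass audit: find web-scraped sources in ingestion scripts
# that lack a licence annotation. The "# license:" comment convention is
# a hypothetical example; adapt to your own provenance scheme.
import os
import re

URL_RE = re.compile(r"https?://\S+")
SCRAPE_HINTS = ("requests.get", "urlopen", "scrapy", "BeautifulSoup")

def audit(root: str = "ingestion/") -> None:
    """Print every line that fetches a web source without a licence note."""
    for dirpath, _, filenames in os.walk(root):
        for name in filenames:
            if not name.endswith(".py"):
                continue
            path = os.path.join(dirpath, name)
            with open(path, encoding="utf-8") as f:
                for lineno, line in enumerate(f, start=1):
                    suspicious = URL_RE.search(line) or any(
                        hint in line for hint in SCRAPE_HINTS
                    )
                    if suspicious and "# license:" not in line:
                        print(f"{path}:{lineno}: unlicensed source? {line.strip()}")

if __name__ == "__main__":
    audit()
```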
06 — Don't navigate this blind
One briefing.
AI regulation is moving faster than any single person can track. We synthesise the EU AI Act timeline, US state battles, and global enforcement actions into one data-backed briefing — every Tuesday.