How Is AI Regulated in 2026?
The EU AI Act is now in force. The US has executive orders and a patchwork of state laws. China has its own regime. Here's what each framework actually requires — and a practical checklist for businesses deploying AI today.
00 — Why regulation is finally here
AI regulation wasn't inevitable — it became inevitable after documented harms from biased hiring tools, facial recognition wrongful arrests, and AI-generated disinformation. Governments moved from "wait and see" to "regulate now" between 2022 and 2024.
The EU moved fastest. The EU AI Act passed in March 2024 and entered into force in August 2024, with most of its obligations applying from August 2026, making it the first comprehensive binding AI regulation anywhere in the world. It takes a risk-based approach: the more consequential the AI application, the stricter the requirements. This framework has effectively become the global standard for multinational organisations, since any company selling AI products or services in the EU must comply.
The US took a different path: executive orders, voluntary commitments, and a patchwork of state laws. Biden's October 2023 Executive Order on Safe, Secure, and Trustworthy AI established the US AI Safety Institute and required safety testing disclosures for frontier models. The Trump administration rescinded that order in January 2025 and issued its own AI policy, though the NIST AI Risk Management Framework and state-level laws remain in effect.
01 — The EU AI Act: four risk tiers
The EU AI Act classifies AI applications by risk level and imposes proportionate requirements. Understanding where your AI deployment falls is the first compliance question.
Unacceptable risk: banned outright. AI systems that pose unacceptable risks to fundamental rights are prohibited entirely.
Examples: Social credit scoring by governments, real-time remote biometric identification in public spaces (with narrow law-enforcement exceptions), subliminal manipulation that harms users, exploitation of vulnerable groups.
High risk: tightly regulated. Mandatory conformity assessments, transparency, human oversight, accuracy requirements, and registration in an EU database before deployment.
Examples: AI in hiring and recruitment, credit scoring, educational assessment, law enforcement, migration decisions, healthcare diagnosis, safety-critical infrastructure. This covers a large share of enterprise AI deployments.
Limited risk: transparency obligations. Providers must disclose to users that they're interacting with AI. Chatbots and AI-generated content require clear labelling.
Examples: Customer service chatbots, AI content generation tools, deepfake image generation. Most consumer-facing AI applications fall here.
Minimal risk: no binding requirements. Voluntary codes of conduct are encouraged.
Examples: AI-powered spam filters, recommendation systems, video games using AI for non-player characters. Most AI applications fall here.
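The tier logic above can be sketched as a simple lookup. This is an illustrative sketch only, not legal advice: the category labels below are simplified paraphrases of the Act's Article 5 prohibitions and Annex III high-risk list, and classifying a real deployment requires legal review of the specific use.

```python
# Illustrative sketch only -- not legal advice. Category labels are
# simplified paraphrases of the EU AI Act's Article 5 and Annex III lists.

PROHIBITED = {"social_scoring", "subliminal_manipulation",
              "realtime_public_biometric_id"}
HIGH_RISK = {"hiring", "credit_scoring", "education_assessment",
             "law_enforcement", "migration", "healthcare_diagnosis",
             "critical_infrastructure"}
LIMITED_RISK = {"chatbot", "content_generation", "deepfake_generation"}

def classify_risk_tier(use_case: str) -> str:
    """Map a simplified AI use-case label to its EU AI Act risk tier."""
    if use_case in PROHIBITED:
        return "unacceptable"   # banned outright
    if use_case in HIGH_RISK:
        return "high"           # conformity assessment + EU database registration
    if use_case in LIMITED_RISK:
        return "limited"        # transparency and labelling obligations
    return "minimal"            # voluntary codes of conduct only

print(classify_risk_tier("hiring"))       # high
print(classify_risk_tier("spam_filter"))  # minimal
```

The point of the sketch is the default branch: anything not explicitly prohibited, high-risk, or transparency-bound lands in the minimal tier, which is where most AI applications sit.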
02 — Regulatory frameworks by jurisdiction
| Jurisdiction | Primary framework | Approach | Key requirements |
|---|---|---|---|
| European Union | EU AI Act (Reg. 2024/1689) | Binding, risk-based | Conformity assessments for high-risk AI; prohibited applications; mandatory transparency; GPAI provider disclosure |
| United States | Executive Order + NIST AI RMF + state laws | Voluntary federal; binding state-level | Safety testing disclosure for frontier models; New York City AI hiring law (Local Law 144); Colorado AI Act (SB 24-205) and insurance AI rules (SB 21-169); California frontier-model law (SB 53, after SB 1047 was vetoed) |
| United Kingdom | Pro-innovation, sector-based | Principles-based, voluntary | Sector regulators (FCA, ICO, CQC) apply existing rules to AI; AI Safety Institute for frontier model evaluation |
| China | Generative AI Service Provisions (2023) | Content-focused, mandatory | Security assessments before deployment; content moderation requirements; training data disclosure; no politically subversive content |
| Australia | Voluntary AI Ethics Framework + Mandatory Guardrails consultation | Moving from voluntary to binding | High-level principles adopted; mandatory guardrails for high-risk AI proposed in 2024 consultation |
03 — Compliance checklist for businesses deploying AI
If you're deploying AI in the EU or in a regulated US context, these are the baseline questions you need to be able to answer.
Classify your AI systems by risk tier. Does any AI you deploy or use fall into the EU AI Act's "high risk" categories (hiring, credit, healthcare, law enforcement, education, safety infrastructure)?
Identify your role. Are you a provider (building an AI system), a deployer (using a third-party AI system), or both? Requirements differ: deployers of high-risk AI must implement human oversight procedures and retain system logs.
Check your AI vendors. If you're using an AI API (OpenAI, Anthropic, Google), understand what their compliance status is for EU AI Act purposes. GPAI (general-purpose AI) providers have transparency and disclosure obligations.
Implement transparency. Any chatbot or AI system interacting with EU users must disclose that it's AI. AI-generated content must be labelled. Ensure your customer-facing AI meets this minimum requirement.
Conduct a risk assessment. For any high-risk application, document: intended purpose, foreseeable misuse, bias risks, data sources, accuracy requirements, human oversight procedures.
Check US state laws. If you're hiring in New York City or conducting insurance pricing in Colorado using AI, specific state laws apply regardless of EU status. More states are passing AI-specific legislation.
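The documentation items in the checklist above can be captured in a structured record. The sketch below uses hypothetical field names that mirror the checklist, not any official EU AI Act template; it simply makes gaps in the paperwork easy to spot.

```python
from dataclasses import dataclass, field

# Sketch of a risk-assessment record for a high-risk AI deployment.
# Field names are hypothetical and mirror the checklist items above;
# they are not an official EU AI Act template.

@dataclass
class AIRiskAssessment:
    system_name: str
    role: str                    # "provider", "deployer", or "both"
    intended_purpose: str
    foreseeable_misuse: list[str] = field(default_factory=list)
    bias_risks: list[str] = field(default_factory=list)
    data_sources: list[str] = field(default_factory=list)
    accuracy_requirements: str = ""
    human_oversight: str = ""
    discloses_ai_to_users: bool = False  # limited-risk transparency duty

    def open_items(self) -> list[str]:
        """Return the checklist fields that are still undocumented."""
        gaps = []
        if not self.foreseeable_misuse:
            gaps.append("foreseeable_misuse")
        if not self.bias_risks:
            gaps.append("bias_risks")
        if not self.data_sources:
            gaps.append("data_sources")
        if not self.accuracy_requirements:
            gaps.append("accuracy_requirements")
        if not self.human_oversight:
            gaps.append("human_oversight")
        return gaps

screening = AIRiskAssessment(
    system_name="CV screening model",
    role="deployer",
    intended_purpose="Rank applications for human review",
    human_oversight="Recruiter reviews every ranked shortlist",
)
print(screening.open_items())
```

Running the example flags the four fields still missing for the hypothetical CV screening deployment, which is the kind of gap a conformity assessment would surface.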
Is ChatGPT legal in the EU?
Yes. Under the EU AI Act, OpenAI is classified as a provider of a general-purpose AI model, which requires technical documentation, a policy for complying with EU copyright law, and a published summary of the content used for training. Italy briefly banned ChatGPT in 2023 over GDPR concerns; that ban was lifted after OpenAI made compliance changes. Most standard business and consumer use of ChatGPT falls in the "limited risk" or "minimal risk" tier.
Do AI regulations apply to small businesses?
It depends on what you're doing with AI. If you're using AI tools for internal productivity (writing, analysis, coding), you're largely in the minimal-risk category with no significant compliance requirements. If you're deploying AI in hiring decisions, customer credit assessment, or healthcare-adjacent services, the EU AI Act or US state laws may apply. The EU AI Act applies to anyone deploying AI that affects EU residents, regardless of company size or location.
The question now is whether you're ready.
The EU AI Act creates binding obligations for any organisation using AI in ways that affect EU residents. For most businesses, this means at minimum: transparency requirements for customer-facing AI, and review of any AI used in employment, credit, or healthcare decisions. The penalties for non-compliance are significant: up to €35 million or 7% of global annual turnover for prohibited practices, and up to €15 million or 3% for breaches of most other obligations, including the high-risk provisions.
The organisations that treat AI compliance as a checkbox exercise will struggle. Those that treat it as a genuine opportunity to understand their AI systems better — to audit, document, and govern them properly — will be better positioned as regulation tightens globally over the coming years.