Veltrix
March 20, 2026

The 8 Biggest Ethical Issues With AI in 2026 (And What's Being Done About Them)

Bias, privacy, misinformation, IP, accountability, labour displacement, power concentration, and environmental cost — with real regulatory actions.

AI ethics isn't an academic exercise. Every AI system embodies choices about whose values matter, whose data was used, who bears the risks, and who benefits. Those choices have real-world consequences for real people — and the scale of AI deployment means small ethical failures can affect millions simultaneously.

The EU AI Act (fully in force by August 2026) is the most comprehensive response to AI ethics concerns globally, imposing binding requirements on AI systems used in high-stakes decisions. [EUAI] The US has issued executive orders and voluntary guidance. China has its own AI regulations focused on content and algorithmic transparency. [CHINA] None of these frameworks fully addresses all eight issues below.

01
Bias and discrimination

AI systems trained on historical data reproduce and amplify historical inequities. Amazon's internal hiring AI downgraded resumes from women because it was trained on hiring data from a male-dominated industry. [AMZN] Facial recognition systems have false-positive rates 10-100x higher for Black and East Asian faces. [NIST] Healthcare algorithms have systematically under-assigned Black patients to care programmes. The problem isn't that AI is racist — it's that the data it learned from reflects societies that were.

What's being done: The NIST AI Risk Management Framework includes fairness requirements. The EU AI Act classifies biometric identification and hiring AI as "high risk," with mandatory conformity assessments. Several companies have published algorithmic auditing reports.

02
Privacy and surveillance

Clearview AI scraped 30 billion facial images from social media without consent and sold law enforcement access to a system that can identify anyone from a photo. [CLAIR] AI-powered employee monitoring tools track keystrokes and webcam activity, and infer emotional states. AI-powered advertising systems build detailed behavioural profiles from activity across thousands of websites. People have largely not consented to this surveillance architecture — it emerged through incremental product decisions, not collective choices.

What's being done: GDPR in Europe provides some data protection. Several US states have passed facial recognition bans. The EU AI Act bans real-time remote biometric surveillance in public spaces (with exceptions for terrorism).

03
Misinformation and deepfakes

Generative AI dramatically reduces the cost of creating convincing false content. AI-generated audio of politicians saying things they never said has circulated before elections in multiple countries. Deepfake pornography of non-consenting individuals is a documented harm. AI-generated "news" articles with fabricated quotes have been published as legitimate journalism. The information verification problem that predated AI has been made structurally harder. [STAN]

What's being done: The EU AI Act requires mandatory labelling of AI-generated content. Several AI companies are implementing watermarking. Meta, Google, and YouTube have policies on AI-generated political content in elections.

04
Intellectual property

Generative AI models are trained on copyrighted books, images, music, and code without permission or compensation to creators. Artists have found their distinctive styles reproduced by AI image generators. Authors including Sarah Silverman and George R.R. Martin have filed suit against AI companies over training data. Getty Images sued Stability AI over the use of its stock images as training data. The legal question — does AI training on copyrighted content constitute infringement? — is actively being litigated and remains unresolved. [GETTY]

What's being done: Several cases are progressing through US courts. Some AI companies (Adobe Firefly, Shutterstock AI) trained only on licensed content. The EU AI Act requires disclosure of training data. No clear legal settlement yet.

05
Accountability gaps

When an AI system makes a wrong decision that harms someone, who is responsible? The algorithm? The company that built it? The company that deployed it? The user who provided the input? Current legal frameworks weren't designed for distributed, opaque AI decision-making. A parole algorithm that contributed to excessive incarceration, a mortgage AI that discriminated by race, a hiring tool that systematically excluded qualified candidates — accountability is murky in each case.

What's being done: The EU AI Act creates liability requirements for high-risk AI systems. Several US states have passed algorithmic accountability laws. Even so, the technology is still evolving faster than the legal framework — the accountability gap persists.

06
Labour displacement

The economic disruption from AI automation is real and uneven. Entry-level knowledge workers face the steepest task displacement. Junior writers, coders, customer service agents, data analysts — roles that provided entry points to careers are being automated first. The workers displaced tend to be younger, lower-paid, and less protected. The gains accrue to the companies deploying AI. [WEF] The net employment effect may be positive in aggregate — but aggregate statistics don't pay rent for individual people whose roles have been eliminated.

What's being done: Limited policy responses so far. Some countries are exploring AI taxes or universal basic income pilots. Training and reskilling programmes are expanding, but not at the scale or speed of displacement.

07
Concentration of power

The most capable AI systems require hundreds of millions of dollars to train, enormous data centres, and proprietary datasets. This creates structural advantages for a small number of companies — primarily Google, Microsoft/OpenAI, Meta, Amazon, and Anthropic. The companies that control AI infrastructure have unprecedented leverage over knowledge production, communication, and economic activity. The open-source movement (Meta's LLaMA, Mistral) provides a partial counterweight, but the gap between frontier closed models and open models remains substantial. [STAN2]

What's being done: EU competition authorities are investigating AI market concentration. Some governments are funding national AI infrastructure. Open-source models are narrowing the capability gap with each cycle.

08
Environmental cost

Training large AI models consumes significant computational resources. GPT-3's training run emitted roughly 552 tonnes of CO2 equivalent. [ENV] Data centre electricity consumption for AI inference at scale is growing rapidly — Goldman Sachs projected AI data centre power consumption would increase 160% by 2030. The environmental cost is real, concentrated, and largely borne by communities near data centres rather than by the companies or users benefiting.

What's being done: Several AI labs have committed to carbon neutrality. Architecture improvements (more efficient models such as DeepSeek V3) reduce per-inference costs. Renewable energy procurement for data centres is increasing, but not keeping pace with demand growth.

Sources
[EUAI] European Commission — EU AI Act, 2024 (eur-lex.europa.eu)
[NIST] NIST — Face Recognition Vendor Test, 2019 (nist.gov)
[GETTY] Getty Images v. Stability AI, D. Del., 2023 (courtlistener.com)
[ENV] Patterson et al. — Carbon Emissions and Large Neural Network Training, 2021 (arxiv.org/abs/2104.10350)
[WEF] World Economic Forum — Future of Jobs Report 2025 (weforum.org)
[STAN2] Stanford HAI — AI Index Report 2025 (aiindex.stanford.edu)

The ethical problems with AI aren't abstract.
They're documented harms affecting real people now.

Every item on this list has a real court case, a real regulatory action, or a real documented harm attached to it. These aren't concerns for a future version of AI — they're issues with the AI systems currently deployed across healthcare, hiring, credit, law enforcement, and media.

The most important ethical question about AI isn't "what could go wrong in the future?" It's "who is responsible for fixing what's already going wrong?" The regulatory frameworks are arriving, but enforcement capacity, funding, and political will remain uncertain. In that gap, the organisations deploying AI and the professionals using it have genuine ethical responsibilities — which start with understanding what those responsibilities are.


Veltrix Collective · Sources: EU AI Act, NIST, Reuters, NYT, WEF, Stanford HAI, Patterson et al. Published April 2026.

Written by Luke Madden, founder of Veltrix Collective. Data synthesis and analysis by Vel.