The 2026 AI Index from Stanford's Institute for Human-Centered AI documents a deepening disconnect between expert optimism and public anxiety. Gen Z anger about AI is rising fast. Employment among younger workers in AI-exposed fields is already declining. And the US has the lowest trust in its government to regulate AI of any country surveyed.
Stanford University's Institute for Human-Centered Artificial Intelligence (HAI) released its annual AI Index Report on Monday, and its most striking finding is not about model performance or investment volumes but about the widening gulf between those building AI and those living with it.
Across nearly every dimension the 423-page report examines, expert opinion and public sentiment point in opposite directions.
“AI experts and the US public disagree on nearly everything about AI's future,” the report concludes, with the notable exception that both groups believe AI will hurt elections and personal relationships.
The numbers are stark. A Pew Research Center study published last month and cited by the report found that only 10% of Americans said they were more excited than concerned about the increased use of AI in daily life.
Among AI experts surveyed for the same study, 56% said they believed AI would have a positive impact on the US over the next 20 years.
The gap is largest around the economy and jobs: 69% of experts felt AI would benefit the economy, against 21% of the general public. On whether AI will have a positive impact on how people do their jobs, 73% of experts said yes, compared with 23% of the public.
And while 84% of experts said AI would largely benefit medical care over the next 20 years, only 44% of the US public agreed. Meanwhile, nearly two-thirds of Americans, 64%, believe AI will lead to fewer jobs over the next 20 years.
The report notes that employment among younger workers in AI-exposed fields has already started to decline, suggesting the public's concern is not merely theoretical.
Gen Z's relationship with AI is particularly revealing. A Gallup poll conducted for the Walton Family Foundation and GSV Ventures in February and March 2026, surveying 1,572 people aged 14–29, found that the share of Gen Z respondents who described themselves as excited about AI fell from 36% in 2025 to 22% in 2026.
The share feeling hopeful dropped from 27% to 18%, while the share feeling angry rose from 22% to 31%. This is happening even though around half of Gen Z uses AI daily or weekly.
Gallup's senior education researcher Zach Hrynowski attributed the rising anger to AI dimming prospects for entry-level workers, noting that the oldest members of Gen Z, those most exposed to the job market, are the angriest.
On regulation, the disconnect takes on a geographic dimension. The US reported the lowest trust in its own government to regulate AI of any country surveyed, at 31%. Singapore ranked highest, at 81%.
Among Americans, 41% said federal AI regulation would not go far enough, while only 27% said it would go too far. Globally, the EU is trusted more than the US or China to regulate AI effectively, according to a separate Pew survey of 25 countries.
The report also documents the gap between AI's technical achievements and its societal costs. AI reached 53% of the population faster than either the personal computer or the internet did. Documented AI incidents, defined as harms or near-harms from deployed AI systems, reached 362 in 2025, up from 233 in 2024, at a time when 88% of organisations report using AI.
The environmental footprint is growing correspondingly: training xAI's Grok 4 is estimated to have produced more than 72,000 tonnes of CO₂, and the water required for GPT-4o inference workloads is said to be enough to sustain 12 million people.
The report notes, with some irony, that despite AI's rapid advance, the best frontier models still read analog clocks correctly only around 50% of the time, compared with roughly 90% for unspecialised humans.
Stanford's report acknowledges its own limitations: it is financially supported by Google, OpenAI, and others, and was produced with assistance from ChatGPT and Claude.
Its finding that “Responsible AI is not keeping pace with AI capability, with safety benchmarks lagging and incidents rising sharply” lands as an implicit critique of the very organisations that helped fund its publication.