Chinese courts rule AI replacement is not legal grounds for firing workers as global tech layoffs hit 78,000

TL;DR

Chinese courts in Hangzhou and Beijing have ruled in two separate cases that companies cannot fire workers simply to replace them with AI, establishing that AI adoption is a strategic business choice rather than an unforeseeable change in circumstances under China's Labour Contract Law. The rulings arrive as 78,000 tech workers have been laid off globally in early 2026 with nearly half attributed to AI, and create a stark contrast with the US and EU, where no equivalent legal protection exists.

A quality assurance supervisor identified only as Zhou joined a technology company in Hangzhou in November 2022. His job was to work with AI large language models, optimising their outputs and filtering sensitive content. He earned 25,000 yuan per month, roughly $3,640. In 2024, the company decided that its AI systems had improved to the point where Zhou's role could be automated.

It reassigned him to a lower-level position with a 40 per cent pay cut, reducing his salary to 15,000 yuan. Zhou refused. The company fired him. Zhou filed for arbitration. The arbitration panel ruled the dismissal unlawful. The company appealed. The Hangzhou Intermediate People's Court upheld the ruling.

The court found that a company's decision to adopt AI is a strategic business choice, not an unforeseeable change in objective circumstances, and therefore does not qualify as legal grounds for termination under China's Labour Contract Law. The company was ordered to pay compensation. The ruling, published this week, is the second Chinese court decision in six months to establish the same principle: you cannot fire a worker in China simply because an AI can now do their job.

The precedent

The first case was decided in Beijing. An employee surnamed Liu had worked as a data collector at a technology company since 2009, responsible for traditional manual map data collection. In early 2024, the company shifted entirely from manual collection to AI-driven automated data collection, cancelled its navigation products department, and terminated Liu's contract, citing a major change in objective circumstances that made the contract unperformable.

The Beijing Municipal Human Resources and Social Security Bureau published the case in December 2025 as one of its ten most significant labour arbitration decisions of the year. The arbitration panel ruled that the introduction of AI fell within the scope of the employer's autonomous business decisions and represented technological innovation proactively implemented to adapt to market conditions.

Such decisions, the panel found, may require adjustments to job structures, but those adjustments fall within the risks an employer should reasonably foresee during normal business operations. The company sued to overturn the arbitration. Both the trial court and the appeals court upheld the ruling.

The legal reasoning in both cases turns on Article 40 of China's Labour Contract Law, which permits termination when objective circumstances materially change and render a contract unperformable. The provision is typically applied to events genuinely beyond the employer's control: force majeure, government-mandated relocations, production suspensions caused by regulatory changes.

Chinese courts have now determined, in two separate jurisdictions, that AI adoption does not meet this standard. The technology was not imposed on the companies. It was chosen by them. The courts drew a distinction between an external shock that makes a job impossible and an internal decision that makes a job redundant. The first is a legal basis for termination. The second is not.

The context

The rulings arrive at a moment when the global technology industry is cutting jobs at a pace not seen since the post-pandemic corrections of 2022 and 2023. More than 78,000 technology workers were laid off in the first four months of 2026, and nearly half of those cuts were directly attributed to AI replacing human roles. Meta cut approximately 8,000 positions in May alone, with every major restructuring announcement citing AI as the primary driver.

Oracle eliminated between 20,000 and 30,000 employees in March. Block's chief executive stated that the company's reduction from 10,000 to 6,000 employees was driven by growing AI capabilities. Meta's restructuring is the clearest example of the pattern: traditional roles are eliminated, the savings are redirected to AI infrastructure, and the headcount that remains is reoriented around building and operating AI systems rather than performing the tasks those systems are replacing.

China is narrowing the gap with the United States on AI performance while spending a fraction of what American companies invest in compute. The country has no interest in slowing the adoption of AI in its economy. China launched a months-long enforcement campaign against AI misuse in 2026, targeting deepfakes, fraud, and disinformation, and has introduced mandatory labelling standards for AI-generated content and new regulations governing AI chatbots and virtual human services.

The government's approach is not to restrict AI but to regulate its applications while ensuring that the economic benefits do not come at the expense of social stability. China's urban youth unemployment rate reached 15.3 per cent in March, and the political sensitivity of mass layoffs in an economy that is already struggling with deflation, a property crisis, and weak consumer demand makes the court rulings as much about maintaining order as about interpreting contract law.

The comparison

The United States has no equivalent protection. American employment law operates on an at-will basis in every state except Montana, meaning employers can terminate workers for any reason that is not specifically prohibited by statute, and being replaced by AI is not a prohibited reason.

A Senate bill has been introduced that would require companies to file quarterly reports to the Department of Labor identifying how many employees were laid off because their functions were automated by AI, but the legislation has not passed and is not expected to in the current Congress. Illinois requires employers to notify workers if AI is used in hiring, discipline, or discharge decisions. Colorado's AI Act, taking effect in mid-2026, mandates risk management policies and annual assessments of AI's impact on employment decisions. Neither state has enacted anything resembling what Chinese courts have established: a legal principle that says AI replacement alone is not grounds for firing someone.

The European Union's AI Act addresses AI in employment by classifying AI systems used for recruiting, screening, performance evaluation, and other workplace decisions as high-risk, subject to requirements for human oversight, worker notification, and logging. The high-risk obligations take full effect in August 2026. But the AI Act does not prohibit AI-driven layoffs. It regulates how AI is used in employment decisions, not whether a company can eliminate positions because of AI.

The European Trade Union Confederation has called for stronger protections, and legal scholars have proposed a European AI Social Compact that would combine employment support, training, and social protections to cushion displacement. None of these proposals have been enacted. The gap between China's position and the West's is not that Europe and America are unaware of the problem. It is that they have chosen, so far, not to solve it through the courts or through legislation.

The tension

The Chinese rulings create a legal framework that is coherent on its own terms but produces a genuine tension for companies operating in the country. If AI adoption is a strategic business choice rather than an unforeseeable change in circumstances, and if strategic business choices cannot justify termination, then companies that invest in AI systems that automate existing roles must either retrain the workers those systems replace, reassign them to equivalent positions at equivalent pay, or continue employing them in roles that the company has determined are no longer necessary.

The courts have said that the costs of technological transformation should not be borne solely by workers. The implication is that they should be borne by the companies that chose to transform.

What AI is actually doing to jobs is more complicated than the headlines suggest. Roughly 71 per cent of European firms are reconsidering job responsibilities because of AI, but reconsidering is not the same as eliminating. Klarna fired 700 customer service workers and replaced them with an AI chatbot in 2024, only to begin rehiring human agents in 2026 after repeat contacts jumped 25 per cent and customer satisfaction deteriorated on complex interactions. The CEO admitted publicly that the strategy had failed.

The pattern across the early adopters is that AI replaces tasks more effectively than it replaces jobs, and that the companies which cut deepest are often the first to discover that the remaining human work (the judgment, the escalation, the context that the model cannot hold) is more valuable than they estimated when they decided to automate.

China's courts have not said that companies cannot use AI. They have said that companies cannot use AI as a pretext to fire people. The distinction matters because it forces a specific organisational behaviour: if you automate a role, you must find another role for the person who held it, at comparable terms. That is expensive. It is also, the courts have decided, the law.

Whether it makes Chinese companies less competitive or more resilient will depend on whether AI actually replaces the workers or merely changes what the workers do. The early evidence, from Klarna to the 78,000 layoffs to the courts in Hangzhou and Beijing, suggests that the answer is not yet clear, and that China has decided it would rather err on the side of the worker until it is.