Anthropic signs biggest compute deal yet with Google and Broadcom as run rate hits $30bn

In short: Anthropic has agreed to access approximately 3.5 gigawatts of next-generation Google TPU compute capacity via Broadcom from 2027, its largest infrastructure commitment to date — while simultaneously disclosing that its revenue run rate has surpassed $30bn, more than tripling from roughly $9bn at the end of 2025.

Anthropic has announced it is securing multiple gigawatts of next-generation compute capacity through a new agreement with Google and Broadcom, while disclosing revenue growth figures that underscore why the AI lab now requires infrastructure at a scale that would have seemed implausible two years ago. The deal, announced on 6 April 2026, gives Anthropic access to approximately 3.5 gigawatts of Google tensor processing unit (TPU) capacity via Broadcom starting in 2027, building on the 1 gigawatt already being supplied to the company in 2026.

Krishna Rao, Anthropic's chief financial officer, described it as “our most significant compute commitment to date,” adding that the agreement represents a continuation of the company's “disciplined approach to scaling infrastructure.” The majority of the new capacity will be located in the United States, extending Anthropic's November 2025 commitment to invest $50bn in American AI computing infrastructure.

Three parties, one infrastructure layer

The announcement is as much about Broadcom as it is about Anthropic or Google. Under the new arrangement, Broadcom acts as the intermediary layer between Google's custom silicon and Anthropic's training and inference workloads. In parallel, Broadcom has signed a separate long-term agreement with Google to design and supply future generations of custom TPU chips, and a supply assurance agreement to provide networking and other components for Google's next-generation AI data racks through 2031.

This makes Broadcom an increasingly indispensable node in the AI infrastructure graph. The chipmaker, led by CEO Hock Tan, is not building AI models; it is building the silicon and the interconnects on which AI models are built. Broadcom shares rose approximately 3% in extended trading on the announcement, a reaction that reflects investor appetite for companies positioned at the physical layer of the AI stack rather than the application layer on top of it. Analysts at Mizuho, led by Vijay Rakesh, estimated that Broadcom would record $21bn in AI revenue from Anthropic in 2026 alone, rising to $42bn in 2027, figures that, even as projections, illustrate the financial weight of what is being committed.

Broadcom had first signalled the scale of its Anthropic relationship in September 2025, when Hock Tan disclosed during an earnings call that a mystery customer had placed a $10bn order for custom TPU racks. In December 2025, he confirmed the customer was Anthropic, and that an additional $11bn order had since followed. The April 2026 announcement is the third act of the same story: a partnership that has now graduated from a reported $21bn commitment to multi-gigawatt infrastructure with a defined delivery timeline.

Revenue and customers: the numbers driving the infrastructure

The compute deal is intelligible only against the backdrop of Anthropic's commercial growth. The company says its run-rate revenue has now exceeded $30bn, up from approximately $9bn at the end of 2025. That trajectory, a more than threefold increase in roughly three months, is the result of a compounding enterprise sales motion that accelerated sharply after Anthropic closed its Series G funding round on 12 February 2026. That round raised $30bn at a post-money valuation of $380bn, led by GIC and Coatue, and co-led by D.E. Shaw Ventures, Dragoneer, Founders Fund, ICONIQ, and MGX.

When the Series G closed, Anthropic reported that more than 500 business customers were each spending over $1m on an annualised basis. As of the April announcement, that number has exceeded 1,000, doubling in less than two months. The pace of enterprise adoption is the proximate cause of the compute expansion: more revenue requires more inference capacity, more inference capacity requires more training compute, and more training compute requires more gigawatts.

Claude's multi-cloud architecture

What distinguishes Anthropic's infrastructure approach from many of its peers is an explicit multi-vendor chip strategy. Claude is trained and served across three hardware platforms: Amazon's Trainium chips, Google's TPUs, and Nvidia GPUs. Anthropic says Claude is the only frontier model available on all three major cloud platforms (AWS, Google Cloud, and Microsoft Azure), a claim that carries commercial as well as technical significance.

The multi-vendor stance gives Anthropic both resilience and negotiating leverage. If capacity is constrained on any single platform, workloads can shift. If one chipmaker faces supply disruption, export controls, or pricing pressure, Anthropic is not exposed to the full force of that shock. The strategy has precedent: Microsoft's own AI models reflect a similar instinct to hedge against single-vendor dependence, though in Microsoft's case the hedge is against a partner rather than a hardware supplier.

The AWS relationship remains foundational. In late 2024, Anthropic named Amazon its primary cloud and training partner, with total Amazon investment reaching $8bn. Project Rainier, an Anthropic supercomputer cluster in Indiana running roughly 500,000 Amazon Trainium 2 chips, was expected to scale beyond one million Trainium 2 chips by the end of 2025. The Google relationship, which now extends through the new Broadcom deal to multi-gigawatt scale in 2027, sits alongside this rather than replacing it.

The US infrastructure commitment

The April deal is framed explicitly as an extension of Anthropic's November 2025 domestic infrastructure pledge: a $50bn commitment to American AI computing infrastructure, developed initially in partnership with Fluidstack, the UK-based neocloud operator, with data centre sites in Texas and New York coming online through 2026. The new Broadcom capacity, the majority of which will be US-based, expands that footprint into 2027 and beyond.

This domestic emphasis is not incidental. The Trump administration's AI Action Plan has explicitly targeted US-based compute capacity as a strategic priority, and Anthropic, like its peers, has positioned its infrastructure investments accordingly. Whether that alignment reflects sincere strategic conviction or tactical regulatory positioning — or both — the practical effect is the same: a substantial share of the world's next-generation AI training capacity is being locked into American geography.

What the deal says about the compute arms race

The Anthropic-Google-Broadcom announcement is a data point in a pattern that has been building for 18 months. SoftBank's $40bn bridge loan to fund its OpenAI commitment reflected the same underlying dynamic: AI labs have grown so fast that their compute requirements now exceed what can be financed from revenue alone, requiring financial engineering at a scale once reserved for infrastructure utilities. Meta's $27bn infrastructure deal with Nebius reflects a parallel logic at the hyperscaler level.

The compute arms race is also reshaping how AI companies manage their relationships with the services built on top of their models. Anthropic has been attentive to this: the company recently moved to restrict access to Claude via certain third-party frameworks, a decision that illustrated how the cost dynamics of frontier model inference are forcing AI labs to make difficult choices about which use cases they subsidise and which they price explicitly.

For Broadcom, the trajectory is simpler: a chipmaker that was not widely discussed in the context of AI two years ago is now a load-bearing element of the infrastructure on which two of the world's most consequential AI models, Google's Gemini and Anthropic's Claude, are built and served. That position, cemented through 2031 for Google's custom silicon and through the new multi-gigawatt agreement for Anthropic's TPU access, is the real story beneath the headline numbers. Nvidia remains the dominant force in AI accelerators, and its enterprise AI platform continues to expand its reach. But Broadcom's rise as the custom silicon partner of choice for hyperscale AI compute is one of the defining semiconductor industry shifts of this decade.
