Google and Anthropic on Thursday announced a landmark agreement under which Google will supply up to one million of its custom-built Tensor Processing Units (TPUs) to the AI startup, marking one of the largest chip deals in the generative AI era.
The memorandum of understanding, valued in the “tens of billions of dollars,” will deliver well over one gigawatt of computing capacity to Anthropic by 2026. For scale, one gigawatt is roughly the amount of electricity needed to power 350,000 homes.
Under the agreement, Anthropic will use Google’s TPUs to train and serve future generations of its Claude model family, which it markets chiefly to enterprise customers.
In a blog post, Anthropic said the deal significantly expands its compute resources while deepening its long-standing partnership with Google Cloud.
For Google, the move underscores its bid to become not just a platform provider but a critical infrastructure enabler in the AI arms race. By opening up its TPU fleet to an external partner at scale, it positions itself as an alternative to rival hardware suppliers such as Nvidia and strengthens its influence in the fast-growing market for generative AI infrastructure.
Analysts say the timing and scale of the agreement reflect the fierce competition to secure hardware and cloud services at a moment when both training and serving large AI models demand enormous compute and power budgets. With Anthropic already relying on Amazon Web Services and Nvidia hardware, the expansion into Google’s chips signals a multi-platform strategy aimed at resilience and performance.
Anthropic, founded in 2021 by former OpenAI executives, was recently valued at about $183 billion following a $13 billion fundraising round. The company has emphasized safety, alignment, and enterprise use cases for Claude, targeting business clients amid mounting demand for generative AI solutions.
In a statement, Google Cloud CEO Thomas Kurian said Anthropic’s decision to expand its use of TPUs reflected the “strong price-performance and efficiency” its teams have experienced over several years. Anthropic noted that the collaboration would help ensure the responsible deployment of Claude models at scale.
While financial terms were not disclosed beyond the “tens of billions” estimate, analysts expect the agreement to rank among the largest infrastructure deals ever signed between a cloud provider and an AI company. Industry experts have suggested that pacts of this scale may draw scrutiny from competition and national security regulators in the United States and Europe.
For Anthropic, the partnership offers a competitive edge in model training speed, scaling capacity, and cost efficiency. For Google, it reinforces its cloud and AI hardware ecosystem at a critical moment when the company faces growing competition from Amazon, Microsoft, and Nvidia in the enterprise AI infrastructure market.
Both companies have said they plan to maintain multi-cloud and multi-hardware strategies, indicating that the partnership is not exclusive.
With deliveries set to ramp up in 2026, the agreement could set a precedent for how AI developers and infrastructure providers collaborate to meet the ever-growing demand for computing power in the generative AI era.