The $100 Million Training Bill: Why Only Three Companies Can Afford to Build the Next Generation of AI
The electricity bill alone for a single training run now runs into the millions of dollars, paid at the enormous, buzzing complexes of cooling towers and server racks that have gone up quickly in Northern Virginia, Oregon, Ireland, and Singapore. Add the hardware: thousands of NVIDIA H200 or B200 GPUs running nonstop for months, each costing tens of thousands of dollars to buy and far more to operate at scale.
Add the data: at the frontier, obtaining, cleaning, and licensing the text, code, and other material a model trains on is a nine-figure problem in itself. Add the people: senior AI scientists at top labs now command total compensation packages that approach or surpass $10 million a year, which makes sense once you consider that the models they build generate billions of dollars in revenue and will determine which businesses survive the next decade of technological change. Put it all together, and training a frontier AI model in 2026 costs more than $100 million. That figure is rising, not falling.
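The components above can be sketched with back-of-envelope arithmetic. Every number in this sketch is an illustrative assumption, not a reported figure; the point is only that plausible inputs for hardware, energy, and data each land in the tens to hundreds of millions.

```python
# Back-of-envelope sketch of a frontier training-run budget.
# All parameter values are illustrative assumptions, not reported numbers.

def training_run_cost(
    num_gpus=20_000,            # assumed H200/B200-class cluster size
    gpu_capex_per_unit=30_000,  # assumed purchase price per GPU, USD
    amortization_runs=6,        # assumed runs over which hardware is amortized
    run_months=4,               # assumed wall-clock training duration
    power_kw_per_gpu=1.2,       # assumed draw per GPU incl. cooling overhead, kW
    electricity_per_kwh=0.08,   # assumed industrial electricity rate, USD
    data_and_licensing=100e6,   # the "nine-figure" data problem from the text
):
    """Rough per-run cost: amortized hardware + electricity + data."""
    hardware = num_gpus * gpu_capex_per_unit / amortization_runs
    hours = run_months * 30 * 24
    energy = num_gpus * power_kw_per_gpu * hours * electricity_per_kwh
    return hardware + energy + data_and_licensing

total = training_run_cost()
print(f"~${total / 1e6:.0f}M per run")
```

With these assumed inputs the sketch lands in the low hundreds of millions, comfortably above the $100 million threshold, and it excludes staff compensation, failed experimental runs, and data center construction entirely.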
| Category | Details |
|---|---|
| Training Cost (2026) | Frontier AI model training now exceeds $100 million per run — driven by GPU clusters, data center costs, and energy requirements |
| Leading Frontier Labs | OpenAI, Google DeepMind, and Anthropic — the three organizations consistently positioned to train the most capable foundational models |
| Anthropic Funding (Feb 2026) | $30 billion Series G round — backed by Founders Fund, Coatue, and NVIDIA |
| Talent Costs | Top AI researchers at leading labs rumored to earn up to $10 million per person annually in total compensation |
| Hardware Required | Thousands of H200/B200 GPUs running for months — accessible only to companies with hyperscaler backing (Google, Microsoft, Amazon) |
| Wider AI Market Activity | 17 U.S. AI companies raised $100M+ in the first six weeks of 2026 — almost none building new foundational models |
| Notable Specialized Raises | Skild AI ($1.4B), ElevenLabs ($500M) — building applications on top of existing base models, not training new ones |
| “Build vs. Buy” Shift | By 2026, the vast majority of AI developers are fine-tuning or using existing models rather than attempting to build foundational models from scratch |
| Structural Dynamic | The economics resemble early cloud computing — massive upfront investment by a few, then near-zero marginal cost to serve API customers globally |
| Further Reading | AI investment and policy data at Stanford HAI AI Index |
The practical result of those economics is that only a tiny number of organizations can support them. The three most frequently credited with training the most capable foundational models, the large language models on which every other company’s products are built, are OpenAI, Google DeepMind, and Anthropic.
Each has access to an essentially distinct capital structure: OpenAI through Microsoft’s multibillion-dollar commitment and its own revenue growth; Google DeepMind through Alphabet’s balance sheet and infrastructure; and Anthropic through a fundraising trajectory that culminated in a $30 billion Series G round in February 2026, backed by Founders Fund, Coatue, and NVIDIA. These are not startup valuations. They are the financial profiles of organizations funded at the scale of major infrastructure projects, because that is what frontier AI training has become.
The broader AI investment market is, by most accounts, very active in 2026. In just the first six weeks of the year, seventeen American AI companies raised at least $100 million. Skild AI raised $1.4 billion. ElevenLabs raised $500 million. Yet very few of these businesses are building foundational models. They are building on top of them, using the APIs offered by OpenAI, Google, and Anthropic to develop products in healthcare, coding assistance, legal research, creative tools, and dozens of other vertical applications.
The “build vs. buy” dilemma that AI engineers faced three years ago has largely resolved itself: no business without a hyperscaler’s infrastructure can afford to build a foundational model from scratch. Most have accepted that, and the result is a remarkably productive application layer on top of the core models.
This produces a structural dynamic that echoes an earlier chapter of technology history. In the early years of cloud computing, the capital required meant that only a few companies, among them Amazon, Microsoft, and Google, could afford to build the infrastructure. Everyone else rented capacity from them. The cloud providers profited enormously, and a generation of software businesses built products they could never have operated on their own.

One layer up, something similar is now taking place. The underlying models are the infrastructure. The application companies are the software businesses renting capacity. The difference is that, unlike raw storage and compute, the underlying technology carries ramifications of its own: the models themselves, the judgments ingrained in their training, and the values they encode and reflect.
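The economics behind that infrastructure analogy can be sketched in a few lines: a one-time training cost, amortized across API volume, drives the average cost per token toward the tiny marginal cost of inference. Both numbers below are illustrative assumptions, not reported figures.

```python
# Illustrative fixed-vs-marginal cost arithmetic; all figures are assumed.
TRAINING_COST = 150e6              # assumed one-time frontier run cost, USD
MARGINAL_COST_PER_M_TOKENS = 0.40  # assumed inference cost per million tokens, USD

def avg_cost_per_m_tokens(millions_of_tokens_served):
    """Average cost per million tokens: amortized training plus marginal serving."""
    amortized = TRAINING_COST / millions_of_tokens_served
    return amortized + MARGINAL_COST_PER_M_TOKENS

# As serving volume grows, average cost collapses toward marginal cost.
for volume in (1e6, 1e8, 1e10):
    print(f"{volume:.0e} M tokens served -> ${avg_cost_per_m_tokens(volume):,.2f} per M tokens")
```

At low volume the training cost dominates; at hyperscaler volume it all but vanishes. That is why a handful of massive upfront investments can profitably serve the entire application layer, just as it did for cloud storage and compute.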
It is difficult to dismiss the concentration of frontier AI capability in three organizations as a passing phase in a competitive industry that will eventually diversify. With costs falling at lower capability tiers while continuing to climb at the frontier, that concentration may instead be a structural feature of how expensive this technology has become to produce, one that does not go away on its own.
Regulators and policymakers have not fully confronted the question of whether that concentration becomes a problem: for competition, for governance, or for the diversity of values embedded in systems that will increasingly shape economic and social decisions. That is partly because answering it requires forecasting a future that remains unclear.