Google’s AI Overhaul Is Costing $100 Billion, and Shareholders Are Starting to Ask Hard Questions
Anyone paying attention in February 2023 will remember the moment: at a promotional event in Paris, Google’s brand-new chatbot, Bard, gave an incorrect answer to a straightforward question about the James Webb Space Telescope. Within hours, astronomers on Twitter flagged the error, bloggers piled on, and the market delivered its verdict. Alphabet’s stock fell by almost 8%, and roughly $100 billion in market value evaporated before the close. At the time it looked like a bad day. In retrospect it looks more like a prologue.
Three years later, the hundred-billion figure is still around; only the category has changed. What was once a one-day market-cap wipeout is now, in effect, the annual admission fee for the frontier AI race. Demis Hassabis, the head of Google DeepMind, has said publicly that achieving what he calls “full AI” will be extremely expensive, possibly exceeding that same hundred billion. Investors have begun to notice that the line between a strategic commitment and an embarrassing loss is getting harder to draw.
| Item | Detail |
| --- | --- |
| Parent Company | Alphabet Inc. |
| CEO | Sundar Pichai |
| AI Chief | Demis Hassabis, CEO of Google DeepMind |
| Notable Early Stumble | Bard launch error (Feb 2023) erased ~$100 billion in market cap |
| Projected AI Spend (per Hassabis) | $100 billion-plus to reach “full AI” |
| Big Four Hyperscaler CapEx (2026) | Expected to exceed $650 billion combined |
| OpenAI Projected Compute Spend by 2028 | Roughly $121 billion |
| Inference Share of AI Compute (by 2026) | Projected around 65% |
| Google Search Market Share | Over 80% (vs. Bing at ~9%) |
| Hardware Bet | Custom TPU chips vs. reliance on NVIDIA GPUs |
| Relevant Regulator | U.S. Securities and Exchange Commission |
| Shareholder Concern | Long payback horizon, inference costs, margin compression |
By almost any historical standard, the expenditure is exceptional. Alphabet, Microsoft, Amazon, and Meta—the four largest hyperscalers—are expected to spend over $650 billion on capital projects in 2026 alone. Google accounts for a sizable portion of that, money that goes toward TPU clusters, cooling infrastructure, power contracts, fiber, and the kind of dense, humming data centers that need their own substations. The scale registers physically when you drive past one in Iowa or The Dalles, Oregon. These aren’t workplaces. They are industrial sites that happen to issue corporate badges.

For years, the software industry seemed magical because it appeared to defy gravity: gross margins above 80%, almost no marginal costs, effectively infinite scale. AI breaks that model. It is compute-bound, energy-hungry, and capital-intensive, resembling the telecom buildouts of the late 1990s more than classic Silicon Valley. In a recent article, John Furrier called this “Infrastructure Gravity”—a useful term for an unsettling reality. Every free ChatGPT prompt, every Gemini query, every AI Overview on Google Search carries a real dollar cost, and that cost does not shrink just because the answer arrives faster.
Inference is the hidden issue that investors are raising more frequently on earnings calls. Training a model generates the headlines and the documentary voiceover. The money silently disappears in inference—the true ongoing cost of answering every question, every time. According to analysts, serving AI responses now accounts for more than half of the major providers’ AI-related spending. Inference may make up about 65% of all AI computation by 2026, and it could account for 80–90% of total cost over a model’s lifetime. GPT-4 reportedly cost about $150 million to train but generated roughly $2.3 billion in inference expenses in just two years. A fifteen-times multiplier. It’s the kind of math that makes a CFO stare at a wall for a while.
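As a back-of-envelope check of the arithmetic above, here is a quick sketch using the reported estimates from the article (these are third-party figures, not audited numbers):

```python
# Reported estimates cited in the article, not measured data.
TRAINING_COST = 150e6    # ~$150 million to train GPT-4 (reported)
INFERENCE_COST = 2.3e9   # ~$2.3 billion in inference over two years (reported)

# How many times over inference has already repaid (or outrun) training cost.
multiplier = INFERENCE_COST / TRAINING_COST

# Inference as a share of total lifetime spend so far.
inference_share = INFERENCE_COST / (TRAINING_COST + INFERENCE_COST)

print(f"inference-to-training multiplier: {multiplier:.1f}x")      # ~15.3x
print(f"inference share of total cost so far: {inference_share:.0%}")  # ~94%
```

Notably, even this crude two-year snapshot already puts inference at around 94% of total spend, above the 80–90% lifetime range analysts cite for models in general.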
Listening to Alphabet’s recent earnings calls, you get the sense that executives are still telling a 2023 story while the numbers tell a 2026 one. Pichai keeps returning to the idea that AI will transform everything—Search, Maps, Workspace. And it probably will. But the transformation narrative alone no longer satisfies investors. They want to know when spending plateaus. They want to know when AI Overviews—which Google quietly scaled back in 2024 after embarrassing mistakes—will actually generate more revenue than they consume. And they want to know whether the moat around Search, still more than 80% of the market, actually holds as OpenAI and Anthropic keep finding ways to siphon off particular verticals.
It’s hard to ignore how much the atmosphere has changed since Microsoft’s Bing-ChatGPT moment in early 2023. Back then, the story was existential: Google was about to be disrupted. When the disruption didn’t fully materialize, Google looked fortunate. Then the cost of staying ahead began to mount, and now Google looks expensive. Three distinct narratives in roughly 36 months, each told with total conviction. The stock, meanwhile, has largely kept rising—which may not be entirely healthy, but it does say something about investor discipline over this period.
Where this ends up is genuinely unclear. If Alphabet’s TPU program can bend the cost curve faster than demand scales, the spending starts to look like a moat. If it can’t, the hundred-billion figure becomes a recurring line item rather than a one-time wager, and the company shares an increasingly painful margin picture with every other hyperscaler. For now, only a minority of shareholders are raising serious concerns—but they are louder than they were a year ago, and they will be louder still next year. The question no one can yet answer is whether anyone in the Plex is really listening, or just running the numbers and hoping the models get cheaper fast enough.