The gap between AI investment and the real value it generates — and what companies can do to be in the 5% that works
The industry invests more than $500 billion annually in AI infrastructure. Only 5% of companies generate real value at scale.
Hyperscalers spend >$500B/year on AI. Meta: $115–135B. Microsoft: $120B+. But only 5% of companies create value at scale (BCG). Those that succeed focus on 3–4 use cases, measure results, and do not automate without a plan. Those that fail scatter resources across 6+ initiatives without clear ROI.
>$500B
Annual hyperscaler capex on AI
5%
Companies with real value at scale
75%
Projects without expected ROI
500–1000%
Average error in cost estimation
| Period | AI funding | Share of total |
|---|---|---|
| 2023 | $75B | 30% |
| 2024 | $131B | 45% |
| 2025 | $259B | 61% |
| Q1 2026 (February only) | $189B | 83% |
Investment is ahead of technical improvement and monetisation
| Series | Trend | Curve shape |
|---|---|---|
| Investment (capex) | Strong acceleration | Exponential, rising |
| Enterprise adoption | Sustained growth | Linear, rising |
| Sustainable monetisation | Slow growth | Slow linear |
More than $500 billion annually. That is what the AI industry invests in infrastructure. Meta has raised its capex guidance to $115–135 billion. Microsoft exceeds $120 billion annualised. Amazon approaches $200 billion.
Only 5%. That is the share of companies that generate real value at scale with AI.
That figure, published by BCG in 2025, should be the first slide in any board meeting evaluating AI investments. Not because AI does not work — it does — but because the distance between “the technology works” and “the investment has a return” is enormous, and most organisations cross it badly.
The concentration of capital in AI has no precedent in the history of technology:
$189 billion in a single month. In February 2026, 83% of global AI funding was concentrated in three companies. It is the most extreme concentration in the history of the technology sector.
More is being invested than ever. Less than expected is being obtained. The gap between capital and return does not close — it widens.
What defines the current situation is that three fundamental curves are advancing at very different speeds:
Technical improvement — It is decelerating. Each performance leap costs more and is more incremental. The step from GPT-4 to GPT-5 cost significantly more than from GPT-3 to GPT-4, with proportionally smaller improvements.
Adoption — It is rapid but superficial. Many companies use AI, but few integrate it into critical business processes. The adoption gap between large companies and SMEs is twice as wide as it was for previous technologies.
Sustainable monetisation — It is the slowest. Most generative AI business models still do not demonstrate healthy margins at scale. The case of Sora (total revenue of $2.1 million against daily costs of $15 million before its shutdown in March 2026) is a brutal reminder of how hard it is to convert technical capability into a viable business.
When investment is ahead of all three curves, there is a problem. Not necessarily a bubble — hyperscaler capital is backed by cash, not debt — but an imbalance that sooner or later corrects itself.
Analysing documented failure patterns, the causes repeat with a revealing consistency.
Companies that fail try to address 6 or more use cases simultaneously. Those that succeed focus on 3–4 and take them to production before expanding.
The temptation to “do something with AI in every department” is the most expensive way to achieve nothing.
Many AI projects are launched without a clear success metric. “Improve productivity” is not a metric. “Reduce claims processing time from 4 hours to 20 minutes” is.
Without measurement, there is no possible optimisation.
The API price is the visible part of the iceberg. Below is everything else:
The API bill can be 20% of the real cost. The other 80% does not appear in any pitch deck.
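The 20/80 split above can be made concrete with a simple cost-of-ownership tally. The line items and figures below are illustrative assumptions, not real data; the point is only that when the hidden items are added up, the API bill shrinks to a small fraction of the total.

```python
# Hypothetical annual cost breakdown for a single AI feature, illustrating
# the claim that the API bill can be only ~20% of the real total cost.
# All figures are made-up assumptions for the sketch.

def total_cost_of_ownership(costs: dict[str, float]) -> tuple[float, float]:
    """Return (total annual cost, API share of the total)."""
    total = sum(costs.values())
    api_share = costs["api_usage"] / total
    return total, api_share

annual_costs = {
    "api_usage":        100_000,  # the visible tip of the iceberg
    "engineering":      180_000,  # integration, prompt/eval work, maintenance
    "data_preparation":  80_000,  # cleaning, labelling, pipelines
    "monitoring_evals":  60_000,  # quality tracking, regression testing
    "training_change":   50_000,  # staff training and process change
    "compliance":        30_000,  # security review, audits, legal
}

total, api_share = total_cost_of_ownership(annual_costs)
print(f"Total: ${total:,.0f}, API share: {api_share:.0%}")
# → Total: $500,000, API share: 20%
```

Running the same tally on a real project, with real line items, is usually the fastest way to surface the 80% that never appears in the pitch deck.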
AI that eliminates tasks without offering alternatives to the people affected generates internal resistance, loss of institutional knowledge, and reputational risk.
Successful companies use AI to free up their teams’ time, not to eliminate them.
There is a tendency to use the most powerful available model for any task. But using GPT-5 to classify emails is like using a 40-tonne truck to go grocery shopping.
The difference in consumption — and cost — between a flash model and a frontier one can be 10× for the same result.
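Right-sizing can be operationalised as a routing rule: send each task to the cheapest tier that can handle it, rather than defaulting to the frontier model. The tier names, prices, and capability scores below are illustrative assumptions chosen so the flash-to-frontier gap matches the 10× figure above.

```python
# A minimal model-routing sketch: pick the cheapest tier whose capability
# meets the task's requirement. Prices (per 1M tokens) and capability
# scores are illustrative assumptions, not real vendor pricing.

TIERS = [
    # (name, price per 1M tokens in $, capability score 0-10)
    ("flash",     1.00, 3),
    ("standard",  3.00, 6),
    ("frontier", 10.00, 9),
]

def pick_model(required_capability: int) -> tuple[str, float]:
    """Return the cheapest tier that meets the required capability."""
    for name, price, capability in TIERS:
        if capability >= required_capability:
            return name, price
    # Fall back to the most capable tier if nothing qualifies
    name, price, _ = TIERS[-1]
    return name, price

# Classifying emails is a low-capability task: the flash tier suffices,
# at one tenth of the frontier price in this illustrative table.
assert pick_model(2) == ("flash", 1.00)
assert pick_model(8) == ("frontier", 10.00)
```

In practice the capability requirement per task type is set once, from evaluation data, and revisited as cheaper models improve.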
Companies that do generate value at scale with AI share a consistent pattern. It is not luck — it is method.
They focus. 3–4 well-defined use cases, with clear metrics, taken to full production before adding more.
They measure obsessively. Not just model performance — they measure business impact. Time saved. Errors avoided. Incremental revenue. Total cost of ownership.
They invest in people. 70% of their resources go to people and processes, not technology. AI is the tool; the team is what makes it work.
They start small. Prototype in weeks, pilot in months, scale in quarters. They do not try to transform the entire organisation at once.
They choose the right model. Not the most powerful — the most appropriate for each task. A well-tuned 3-billion-parameter model can outperform a poorly applied 400-billion-parameter one.
The success pattern is not spending more. It is focusing, measuring, and scaling only what works.
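The "measure obsessively" habit above reduces to a small calculation: put a money value on time saved, errors avoided, and incremental revenue, and compare it against the total cost of ownership. The use case and all figures below are made-up assumptions for illustration.

```python
# Illustrative sketch of measuring business impact rather than model
# performance: annual value created by a use case versus its total cost
# of ownership. Every figure here is an assumption for the example.
from dataclasses import dataclass

@dataclass
class UseCase:
    name: str
    hours_saved_per_year: float
    hourly_cost: float           # loaded cost of the staff time freed up
    errors_avoided_value: float  # annual value of mistakes prevented
    incremental_revenue: float
    annual_tco: float            # total cost, not just the API bill

    def annual_value(self) -> float:
        return (self.hours_saved_per_year * self.hourly_cost
                + self.errors_avoided_value
                + self.incremental_revenue)

    def roi(self) -> float:
        return (self.annual_value() - self.annual_tco) / self.annual_tco

claims = UseCase("claims triage", 6_000, 40.0, 50_000, 0.0, 200_000)
print(f"{claims.name}: value ${claims.annual_value():,.0f}, ROI {claims.roi():.0%}")
# → claims triage: value $290,000, ROI 45%
```

A use case whose ROI cannot be computed this way — because no one can fill in the inputs — is exactly the kind of project the 75% failure statistic is made of.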
The question “is AI in a bubble?” does not have a binary answer. Reality is more nuanced:
The most useful analogy is the fibre optic bubble of 1998–2000: the infrastructure proved valuable in the long term, but original investors lost money. AI will probably follow a similar pattern — the technology stays, but not all invested capital will have a return.
It is not the time to stop investing in AI. It is the time to invest with clear criteria. Every euro must have a defined return metric, a realistic timeline, and a plan B.
If you lead a company: Before approving an AI investment, demand answers to three questions: what business metric does it improve? What is the total cost of ownership (not just the API)? What happens to the people whose tasks are automated? If there is no clear answer to all three, the project is not ready.
If you are evaluating AI providers: Ask for energy consumption data per service. Under the European CSRD framework, your carbon footprint includes the services you contract. If a provider cannot tell you how much its service consumes, ask yourself why.
If you are a CTO or technical lead: Start with the smallest model that resolves the task. Scale only when the data justifies it. Measure the total cost — not just latency and accuracy, but also energy and maintenance.
If you are a professional concerned about your employment: AI does not replace all professionals, but it does transform most professions. Investing in understanding the tools — what they can do, what they cannot, what they really cost — is the best professional investment you can make right now.
For everyone: Take our sustainable AI maturity test to assess where your organisation stands and what steps to take.
The maturity test is designed to detect gaps in governance, focus and real execution capacity.