[Figure: ascending investment bars versus a flat return line — the gap between massive AI investment and real return]

75% of AI projects generate no return. How to avoid being one of them

The gap between AI investment and the real value it generates — and what companies can do to be in the 5% that works

By AISHA · February 19, 2026 · 5 min read

The industry invests more than $500 billion annually in AI infrastructure. Only 5% of companies generate real value at scale.

Hyperscalers spend >$500B/year on AI. Meta: $115–135B. Microsoft: $120B+. But only 5% of companies create value at scale (BCG). Those that succeed focus on 3–4 use cases, measure results, and do not automate without a plan. Those that fail scatter resources across 6+ initiatives without clear ROI.

  • >$500B — annual hyperscaler capex on AI
  • 5% — companies with real value at scale
  • 75% — projects without the expected ROI
  • 500–1,000% — average error in cost estimation

Venture capital concentration in AI (2023–2026)

  • 2023: $75B (30% of total)
  • 2024: $131B (45% of total)
  • 2025: $259B (61% of total)
  • Q1 2026: $189B (February alone; 83% of total)

Three curves at different speeds

Investment is ahead of technical improvement and monetisation

Series | Trend | Reading
Investment (capex) | Strong acceleration | exponential, rising
Enterprise adoption | Sustained growth | linear, rising
Sustainable monetisation | Slow growth | slow, linear

More than $500 billion annually. That is what the AI industry invests in infrastructure. Meta has raised its capex guidance to $115–135 billion. Microsoft exceeds $120 billion annualised. Amazon approaches $200 billion.

Only 5%. That is the share of companies that generate real value at scale with AI.

That figure, published by BCG in 2025, should be the first slide in any board meeting evaluating AI investments. Not because AI does not work — it does — but because the distance between “the technology works” and “the investment has a return” is enormous, and most organisations cross it badly.


The numbers the industry prefers not to put together

What gets invested

The concentration of capital in AI has no precedent in the history of technology:

  • 2023: $75 billion in venture capital for AI (~30% of global VC)
  • 2024: $131 billion (45% of global VC)
  • 2025: $258.7 billion (61% of global VC, according to OECD)
  • Q1 2026: Just four mega-rounds (OpenAI, Anthropic, xAI, Waymo) captured ~65% of all venture capital for the quarter

$189 billion in a single month. In February 2026, 83% of that global funding was concentrated in three companies. It is the most extreme concentration in the history of the technology sector.

What gets obtained

  • Only 5% of companies generate substantial value at scale with AI (BCG, 2025)
  • 75% of AI projects do not reach the expected ROI
  • Estimation errors: actual costs of AI projects exceed initial budgets by 500–1,000%
  • Hidden costs: data preparation, maintenance, retraining, governance — can represent 60–80% of total cost of ownership

More is being invested than ever. Less than expected is being obtained. The gap between capital and return does not close — it widens.

Three curves at different speeds

What defines the current situation is that three fundamental curves are advancing at very different speeds:

  1. Technical improvement — It is decelerating. Each performance leap costs more and is more incremental. The step from GPT-4 to GPT-5 cost significantly more than from GPT-3 to GPT-4, with proportionally smaller improvements.

  2. Adoption — It is rapid but superficial. Many companies use AI, but few integrate it into critical business processes. The adoption gap between large companies and SMEs is twice as wide as it was for previous technology waves.

  3. Sustainable monetisation — It is the slowest. Most generative AI business models still do not demonstrate healthy margins at scale. The case of Sora — total revenues of $2.1 million against daily costs of $15 million before its shutdown in March 2026 — is a brutal reminder of how hard it is to convert technical capability into a viable business.

When investment is ahead of all three curves, there is a problem. Not necessarily a bubble — hyperscaler capital is backed by cash, not debt — but an imbalance that sooner or later corrects itself.


Why 75% fail

Analysing documented failure patterns, the causes repeat with a revealing consistency.

1. Scattering effort

Companies that fail try to address 6 or more use cases simultaneously. Those that succeed focus on 3–4 and take them to production before expanding.

The temptation to “do something with AI in every department” is the most expensive way to achieve nothing.

2. Not measuring the return

Many AI projects are launched without a clear success metric. “Improve productivity” is not a metric. “Reduce claims processing time from 4 hours to 20 minutes” is.

Without measurement, there is no possible optimisation.
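A success metric worth funding is concrete: a baseline, a target, and a unit. A minimal sketch of that idea, using the hypothetical claims-processing figures above:

```python
# A fundable success metric is concrete: baseline, target, unit.
# Figures are the illustrative claims-processing example from the text.

from dataclasses import dataclass

@dataclass
class SuccessMetric:
    name: str
    baseline: float   # current value
    target: float     # value the project must reach
    unit: str

    def improvement(self) -> float:
        """Relative improvement over the baseline, as a fraction."""
        return (self.baseline - self.target) / self.baseline

claims = SuccessMetric("claims processing time", baseline=240, target=20, unit="minutes")
print(f"{claims.improvement():.0%}")  # 92%
```

"Improve productivity" cannot be expressed in this form; "reduce processing time from 240 to 20 minutes" can, which is exactly the test the article proposes.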

3. Underestimating hidden costs

The API price is the visible tip of the iceberg. Below the waterline is everything else:

  • Data preparation: $100,000–$380,000 for an average project
  • Annual maintenance: 15–30% of infrastructure cost
  • Retraining: every time data or context changes
  • Governance: regulatory compliance, bias, privacy
  • Technical debt: fragile integrations that accumulate cost over time

The API bill can be 20% of the real cost. The other 80% does not appear in any pitch deck.
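The 20/80 split above can be turned into a back-of-the-envelope calculator. A minimal sketch, assuming the article's ratio that the API bill is roughly 20% of the real cost (the figures are illustrative, not a budgeting tool):

```python
# Rough total-cost-of-ownership sketch for an AI project, using the
# article's illustrative ratio: API spend ~20% of the real cost.

def estimate_tco(annual_api_cost: float) -> dict:
    """Back-of-the-envelope TCO when the API bill is ~20% of the total."""
    total = annual_api_cost * 5          # API ~20% of real cost => total ~5x
    hidden = total - annual_api_cost     # the other ~80%: data prep,
                                         # maintenance, retraining, governance
    return {"api": annual_api_cost, "hidden": hidden, "total": total}

print(estimate_tco(50_000))  # {'api': 50000, 'hidden': 200000, 'total': 250000}
```

The point of the exercise is the order of magnitude: a $50,000 API line item implies a project closer to a quarter of a million dollars once the hidden categories are counted.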

4. Automating without a human plan

AI that eliminates tasks without offering alternatives to the people affected generates internal resistance, loss of institutional knowledge, and reputational risk.

Successful companies use AI to free up their teams’ time, not to eliminate them.

5. Using the wrong model

There is a tendency to use the most powerful available model for any task. But using GPT-5 to classify emails is like using a 40-tonne truck to go grocery shopping.

The difference in consumption — and cost — between a flash model and a frontier one can be 10× for the same result.
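The 10× gap compounds quickly at volume. A minimal sketch with hypothetical per-token prices (the model names and rates are assumptions, not real pricing):

```python
# Illustrative cost of routing an email-classification workload to a small
# "flash" model versus a frontier model. Prices are hypothetical, chosen
# only to reflect the ~10x gap discussed in the text.

PRICE_PER_M_TOKENS = {"flash": 1.0, "frontier": 10.0}  # $ per million tokens

def monthly_cost(model: str, emails_per_day: int, tokens_per_email: int = 500) -> float:
    """Monthly token spend for a steady classification workload."""
    tokens = emails_per_day * 30 * tokens_per_email
    return tokens / 1_000_000 * PRICE_PER_M_TOKENS[model]

print(monthly_cost("frontier", 10_000))  # 1500.0
print(monthly_cost("flash", 10_000))     # 150.0
```

At 10,000 emails a day, picking the frontier model for a task a flash model handles equally well burns an extra ~$1,350 a month for no gain in output.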


What the 5% do differently

Companies that do generate value at scale with AI share a consistent pattern. It is not luck — it is method.

They focus. 3–4 well-defined use cases, with clear metrics, taken to full production before adding more.

They measure obsessively. Not just model performance — they measure business impact. Time saved. Errors avoided. Incremental revenue. Total cost of ownership.

They invest in people. 70% of their resources go to people and processes, not technology. AI is the tool; the team is what makes it work.

They start small. Prototype in weeks, pilot in months, scale in quarters. They do not try to transform the entire organisation at once.

They choose the right model. Not the most powerful — the most appropriate for each task. A well-tuned 3-billion-parameter model can outperform a poorly applied 400-billion-parameter one.

The success pattern is not spending more. It is focusing, measuring, and scaling only what works.


Bubble or transformation? The half-trillion dollar question

The question “is AI in a bubble?” does not have a binary answer. Reality is more nuanced:

  • The technology works. The models are capable, the use cases are real, the potential value is enormous.
  • The revenues exist. AI companies generate billions in real billing.
  • But capital is far ahead. Infrastructure investment far exceeds demonstrated demand. The gap between capex and revenues in the AI ecosystem is nearly $600 billion annually.

The most useful analogy is the fibre optic bubble of 1998–2000: the infrastructure proved valuable in the long term, but original investors lost money. AI will probably follow a similar pattern — the technology stays, but not all invested capital will have a return.

This is not the time to stop investing in AI. It is the time to invest with criteria. Every euro must have a clear return metric, a realistic timeline, and a plan B.


What can I do?

  • If you lead a company: Before approving an AI investment, demand answers to three questions: what business metric does it improve? What is the total cost of ownership (not just the API)? What happens to the people whose tasks are automated? If there is no clear answer to all three, the project is not ready.

  • If you are evaluating AI providers: Ask for energy consumption data per service. Under the European CSRD framework, your carbon footprint includes the services you contract. If a provider cannot tell you how much its service consumes, ask yourself why.

  • If you are a CTO or technical lead: Start with the smallest model that resolves the task. Scale only when the data justifies it. Measure the total cost — not just latency and accuracy, but also energy and maintenance.

  • If you are a professional concerned about your employment: AI does not replace all professionals, but it does transform most professions. Investing in understanding the tools — what they can do, what they cannot, what they really cost — is the best professional investment you can make right now.

  • For everyone: Take our sustainable AI maturity test to assess where your organisation stands and what steps to take.


Next step

Assess your organisation's maturity level before scaling AI.

The maturity test is designed to detect gaps in governance, focus and real execution capacity.

Take maturity test