Abstract illustration of a forensic digital inventory — data fragments floating in dark void, only a few illuminated

Only 10 Real Measurements of AI Energy Consumption Exist in the World

A forensic inventory of everything we know — and don't know — about the energy artificial intelligence consumes

By AISHA · February 26, 2026 · 6 min read

Of all the figures on AI energy consumption circulating in media, reports, and regulatory debates, only a tiny handful are real measurements. The rest are estimates with error margins ranging from x2 to x27.

Only one direct production measurement published by an AI provider exists: Google measured 0.24 Wh per Gemini query. OpenAI gave a number without methodology. The rest — Anthropic, Midjourney, Suno, Runway, xAI — have published nothing at all. Everything else you read about AI consumption is academic estimates or extrapolations.

  • 1 direct production measurement published by a provider
  • ~10 real measurements in total (academic + production)
  • 0 data points published by Anthropic, Midjourney or Suno
  • x27 maximum error margin in third-party estimates

Quality of AI energy consumption data (April 2026), 20 cases analysed:

  • Direct production measurement (1): Google Gemini
  • Corporate statement without methodology (1): OpenAI ChatGPT
  • Academic measurements on standardised hardware (5): AI Energy Score (Hugging Face), Bertazzini et al. (image), Luccioni (video), Passoni et al. (audio), ML.Energy (Michigan)
  • Point measurements of specific models (3): DeepSeek-R1 (33.6 Wh), o3 (39.2 Wh), SDXL (1.64 Wh)
  • Commercial providers with no data at all (10): Anthropic, Midjourney, Stability AI, Suno, Udio, ElevenLabs, Runway, Pika, xAI, MiniMax

Of all the figures on AI energy consumption circulating in media, academic reports, and regulatory debates, only a tiny handful are real measurements made in production environments. The rest — including virtually all data on GPT-5, Claude, Sora, Midjourney, DALL-E, Suno, and all commercial services — are estimates with error margins ranging from x2 to x27.

We have done what no one seems willing to do: a forensic inventory, data point by data point, of everything that has actually been measured. The result is sobering.


The only real data point: Google and its 0.24 Wh

In the entire artificial intelligence industry, there is one single direct production measurement published by an AI provider.

Google revealed in August 2025 that a median text query to Gemini Apps consumes 0.24 Wh and generates approximately 0.03 gCO₂e — using market-based accounting with clean energy certificates. The data was published in a reviewable paper (arXiv:2508.15734) with explicit methodology.

0.24 Wh. One number. One company. In an industry that moves more than $500 billion annually in infrastructure.

It is important to understand what this number covers and what it does not: it refers to median text queries. It does not include image generation with Imagen 3, it does not include video with Veo, it does not include Gemini Deep Research. It is a partial figure — but it is the only one that deserves the qualifier of real measurement.
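The two published figures are also internally checkable: together they imply a specific market-based grid intensity. A minimal sanity check in Python:

```python
# Sanity check on Google's published Gemini figures (arXiv:2508.15734):
# 0.24 Wh and ~0.03 gCO2e per median text query, market-based accounting.
ENERGY_WH = 0.24      # median energy per text query
EMISSIONS_G = 0.03    # reported gCO2e per query

# Implied market-based grid intensity in gCO2e per kWh
implied_intensity = EMISSIONS_G / (ENERGY_WH / 1000)
print(f"Implied intensity: {implied_intensity:.0f} gCO2e/kWh")  # → 125
```

An implied ~125 gCO2e/kWh is well below typical grid averages, which is consistent with the clean energy certificates on which the paper's market-based accounting relies.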

In an industry that promises to transform the world, only one provider has publicly measured how much energy its main product consumes.


The unverified claim: OpenAI and its 0.34 Wh

Sam Altman stated in June 2025 that the average ChatGPT query consumes 0.34 Wh. He published it in a personal blog post, not a paper. He published no methodology. He did not define what constitutes an “average query”. He did not specify whether it includes images, Deep Research, or code interpreter.

The figure is plausible — it is in the same order of magnitude as Google’s — but it is not verifiable. This has practical importance: when a company that bills billions of dollars and has just launched GPT-5, GPT-5.4, Sora 2, and Codex does not publish a reproducible methodology, it is not offering transparency. It is offering a marketing statement.

And things get complicated when we look at the detail:

  • GPT-5 has a median estimate of ~18.9 Wh per query according to the URI AI Lab — 63 times the base reference
  • GPT-5.4 with active reasoning can reach 4–18 Wh per query
  • Sora 2 consumed ~1,000 Wh per 10-second clip before it was shut down

None of these figures come from OpenAI. All are third-party estimates.
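The multipliers quoted here and throughout the article can be reproduced against a ~0.3 Wh baseline text query; the baseline is an assumption inferred from the quoted "63 times" ratio (18.9 / 0.3 ≈ 63), not an OpenAI figure.

```python
# Reproducing the quoted multipliers for third-party estimates,
# against an ASSUMED ~0.3 Wh baseline text query.
BASELINE_WH = 0.3

estimates_wh = {
    "GPT-5 median (URI AI Lab)": 18.9,
    "DeepSeek-R1 long reasoning": 33.6,
    "OpenAI o3 long reasoning": 39.2,
}

for name, wh in estimates_wh.items():
    print(f"{name}: {wh} Wh, x{wh / BASELINE_WH:.0f} the baseline")
```

This prints x63, x112 and x131, matching the ratios cited in the rest of the article.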


The academic measurements: 5 islands of rigour

Outside the providers, a handful of research groups have done what the industry refuses to do: actually measure.

1. AI Energy Score — Hugging Face

The most ambitious systematic measurement project. Hugging Face launched v1 in February 2025 and v2 in December 2025. It has measured the real consumption of ~205 open-source models by running them on standardised hardware (NVIDIA H100).

The problem: it only measures open models. GPT-5, Claude, Gemini, Midjourney — the ones used by the vast majority of people — are excluded by design.

2. Bertazzini et al. — The hidden cost of the image

Published in 2025 (arXiv:2506.17016), this team measured 17 diffusion models for image generation on an RTX 4090. They found a difference of 46 times between the most efficient and the least efficient model.

x46. That means choosing the wrong model can multiply your consumption by a factor of 46 for an image of similar quality.

3. Luccioni & Delavande — The energy budget of video

They measured 7 video generation models on H100 (arXiv:2509.19222). Their data confirmed what the industry prefers not to say: generating video with AI consumes between x300 and x3,000 more than a text query.
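Applied to the same assumed ~0.3 Wh text query, that multiplier range implies a concrete per-clip energy budget:

```python
# Implied energy per AI-generated video clip, given the x300-x3,000
# range measured on H100 and an ASSUMED 0.3 Wh baseline text query.
BASELINE_TEXT_WH = 0.3
low, high = 300 * BASELINE_TEXT_WH, 3000 * BASELINE_TEXT_WH
print(f"Implied per-clip energy: {low:.0f}-{high:.0f} Wh")  # → 90-900 Wh
```

The upper end of that range sits just below the ~1,000 Wh third-party estimate quoted earlier for a 10-second Sora 2 clip.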

4. Passoni et al. — The audio no one measures

The first serious study of audio generation consumption (arXiv:2505.07615). They found that Tango2 consumes ~2 Wh per 10-second clip and AudioLDM about ~0.25 Wh. These are the only reference data for the entire generative audio industry — Suno, Udio, and ElevenLabs have published absolutely nothing.

5. ML.Energy — University of Michigan

Continuous energy efficiency benchmark for machine learning models. Provides reference data for standard hardware, but — again — only for models they have access to, not for closed commercial services.


Point measurements: 3 isolated data points

Beyond the systematic studies, there are three direct measurements of specific models that serve as anchor references:

  • DeepSeek-R1 (long reasoning): 33.6 Wh per long query — direct measurement, high confidence. That is 112 times a simple text query.

  • OpenAI’s o3 (long reasoning): 39.2 Wh — direct measurement. 131 times the base reference.

  • Stability’s SDXL (image on H100): 1.64 Wh per image — benchmark measured by Hugging Face. The best open anchor point for image generation.

And one particularly revealing measurement:

  • Claude Code (median programming session): 41 Wh — measured by Simon P. Couch in January 2026. 137 times a simple query. A developer using code agents during a full workday consumes around 1,300 Wh — the equivalent of a dishwasher cycle.
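The daily figure follows from simple arithmetic. A sketch assuming roughly 32 median sessions over a heavy agent-driven workday; only the 41 Wh median is a measured value, the session count is an assumption:

```python
SESSION_WH = 41          # median Claude Code session (Couch, Jan 2026)
SESSIONS_PER_DAY = 32    # hypothetical heavy workday; not a measured value

daily_wh = SESSION_WH * SESSIONS_PER_DAY
print(f"~{daily_wh} Wh per day")  # → ~1312 Wh, about one dishwasher cycle
```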

The black hole: what we do NOT know

Now comes the uncomfortable part. This is what absolutely no one has published:

  • Anthropic (Claude, Claude Code): zero energy consumption data. Ever. The most recent environmental report does not include per-query telemetry.

  • Midjourney: zero data. Closed architecture, no public benchmarking.

  • Suno, Udio, ElevenLabs Music: zero data. The entire generative audio industry operates in complete opacity.

  • Runway, Pika, Kling, Hailuo: zero data. The video generators replacing Sora publish nothing about how much each clip consumes.

  • xAI (Grok): contested data. Its Colossus campus in Memphis runs on 35 unpermitted gas turbines, but they have not published per-inference consumption data.

  • Adobe (Firefly): has generated more than 24 billion assets but refuses to provide disaggregated consumption telemetry per image.

Imagine if the car industry sold vehicles without a fuel consumption label. If appliance manufacturers didn’t publish how much electricity they use. That is exactly what is happening with AI in 2026.


The numbers that should exist and don’t

To gauge the scale of the information vacuum, these are the data points the industry could publish today — because they have the telemetry to do so — and chooses not to publish:

  • Wh per query by type (text, image, video, audio, code)
  • Wh per session for autonomous agents
  • Total inference consumption per service per month/year
  • Water consumption for cooling per data centre
  • Real energy mix by service region (not the laundered global average)
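None of this requires new technology. As a sketch, a per-service disclosure record could be as small as the following; the field names are hypothetical, not any provider's actual schema:

```python
from dataclasses import dataclass

@dataclass
class EnergyDisclosure:
    """Hypothetical per-service energy disclosure record."""
    service: str
    modality: str               # "text" | "image" | "video" | "audio" | "code"
    median_wh_per_query: float
    grid_gco2_per_kwh: float    # intensity behind the CO2e figure
    methodology_url: str        # reviewable paper or spec

# The only record that could be filled in today from provider-published data
# (125 gCO2e/kWh is implied by 0.03 g / 0.24 Wh, market-based):
gemini = EnergyDisclosure("Gemini Apps", "text", 0.24, 125.0,
                          "https://arxiv.org/abs/2508.15734")
print(gemini.median_wh_per_query)  # → 0.24
```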

Google proved that it is possible to publish this data without losing market share. When it revealed its 0.24 Wh, it did not lose users. It gained credibility.

Global data centre consumption will grow from 415 TWh in 2024 to between 945 and 1,580 TWh in 2030 according to the IEA. That is equivalent to adding Japan’s electricity consumption to the global system. And the majority of that growth will come from AI.

Making informed decisions about this scale of impact with only 10 real measurements is not difficult. It is impossible.


What can I do?

  • If you are an AI user: Demand transparency. When choosing a service, ask: does it publish consumption data? Our footprint calculator gives you an estimate based on what little is known, but it would be better if providers gave you real data.

  • If you lead a company: Under the European CSRD framework, your carbon footprint includes the AI services you contract (Scope 3). If your provider does not give you consumption data, you are flying blind in your sustainability reporting.

  • If you are a developer: Use models with published measurements when you can. Open-source models measured by AI Energy Score and ML.Energy give you real data. Closed commercial services give you promises.

  • If you work in regulation: The EU AI Act (Article 40) already provides for energy transparency requirements, but the standards will not be binding until August 2028. Measurement is possible today, without new technology. What is missing is legal obligation.
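For the CSRD point above, even a rough Scope 3 estimate shows why the x27 error margin matters in practice. A sketch with hypothetical query volumes and grid intensity:

```python
def scope3_kgco2e(queries_per_year: float, wh_per_query: float,
                  grid_gco2_per_kwh: float) -> float:
    """Annual AI inference emissions in kgCO2e (illustrative only)."""
    kwh = queries_per_year * wh_per_query / 1000
    return kwh * grid_gco2_per_kwh / 1000

# Hypothetical company: 5M text queries/year on a 250 gCO2e/kWh grid.
best = scope3_kgco2e(5_000_000, 0.3, 250)        # provider-claimed ~0.3 Wh
worst = scope3_kgco2e(5_000_000, 0.3 * 27, 250)  # x27 estimate error margin
print(f"{best:.0f}-{worst:.0f} kgCO2e/year")     # → 375-10125
```

With only estimates to work from, the same reported usage can land anywhere in a 27-fold range, which is exactly the "flying blind" problem described above.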
