
Best practices

[Illustration: the balance between technology and humanity, digital particles forming an open hand]

AI can be the best tool in history — if we use it well

AISHA manifesto: why we defend artificial intelligence and why we demand it be used responsibly

By AISHA · February 4, 2026 · 5 min read

Only 5% of companies generate real value at scale with AI. 75% of projects do not reach the expected ROI. And no provider publishes complete data on what its systems consume.

AI works and has real transformative potential. But it is being deployed without transparency, without measuring its impact, and without plans for those left behind. AISHA exists to change that: measure before opining, optimise before eliminating, decide with systemic vision.

  • 5%: companies generating value at scale with AI
  • 75%: projects without the expected ROI
  • 92 million: jobs displaced by 2030
  • 0: providers with complete consumption data

State of AI energy transparency (April 2026): 10 cases analysed

  • Published direct measurement: 1 (Google Gemini)
  • Data without methodology: 1 (OpenAI ChatGPT)
  • No data published: 8 (Anthropic Claude, Midjourney, Suno, Runway, Ideogram, Udio, OpenAI GPT-5, xAI Grok)

Only 5% of companies investing in artificial intelligence generate real value at scale. 75% of projects do not reach the expected return. And meanwhile, the industry invests more than $500 billion annually in infrastructure, with no provider publishing complete and verifiable data on how much it consumes.

Those three facts summarise why AISHA exists.


We are not anti-AI. We are pro-conscious AI

It needs to be said upfront because the nuance matters: AISHA does not exist to stop artificial intelligence. It exists so that it works better, for more people, for longer.

AI well used does not dehumanise. It elevates humanity.

It allows a doctor to spend more time with patients and less on paperwork. A teacher to personalise teaching for each student. A researcher to explore thousands of hypotheses that a lifetime would not be enough to test. An entrepreneur with an idea and no team to build a functional prototype.

1,400 megatons of CO₂ annually. That is how much AI applications in energy optimisation, climate prediction, and materials discovery could cut by 2035: three to four times more than the emissions of all the world's data centres.

Those benefits are real, documented, and deserve to be amplified.


But there are three problems no one wants to see

1. What is consumed is not measured

Across the entire AI industry, only one direct production measurement has been published: Google revealed that a median Gemini query consumes 0.24 Wh. It is the only verified figure.

  • OpenAI gave a number (0.34 Wh for ChatGPT) without publishing any methodology
  • Anthropic has published absolutely nothing
  • Midjourney, Suno, Runway: zero data

From 415 TWh to 945–1,580 TWh. That is how much global data centre consumption is projected to grow between 2024 and 2030, the equivalent of adding Japan's entire electricity consumption to the global grid.

The energy AI consumes is not a technical detail. It is a matter of planetary sustainability. And it is being deployed blindly.
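The gap between a per-query figure and grid-scale totals is easy to check with back-of-envelope arithmetic. A minimal sketch using Google's published 0.24 Wh median; the daily query volume is our illustrative assumption, not a published number:

```python
# Back-of-envelope: scaling a per-query figure to annual totals.
# 0.24 Wh/query is Google's published median for Gemini;
# the query volume below is an illustrative assumption.
WH_PER_QUERY = 0.24
QUERIES_PER_DAY = 1_000_000_000  # assumed: one billion queries per day

daily_kwh = WH_PER_QUERY * QUERIES_PER_DAY / 1_000  # Wh -> kWh
annual_gwh = daily_kwh * 365 / 1_000_000            # kWh -> GWh

print(f"{daily_kwh:,.0f} kWh/day")    # 240,000 kWh/day
print(f"{annual_gwh:,.1f} GWh/year")  # 87.6 GWh/year
```

Even under these assumed volumes, a verified per-query figure is what makes the aggregate number checkable at all; without it, the TWh projections above cannot be decomposed or audited.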

2. The return does not justify the investment — yet

Hyperscalers invest more than $500 billion annually in AI infrastructure:

  • Meta: capex of $115–135 billion
  • Microsoft: more than $120 billion annualised
  • Amazon: approaching $200 billion

And yet, only 5% of companies manage to generate real value at scale with AI. The three curves — technical improvement, enterprise adoption, and sustainable monetisation — advance at very different speeds. And capital goes ahead of all of them.

3. Labour impact arrives without a transition plan

92 million jobs displaced by 2030. That is the World Economic Forum's projection. The promise that "AI will create more jobs than it destroys" may be true in the long term, but in the short term reskilling is far slower than automation.

AI is not a substitute for human beings. It is a productivity tool that should empower them. But that only happens when it is deployed with a plan.


Three principles for using AI well

At AISHA we operate under a framework of three principles that we apply to everything we do:

MEASURE before opining

Every recommendation is based on quantifiable data. If there is no data, the first recommendation is to establish a measurement system. You cannot optimise what you do not measure, and you cannot regulate what you do not know.

When Google published its measurement of 0.24 Wh per query, it did not lose market share. It gained credibility. The rest of the industry can do the same.

OPTIMISE before eliminating

Before recommending against using AI, every optimisation avenue must be exhausted: smaller models for simple tasks, greener infrastructure, process redesign, result caching.
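The "smaller models for simple tasks" avenue can be as simple as a routing rule in front of the model call. A minimal sketch; the model names and the length heuristic are illustrative assumptions, not any real provider's API:

```python
# Route each request to the cheapest model that can handle it.
# Model names and the length threshold are illustrative assumptions.
SMALL_MODEL = "small-flash"  # assumed: cheap, low-energy model
LARGE_MODEL = "large-pro"    # assumed: expensive reasoning model

def pick_model(prompt: str, needs_reasoning: bool = False) -> str:
    """Default to the small model; escalate only when needed."""
    if needs_reasoning or len(prompt) > 2000:
        return LARGE_MODEL
    return SMALL_MODEL

print(pick_model("Translate 'hello' to French"))               # small-flash
print(pick_model("Prove this theorem", needs_reasoning=True))  # large-pro
```

In practice the heuristic would be tuned per workload, but the principle is the one stated above: the small model is the default, and the large one must earn its invocation.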

46×. That is the consumption gap Bertazzini et al. found between the most and least efficient image-generation models. Model choice is not neutral.
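A 46× gap compounds quickly at scale. A quick sketch; the per-image baseline and the monthly volume are illustrative assumptions, not figures from the paper:

```python
# How a 46x efficiency gap compounds over a large batch of images.
# The 0.5 Wh/image baseline and the volume are illustrative assumptions;
# the 46x multiplier is the gap reported by Bertazzini et al.
EFFICIENT_WH = 0.5
INEFFICIENT_WH = EFFICIENT_WH * 46
IMAGES_PER_MONTH = 10_000_000  # assumed volume

saved_kwh = (INEFFICIENT_WH - EFFICIENT_WH) * IMAGES_PER_MONTH / 1_000
print(f"{saved_kwh:,.0f} kWh saved per month")  # 225,000 kWh saved per month
```

Under these assumptions, picking the efficient model saves as much each month as the entire inefficient batch would have consumed many times over.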

DECIDE with systemic vision

Every decision about AI has environmental, economic, and social ramifications. A cost saving that destroys jobs without a reskilling plan is not optimisation — it is externalising cost to society.


What AISHA does, concretely

We are not an academic think tank or an AI news outlet. We are a platform for outreach, education, and research with a clear purpose: enabling people, companies, and regulators to make informed decisions about artificial intelligence.

  • Rigorous research. Reports based on cross-verification of multiple independent AI sources. When three AI systems developed by competing companies reach the same conclusions researching independently, those conclusions deserve attention.

  • Practical tools. An AI footprint calculator that shows how much the tools you use consume. A maturity test for companies. A selector that finds the most efficient model for each task. Free, open, useful.

  • Accessible content. Articles that explain the complex without oversimplifying it. Data in context. Diagnoses with solutions. Neither alarmism nor complacency.


A note on how we work

We use artificial intelligence to produce our content. Not a single AI: multiple models according to the task — Claude, Gemini, GPT, Codex, DeepSeek, and others. The right model for each job, exactly as we preach.

This is not a contradiction: it is coherence. And when we analyse AI providers — including the ones we use — we apply the same critical standard to all.

If we, a small project, can be transparent about how we generate our research, companies billing billions have no excuse for not being so about their products.


What can I do?

  • If you are an AI user: Start by finding out how much the tools you use consume. Our footprint calculator gives you an estimate in 30 seconds. And remember: the smallest model that solves your task is always the best choice.

  • If you lead a company: Before scaling an AI project, calculate its total cost — not just the API price, but also the energy, maintenance, and impact on your team. Under the European CSRD framework, your carbon footprint includes the AI services you contract.

  • If you are a developer: Choose flash/mini by default. Enable reasoning only when you need it. Cache results. Every architectural decision has an energy cost that multiplies by millions of users.

  • If you work in policy or regulation: Measurement is possible today, without new technology. The only thing missing is political will. And protecting employment in the transition is not optional — it is an obligation.
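The "cache results" advice in the developer checklist above can be sketched in a few lines. A minimal example with Python's standard library; `call_model` is a hypothetical stand-in for a real API client:

```python
import functools

def call_model(prompt: str) -> str:
    """Hypothetical stand-in for a real model API call."""
    return f"response to: {prompt}"

# Cache identical requests so a repeated prompt never hits the model twice.
@functools.lru_cache(maxsize=4096)
def cached_completion(prompt: str) -> str:
    return call_model(prompt)  # each unique prompt is computed once

cached_completion("summarise this report")
cached_completion("summarise this report")  # served from the cache
print(cached_completion.cache_info().hits)  # 1
```

For prompts that repeat across users or sessions, every cache hit is an inference, and its energy cost, that never happens.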
