The University of Rhode Island’s AI lab estimates that GPT-5 averages just over 18 Wh per query, so putting all of ChatGPT’s reported 2.5 billion daily requests through the model could push energy usage as high as 45 GWh a day.
A daily energy use of 45 GWh is enormous. A typical modern nuclear power plant produces between 1 and 1.6 GW of electricity per reactor, so data centers running OpenAI’s GPT-5 at 18 Wh per query could require the power equivalent of two to three nuclear reactors, an amount that could be enough to power a small country.
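The quoted figures are easy to sanity-check. A minimal sketch of the arithmetic, using only the article’s own estimates (the per-query and request-volume numbers are the article’s, not measured values):

```python
# Back-of-envelope check of the article's numbers.
QUERIES_PER_DAY = 2.5e9   # ChatGPT's reported daily request volume
WH_PER_QUERY = 18         # the URI lab's GPT-5 estimate

daily_gwh = QUERIES_PER_DAY * WH_PER_QUERY / 1e9  # Wh -> GWh
avg_gw = daily_gwh / 24                           # continuous average draw

print(f"{daily_gwh:.0f} GWh/day is {avg_gw:.2f} GW average")
# 45 GWh/day is 1.88 GW average, i.e. roughly two ~1 GW reactors
```

So the “two to three reactors” line follows directly from the 18 Wh estimate; if that input is wrong, the reactor comparison is wrong by the same factor.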
I have an extreme dislike for OpenAI, Altman, and people like him, but the reasoning behind this article is just stuff some guy has pulled from his backside. There are no facts here, it’s just “I believe XYZ” with nothing to back it up.
We don’t need to make up nonsense about the LLM bubble. There are plenty of valid criticisms as it is.
By circulating a dumb figure like this, all you’re doing is granting OpenAI the power to come out and say “actually, it only uses X amount of power. We’re so great!”, where X is a figure that on its own would seem bad, but compared to this inflated figure sounds great. Don’t hand these shitty companies a marketing win.
I think AI power usage has an upside. No amount of hype can pay the light bill.
AI is either going to be the most valuable tech in history, or it’s going to be a giant pile of ash that used to be VC capital.
That capital was ash earlier this year. The latest $40 Billion-with-a-B financing round is just a temporary holdover until they can raise more fuel. And they already burned through Microsoft, who apparently got what they wanted and are all “see ya”.
Bit of a clickbait. We can’t really say it without more info.
But it’s important to point out that the lab’s test methodology is far from ideal.
The team measured GPT-5’s power consumption by combining two key factors: how long the model took to respond to a given request, and the estimated average power draw of the hardware running it.
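The described method can be sketched in a few lines. Everything here is illustrative: the node wattage and batch size are guesses standing in for the lab’s assumptions, not published OpenAI figures:

```python
# Hedged sketch of the lab's method as described: per-query energy is
# inferred from response latency and an assumed hardware power draw.
def estimated_wh_per_query(latency_s: float,
                           node_power_w: float,
                           batch_size: int = 1) -> float:
    """Energy (Wh) = power (W) * time (h), split across a batch."""
    return node_power_w * (latency_s / 3600) / batch_size

# e.g. a 24 s response on a node assumed to draw 2700 W, batch of 1:
print(round(estimated_wh_per_query(24, 2700), 1))  # 18.0
```

Note how sensitive the result is to the batch-size assumption: if queries are actually served in batches, the per-query figure drops proportionally, which is exactly why latency alone is a weak proxy.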
What we do know is that the price went down. So this could be a strong indication the model is, in fact, more energy efficient. At least a stronger indicator than response time.
And an LLM that you could run locally on a flash drive will do most of what it can do.
I mean, no, not at all, but local LLMs are a less energy-reckless way to use AI.
Why not… for the ignorant such as myself?
AI models require a LOT of VRAM to run. Failing that, they need some serious CPU power, but it’ll be dog slow.
A consumer model that is only a small fraction of the capability of the latest ChatGPT model would require at least a $2,000+ graphics card, if not more than one.
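The VRAM requirement comes down to simple arithmetic. A rough sizing rule of thumb (weights only; KV cache and runtime overhead add a few GB on top):

```python
# Rough VRAM needed just to hold an LLM's weights.
def vram_gb(params_billion: float, bits_per_weight: int) -> float:
    """Weight memory in GB: parameters * bytes per parameter."""
    return params_billion * bits_per_weight / 8

print(vram_gb(7, 16))  # 14.0 -> a 7B model at fp16 won't fit 12 GB cards
print(vram_gb(7, 4))   #  3.5 -> 4-bit quantized, it fits mid-range GPUs
```

This is why quantization matters so much for local use: the same 7B model goes from needing a high-end card to running on most gaming GPUs, at some cost in quality.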
Like, I run a local LLM with an RTX 5070 Ti, and the best model I can run with that thing is good for ingesting some text to generate tags and such, but not a whole lot else.
How slow?
Loading up a website with Flash and GIFs on ’90s dialup slow… Or worse?
Like make a query and then go make yourself a sandwich while it spits out a word every other second slow.
There are very small models that can run on mid range graphics cards and all, but it’s not something you’d look at and say “Yeah this does most of what chatGPT does”
I have a model running on a GTX 1660 and I use it with Hoarder to parse articles and create a handful of tags for them, and it’s not… great at that.
Probably not a flash drive but you can get decent mileage out of 7b models that run on any old laptop for tasks like text generation, shortening or summarizing.
What do you use your usb drive llm for?
Can you give an example?
How the hell are they going to sustain the expense to power that? Setting aside the environmental catastrophe that this kind of “AI” entails, they’re just not very profitable.
Look at all the layoffs they’ve been able to implement with the mere threat that AI has taken their jobs. It’s very profitable, just not in a sustainable way. But sustainability isn’t the goal. Feudal state mindset in the populace is.
Not just “not profitable”, they don’t make any money at all. Loss only.
There’s such a huge gap between what I read about GPT-5 online, versus the overwhelmingly disappointing results I get from it for both coding and general questions.
I’m beginning to think we’re in the end stages of Dead Internet, where basically nothing you see online has any connection to reality.
People who fawn over generative AI haven’t tried to use it for more than 5 seconds. I wish it could run a TTRPG game for me, or even just remember the details of its original prompt, but it’s not even close.
The stock market is barely connected to reality and that is required to be updated every 3 months by every single company. Just imagine what the internet’s going to be like.
I don’t buy the research paper at all. Of course we have no idea what OpenAI does because they aren’t open at all, but DeepSeek’s published papers suggest it’s much more complex than 1 model per node… I think they recommended something like a 576 GPU cluster, with a scheme to split experts.
That, and going by the really small active parameter count of gpt-oss, I bet the model is sparse as heck.
There’s no way the effective batch size is 8, it has to be waaay higher than that.
Isn’t this the back plot of the game, Rain World? With the slug cats and the depressed robots stuck on a decaying world when the sapient, organic species all left?
Tech hasn’t improved that much in the last decade. All that’s happened is that more cores have been added. The single-thread speed of a CPU is stagnant.
My home PC consumes more power than my Pentium 3 consumed 25 years ago. All efficiency gains are lost to scaling for more processing power. All improvements in processing power are lost to shitty, bloated code.
We don’t have the tech for AI. We’re just scaling up to the electrical demand of a small country and pretending we have the tech for AI.
This is nonsense, an M1 runs many multiples faster and at much lower wattage.
Not even the ai tech itself is enough for ai
It’s the muscle car era: can’t make things more efficient to compete with Asia? MAKE IT BIGGER AND CONSUME MORE
that’s a lot. remember to add “-noai” to your google searches.
I’m just going to ignore the AI recommendations, let them burn money.
i don’t judge you for that. honestly it matters fuck all at this point
Or just use any other better search like Bing or duckduckgo. googol sucks and was never any good. Quit pushing ignorant garbage.
googol sucks and was never any good.
Ha! Kids these days.
duckduckgo yes, but … bing?
This is my weekly time to tell lemmings about Kagi, the search engine that does not shove LLM in your face (but still lets you use it when you explicitly want it) and that you pay for with your money, not your data.
This bubble needs to pop, the sooner the better.

The last 6 to 12 months of open models has pretty clearly shown you can get substantially better results with the same model size, or the same results with a smaller model size. E.g. Llama 3.1 405B being basically equal to Llama 3.3 70B, or R1-0528 being substantially better than R1. The little information available about GPT-5 suggests it uses mixture of experts and dynamic routing to different models, both of which can reduce computation cost dramatically. Additionally, simplifying the model catalogue from 9ish(?) to 3, when combined with their enormous traffic, will mean higher utilization of batch runs. Fuller batches run more efficiently on a per-query basis.
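The batching point is worth making concrete. A minimal sketch with purely illustrative numbers (the wattage and step time are assumptions, not OpenAI figures): a node drawing fixed power serves a whole batch at once, so per-query energy falls roughly as 1/batch_size until the GPU is compute-bound:

```python
# Why fuller batches cut per-query energy (illustrative numbers only).
NODE_POWER_W = 2700  # assumed node draw, not a published figure
STEP_TIME_S = 24     # assumed time to serve one batch

def wh_per_query(batch_size: int) -> float:
    """Fixed node energy for one step, divided across the batch."""
    return NODE_POWER_W * STEP_TIME_S / 3600 / batch_size

for b in (1, 8, 64):
    print(b, round(wh_per_query(b), 2))
# batch 1 -> 18.0 Wh, batch 8 -> 2.25 Wh, batch 64 -> 0.28 Wh
```

This is also why estimates built on latency alone can be off by an order of magnitude: without knowing the real batch size, the same observed response time is consistent with wildly different per-query energy figures.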
Basically they can’t know for sure.
40 Wh or 18 Wh, which is it?
That’s my old gaming PC running a game for 2min42sec-6minutes … Roughly.
they vibe calculated it.
Doesn’t matter, their audience isn’t interested in accuracy; they only want more things to feel outraged about.
They asked chatgpt 4 about chatgpt 5 power consumption.
The team measured GPT-5’s power consumption by combining two key factors: how long the model took to respond to a given request, and the estimated average power draw of the hardware [they believe is] running it.