There is an army of analysts out there, a quasi-industry, that attacks data streams like piranhas, ripping everything apart and, unlike piranhas, analyzing the living daylights out of them. There are people who spend days on end analyzing, for example, not just statistics on weekly petroleum consumption but also the magnitude and direction of the error bars, and comparing those to the error bars on monthly data. It’s funny to see their social media apoplexy when some arcane bit of trivia-to-us-but-not-to-them exhibits a value that offends their expectations, like when PADD IV middle distillate stocks show an unexpected or potentially erroneous weekly change.
This vast talent pool is amazing. Participants stress-test and high-grade data to an unbelievable degree, taking vast streams of public information and testing, poking, cross-referencing it so that markets and people can find trust and usefulness in complex data flows. Many of them post material portions of their analysis for free. Bless their little spreadsheet hearts.
On the other hand, an oddity currently exists in that galaxy, a hole that should not be. There is a potential energy story to knock the socks off all others, to kick ‘energy transition’ rambling to the gutter – the potential for AI’s power requirements to alter the energy landscape in huge and unforeseen ways. (Regular readers may have noticed my growing obsession with the topic; be forewarned that the obsession will not subside any time soon for reasons below.)
The link to the obsessive micro-analysts is kind of an anti-link, an absence – there aren’t any free-range analysts parsing AI power consumption data, because no one knows where to start.
The problem is simply unfolding too quickly, and on too global a basis, to begin to quantify. But thanks to our ability to see everything anywhere, we can at least catalogue the latest thoughts.
Before getting into the most recent thinking on AI power consumption, a reminder of how the next AI act appears to be unfolding.
As one example, Foxconn (world’s largest contract electronics manufacturer) and Nvidia (leading company in the AI chip market) recently announced a joint project to build ‘AI factories’: “Foxconn will integrate NVIDIA technology to develop a new class of data centers powering a wide range of applications — including digitalization of manufacturing and inspection workflows, development of AI-powered electric vehicle and robotics platforms, and a growing number of language-based generative AI services…enabling deployment of autonomous mobile robots that can travel several miles a day and industrial robots for assembling components, applying coatings, packaging and performing quality inspections. An AI factory with these NVIDIA platforms can give Foxconn the ability to accomplish AI training and inference, enhance factory workflows and run simulations…Simulating the entire robotics and automation pipeline from end to end…Foxconn is expected to build a large number of systems…for its global customer base, which is looking to create and operate their own AI factories.”
Nvidia’s CEO in a separate article described their vision, starting with electric vehicles: ‘ “AI factories” could continuously receive and process data from autonomous electric vehicles to make them smarter. “This car would of course go through life experience and collect more data. The data would go to the AI factory. The AI factory would improve the software and update the entire AI fleet,” said the Taiwan-born Huang. “In the future, every company, every industry, will have AI factories.” ‘
Business Insider had a good article on the Foxconn/Nvidia news including this summer-vintage quote from Blackstone’s Jonathan Gray: “There’s a well-publicized arms race happening in AI, and the major tech companies are expected to invest $1 trillion over the next five years in this area, mostly to data centers.”
So. Taken together and as part of the general tone of industrial chatter, where conference call transcripts of many industries are drenched in the mention of AI, it seems that AI development, and corresponding power requirements, are going to continue their near vertical ascent.
Are any of the actors worried about energy consumption?
The answer seems to be that collectively, the power supply challenge is a major problem, but at an individual/corporate level, the risk/problem is one of not participating in the AI boom – the risk of getting left behind. Thus, we have companies falling over themselves to not just invest in AI, but to invest in crushing AI, in AI that is bigger, better and faster than competitors.
The website Semiconductor Engineering (SE) posted a great article with some power consumption warning flags, and the only evident problem with the article is that it is a year old, and therefore does not even contemplate the crazy AI rush we’ve seen this year. Still, the view forward was prescient, and startling.
From the SE article, a clear and intelligent synopsis: “Machine learning is on track to consume all the energy being supplied, a model that is costly, inefficient, and unsustainable. To a large extent, this is because the field is new, exciting, and rapidly growing. It is being designed to break new ground in terms of accuracy or capability. Today, that means bigger models and larger training sets, which require exponential increases in processing capability and the consumption of vast amounts of power in data centers for both training and inference. In addition, smart devices are beginning to show up everywhere.”
The article quotes Tim Vehling, senior vice president for product and business development at Mythic: “When you look at what the hyperscaler companies are trying to do, they’re trying to get better and more accurate voice recognition, speech recognition, recommendation engines. It’s a monetary thing. The higher accuracy they can get, the more clients they can service, and they can generate more profitability… I don’t know if there’s any real motivation to optimize power in those applications.”
AI consumes power in a number of ways. The first is through training, the ongoing activity of making sure the AI tool incorporates all up-to-date information. Per the SE article again, some AI models of two years ago used 27 kilowatt hours to train; comparable models today use 500,000 kilowatt hours – an increase of roughly 18,000-fold in a scant two years.
But training is only half the equation, or maybe even a smaller fragment than that. After training comes ‘inference’. Once an AI model is trained, it is then rolled out into countless devices, cars, etc., where the AI tool then takes in real world data and shapes reactions based on what is happening. This is the inference stage, and it might well dwarf the power suck of AI training, depending on where it is rolled out and the technology that comes in the years ahead. There are billions of devices in the “Internet of Things” (IoT) that could be part of this equation. For an individual company, incremental power consumption is not the concern, not when the prize is some shiny new gewgaw that dazzles the masses, like a fridge that knows way more than it should about you, and keeps learning every day.
The SE article concludes with a startling demand/expectation, one that is surely unparalleled in the modern technological development age [emphasis added]: “Models are getting larger in an attempt to gain more accuracy, but that trend must stop because the amount of power that it is consuming is going up disproportionately.”
How’s that for bizarre – a technology website dedicated to semiconductor engineering declaring that progress must stop. This is the same industry that has bragged for decades about Moore’s Law – the observation that transistor counts, and with them the capability of computers, double roughly every two years.
If this isn’t a living example of Frankenstein’s monster, I can’t imagine what is.
The risk (or maybe better called the perceived risk, but it doesn’t really matter at this point) of being left behind in the AI race is going to dwarf any risk of emissions generation or power consumption. As long as profitability rises faster than power prices, it will be game on. Worried about emissions? Hey China, can we build over there, if you can spare a little coal power? How about you, Bangladesh? India? All of the above?
Given that it is impossible to quantify how much AI infrastructure will be built over the next decade, or how efficient it will be, it is equally impossible to gauge total energy consumption. We do know, as clearly stated above, that ‘the bigger the better’ prevails, that most industrial players (and probably, in some instances, governmental ones as well) now see AI as some sort of imperative, and that being competitive in the AI space will mean building in all the capability possible.
One company showing some forward thinking on the topic is Microsoft, which recently made headlines by looking to hire nuclear expertise. Of course, the media lazily misinterpreted the situation: Computerworld ran a story called “Microsoft’s data centers are going nuclear” which, at a glance – which is how we digest the info flow, mostly – implies that all is well, that the AI/data center power suck will be taken care of via the wonder of nuclear power.
But the article quickly distances itself from the headline by making clear that all Microsoft has done is seek to hire a nuclear program manager to develop a roadmap to integrate small modular reactor (SMR) technology.
Then, as with every other aspect of the ‘energy transition’, it is necessary to consult realists and experts who are actually and realistically charting how these things are going to go. A nuclear expert named David Turver did a timeline analysis on Substack, cataloguing how, almost predictably, China and the rest of Asia have nuclear plans that completely dwarf those of North America and Europe (where we are still in an all-out war with the likes of Greenpeace and other ‘environmentalists’ – who hold considerable sway over our political rulers at present – over whether nuclear should be allowed at all). Even as SMR technology becomes accepted as viable, the UK doesn’t expect to have any in operation until at least the mid-2030s, and no one in the west has any hope of functioning SMRs before 2030.
Recall from above that industry is expected to invest over $1 trillion in AI data centers within five years, that the Internet of Things is exploding, and that YouTube/TikTok/Instagram will have catalogued another billion hours of guys fixing washing machines and kids de-nutting themselves on skateboards and influencers influencing whatever it is that is so beguiling to billions.
Circling back to the squadrons of analysts mentioned at the outset… consider those dedicated individuals who pick and parse and evaluate an endless stream of picayune data in order to bring clarity to commodity, power, or any other energy market. Their efforts add transparency, and are of immense value. Yet their desks are about to be engulfed as though they sat at the foot of Mount Vesuvius long ago; the minutiae they study are but a grape next to a mountain once AI gets to where it looks like it’s going. The incentives are spectacularly misaligned in a world that thinks it is going to materially dent or change the shape of energy in the near future.
Terry Etam is a columnist with the BOE Report, a leading energy industry newsletter based in Calgary. He is the author of The End of Fossil Fuel Insanity. You can watch his Policy on the Frontier session from May 5, 2022 here.