
No Power, No Party: Why AI Supremacy Depends on Energy, Not Chips

By Alfonso Velazquez - Kyndryl
Head of Data and AI

Thu, 01/15/2026 - 08:00

For the last decade, artificial intelligence has been framed as a semiconductor race. Governments subsidize chip manufacturing, companies fight for access to GPUs, and national strategies are built around silicon supply chains. Chips matter, but they are no longer the decisive factor. The real constraint shaping the future of AI is energy.

As AI systems scale, the industry is discovering a hard truth: compute can be procured relatively quickly, but power must be planned, built, permitted, and sustained. And that process is slow, regulated, and unforgiving. In the next phase of AI competition, supremacy will not be determined in chip factories, but in energy networks.

At the same time, as humanity races toward artificial general intelligence by 2030, a new bottleneck is emerging: Where will the energy come from? Power grids are straining, data centers are becoming energy megaprojects, and nuclear is quietly moving from taboo to necessity.

An Energy-Intensive Industry

AI data centers are no longer just about IT infrastructure. They are among the most energy-hungry assets in the global economy.

According to the International Energy Agency, data centers consumed approximately 460 terawatt-hours (TWh) of electricity globally in 2022, about 2% of total global demand. With the rapid expansion of generative AI, that figure is projected to double by 2026, potentially exceeding 1,000 TWh, comparable to the annual electricity consumption of Japan.

A single hyperscale AI data center can require 100 to 300 megawatts (MW) of continuous power, roughly equivalent to powering a midsized city. In several regions across the United States and Europe, new AI data center projects are being delayed or rejected not because of capital constraints or chip shortages, but because local grids cannot absorb additional load.
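The scale of those numbers is easy to check with back-of-the-envelope arithmetic. The sketch below (illustrative only; it assumes constant draw at the figures cited above) converts a site's continuous load into annual energy:

```python
# Back-of-the-envelope sketch: annual energy of a hyperscale AI campus
# drawing constant power, using the 100-300 MW range cited in the text.
# Assumes 100% utilization; real sites vary.

HOURS_PER_YEAR = 8760  # 365 days * 24 hours

def annual_energy_twh(load_mw: float, utilization: float = 1.0) -> float:
    """Annual energy in TWh for a site drawing `load_mw` at `utilization`."""
    mwh = load_mw * HOURS_PER_YEAR * utilization
    return mwh / 1_000_000  # 1 TWh = 1,000,000 MWh

if __name__ == "__main__":
    for load in (100, 300):
        print(f"{load} MW continuous ≈ {annual_energy_twh(load):.2f} TWh/year")
```

At the top of the range, a single 300 MW campus works out to roughly 2.6 TWh per year, which puts the IEA's global figures in perspective.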

AI growth is now colliding with physical energy limits.

Structural Gap

Semiconductor manufacturing can scale in years. Power infrastructure often takes a decade.

New power plants, transmission lines, and substations face long permitting cycles, environmental scrutiny, and political resistance. Grid modernization is capital-intensive and geographically constrained. This creates a structural mismatch: AI demand grows exponentially, while energy supply grows linearly.

As a result, electricity, not compute, is becoming the binding constraint on AI expansion.

Cloud providers and AI leaders increasingly spend as much time securing long-term power purchase agreements as they do optimizing model architectures. Energy availability has become a gating factor for AI investment decisions.

AI Competitiveness Is Now Energy Competitiveness

This shift has major economic and geopolitical implications.

Countries and regions with modern, resilient power grids, excess generation capacity, reliable baseload energy, and predictable regulatory frameworks are gaining a decisive advantage in the AI race. Conversely, regions with aging grids or fragmented energy policy risk falling behind, even if they have access to advanced chips.

AI policy has effectively become energy policy.

Toward Nuclear Energy

As AI systems move toward continuous, large-scale deployment and ultimately toward general-purpose intelligence, incremental energy solutions are no longer sufficient. Renewables are essential, but they are intermittent. Natural gas provides flexibility, but not long-term decarbonization. Grid-scale storage is improving, but remains limited in duration and cost. None of these alone can reliably power always-on, high-density AI workloads.

This is why nuclear energy is rapidly re-entering the AI conversation.

AI data centers require 24/7, high-capacity, predictable baseload power. Training frontier models and running inference at global scale cannot pause when the sun sets or the wind slows. At AI scale, energy intermittency is not an inconvenience; it is an operational risk.

Nuclear power uniquely matches this profile.

From Taboo to Necessity

For years, nuclear energy was politically and socially sidelined. AI is changing that calculus. According to the US Department of Energy, small modular reactors (SMRs) can deliver 50 to 300 MW per unit, closely aligning with the power needs of hyperscale AI campuses. Compared to traditional nuclear plants, SMRs offer:

  • Faster deployment timelines
  • Lower upfront capital per unit
  • Enhanced safety designs
  • Potential for on-site or near-site installation

Major technology companies are already exploring nuclear-backed strategies. Hyperscalers have publicly acknowledged that advanced AI at scale will require firm, carbon-free power sources beyond wind and solar. Several data center operators are actively evaluating nuclear options to secure long-term energy certainty.

This is not an ideological shift. It is pragmatic.

AGI Requires Energy Certainty, Not Optimism

Progress toward artificial general intelligence implies:

  • Orders-of-magnitude increases in compute
  • Persistent, global inference workloads
  • Mission-critical reliability

At that level, energy uncertainty becomes an existential constraint.

Nuclear plants operate at 90%+ capacity factors, compared to roughly 25–35% for solar and 30–45% for wind. No amount of software efficiency can compensate for missing electrons. AGI cannot run on best-effort power.
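The capacity-factor gap translates directly into how much generation must be built to serve a constant load. The sketch below uses the figures cited above (90% nuclear, midpoints of the solar and wind ranges) and deliberately ignores storage, curtailment, and transmission losses:

```python
# Sketch: nameplate capacity needed, on average-energy terms, to supply a
# constant 300 MW AI load. Capacity factors are those cited in the text;
# storage, curtailment, and transmission losses are ignored.

def required_nameplate_mw(load_mw: float, capacity_factor: float) -> float:
    """Nameplate MW whose average output matches `load_mw` around the clock."""
    return load_mw / capacity_factor

if __name__ == "__main__":
    LOAD_MW = 300
    for source, cf in [("nuclear", 0.90), ("solar", 0.30), ("wind", 0.375)]:
        mw = required_nameplate_mw(LOAD_MW, cf)
        print(f"{source}: ~{mw:.0f} MW nameplate")
```

On these assumptions, matching a 300 MW load takes roughly 333 MW of nuclear but on the order of 800 to 1,000 MW of wind or solar nameplate, before accounting for the storage needed to bridge intermittency.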

As AI becomes central to economic productivity, national security, and scientific progress, energy realism is replacing energy idealism. Governments are extending the life of existing nuclear plants, accelerating approvals, and funding next-generation reactor designs, explicitly citing data centers and AI-driven demand.

AI is not just consuming energy. It is reshaping energy strategy.

The Sustainability Paradox of AI

AI’s energy appetite raises legitimate concerns about sustainability. Training a single large language model can consume millions of kilowatt-hours, generating significant emissions if powered by fossil fuels.

Yet, AI is also becoming indispensable for optimizing energy systems:

  • Forecasting renewable generation
  • Balancing grids in real time
  • Reducing transmission losses
  • Improving efficiency across industries

The paradox is clear: AI stresses energy systems while simultaneously becoming one of the most powerful tools to optimize them. The outcome depends on design choices: where data centers are built, how energy is sourced, and whether efficiency is prioritized alongside performance.

From Compute Efficiency to Energy Efficiency

The next frontier in AI is not just faster models, but energy-aware intelligence. Leading organizations are already shifting focus toward:

  • More output per kilowatt-hour
  • Energy-efficient model architectures
  • Geographic workload distribution based on power availability
  • Co-location of data centers with generation assets

In this paradigm, energy becomes a first-class design constraint, not an afterthought.

Redefining AI Supremacy

AI supremacy is often measured by model size, benchmark scores, or access to advanced chips. These metrics are increasingly incomplete.

True AI leadership will belong to those who can:

  • Sustain AI operations at scale
  • Absorb exponential demand without destabilizing infrastructure
  • Balance performance, cost, resilience, and sustainability

That capability is rooted in energy networks, not chip factories. The AI race has entered a new phase. Silicon still matters, but power matters more.

Organizations and nations that continue to treat energy as a secondary concern will encounter hard limits sooner than expected. Those that recognize energy as the foundational layer of AI will shape the next decade of technological and economic leadership. In the age of artificial intelligence, keeping the lights on may be the most strategic decision of all.

No power. No party.
