Given all the recent chatter around data centers and nuclear, I thought it'd be helpful to break out some of the nuances of power markets, give an overview of recent deal structures, and discuss the read-through for broader nuclear development. Forecasted electricity demand growth, driven primarily by AI data centers and associated cloud infrastructure, is introducing new dynamics to US power markets. This has led to several high-profile corporate investments in nuclear energy deals. However, it's a mistake to assume this means we are jumping back into a US nuclear renaissance.
Before we get into the nuances of the deals, it's helpful to cover a few fundamentals of how the US power market is designed and operates.
There are two broad market structures across the US electricity system (which is split into three major interconnections and then administered by a handful of RTOs/ISOs plus dozens of smaller balancing authorities). Wholesale electricity is managed through energy markets (short-term, real-time markets for actual power dispatch) and capacity markets (forward markets paying generators to commit to future availability). Regions like PJM, NYISO and ISO-NE run both day-ahead and real-time energy markets, plus multi-year capacity auctions to secure future peak supply. In a capacity auction, utilities forecast peak load (often three years ahead) and procure firm commitments (generation or demand response) to meet that peak plus a reserve margin. These capacity payments help cover generators' fixed costs, ensuring reliability. On the other hand, ERCOT (Texas) and CAISO (California) operate essentially energy-only markets: there is no centralized capacity auction. In essence, this means generators in ERCOT/CAISO earn revenue mainly by selling energy at high prices during tight conditions, whereas generators in PJM/ISO-NE/etc. also earn capacity payments simply for being available.
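To make the two revenue models concrete, here's a minimal, purely illustrative sketch. All prices, margins, and hour counts below are made-up assumptions (not actual auction results or market data) for a hypothetical 1,000 MW plant:

```python
# Illustrative only: hypothetical plant, hypothetical prices.
MW = 1_000
HOURS_PER_YEAR = 8_760
capacity_factor = 0.90

# Capacity-market region (PJM-style): energy margins plus a capacity payment.
capacity_price_per_mw_day = 270   # $/MW-day, assumed auction clearing price
energy_margin_per_mwh = 5         # $/MWh average margin, assumed

capacity_revenue = MW * capacity_price_per_mw_day * 365
energy_revenue = MW * capacity_factor * HOURS_PER_YEAR * energy_margin_per_mwh
total = capacity_revenue + energy_revenue
print(f"Capacity-market plant: ${total:,.0f}/yr "
      f"({capacity_revenue / total:.0%} from capacity payments)")

# Energy-only region (ERCOT-style): no capacity payment; margins are
# concentrated in a small number of scarcity-priced hours.
scarcity_hours = 50               # assumed hours of tight conditions
scarcity_margin_per_mwh = 2_000   # $/MWh margin during scarcity, assumed
normal_margin_per_mwh = 3         # $/MWh margin the rest of the year, assumed

energy_only_revenue = (MW * scarcity_hours * scarcity_margin_per_mwh
                       + MW * capacity_factor * (HOURS_PER_YEAR - scarcity_hours)
                       * normal_margin_per_mwh)
print(f"Energy-only plant: ${energy_only_revenue:,.0f}/yr "
      f"(most of it earned in {scarcity_hours} scarcity hours)")
```

The specific numbers don't matter; the point is that a capacity-market generator gets paid for being available, while an energy-only generator lives and dies by a handful of scarcity hours.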
This distinction matters for understanding current market dynamics and why tech companies are pursuing direct nuclear arrangements. Nuclear offers high-capacity-factor, clean baseload power, but its capital costs are much higher than those of other plant types (more to come below). Without some type of capacity payment or PPA in place, building a large plant is an extremely risky proposition, especially as solar continually drives the marginal cost of producing each additional electron toward zero (solar has no fuel cost and its penetration on grids globally keeps rising).
At a system level, planners maintain reserve margins (excess capacity above expected peak load) to avoid outages. For example, a 15% reserve margin means 15% more capacity than peak demand (e.g., a system with 100 GW of peak demand maintains 115 GW of capacity). Maintaining reserve margins is costly, but system operators are willing to invest to prevent blackouts or rolling brownouts. The result is that US grids are typically built to serve peak demand, not average demand: the majority of generating capacity sits idle most hours. For instance, fast-ramping natural gas "peaker" plants often run only a small share (2–20%) of hours each year. As such, a handful of hours can account for a large fraction of annual costs.
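The arithmetic behind the reserve-margin and peaker points above is simple; a quick sketch using the numbers from the text:

```python
# Reserve-margin math from the example above.
peak_demand_gw = 100
reserve_margin = 0.15
capacity_target_gw = peak_demand_gw * (1 + reserve_margin)
print(f"Capacity target: {capacity_target_gw:.0f} GW for {peak_demand_gw} GW of peak demand")

# A peaker running 2-20% of the year only operates a few hundred to a couple
# thousand hours, yet its fixed costs must still be recovered somehow.
for cf in (0.02, 0.10, 0.20):
    print(f"{cf:.0%} capacity factor -> ~{cf * 8_760:,.0f} hours/year online")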
The need to meet peaks has long defined grid investment. Utilities and RTOs add capacity so that even on the hottest afternoon (or coldest winter night), the lights stay on. However, this means that most of the time the grid has ample generation that goes unused. Power prices are set by the marginal-cost producer (often natural gas, though this dynamic is changing in grids with high solar penetration), and real-time prices often collapse to very low levels in off-peak hours since generators bid near zero if they need to run at all. The fear is that reserve margins will fall in coming years if load growth materializes as forecasted, so we need to invest in more capacity. NERC shows forecasted reserve margins for a select group of RTOs through 2030 in its latest Long-Term Reliability Assessment.
One of the new load drivers is large-scale, 24/7 data centers (e.g., AI and cloud facilities). Unlike traditional loads, these data centers run constantly at high power, raising both the baseline and the peak. Their demand is believed to be inelastic and around-the-clock, exacerbating the existing peak problem. Today data centers consume on the order of 3–5% of U.S. electricity. That is projected to rise to 10–20% by 2030, depending on who you ask (see below). There is a wide range of opinions on the ultimate GW and TWh of demand that data centers will add (see below), which makes the "how much capacity/margin do we need" question that much harder to answer.
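For a rough sense of scale, here's a back-of-the-envelope conversion from "share of US electricity" into TWh and average GW of continuous demand. The ~4,000 TWh/year figure for total US electricity use is my own round-number assumption:

```python
# Back-of-the-envelope: share of US load -> TWh/year -> average GW.
US_TWH_PER_YEAR = 4_000   # rough assumption for total US electricity use
HOURS_PER_YEAR = 8_760

for share in (0.04, 0.10, 0.20):   # ~today's midpoint, and the 2030 projections
    twh = US_TWH_PER_YEAR * share
    avg_gw = twh * 1_000 / HOURS_PER_YEAR   # TWh -> GWh, divided by hours in a year
    print(f"{share:.0%} of US load ≈ {twh:,.0f} TWh/yr ≈ {avg_gw:.0f} GW of continuous demand")
```

And because data centers run flat out, each of those average GW also shows up at the system peak, which is the capacity problem described next.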
As such, these data centers add continuous power demand (which improves baseload utilization), but they also raise the absolute peak load the grid must meet. The grid may have ample spare capacity during lull periods, but the higher peak (from combined cooling, lighting, manufacturing, data center loads, etc.) requires firm new generation or demand response. In practice, this means that serving big data center demand ultimately drives new capacity investment – even though the energy may be spread evenly, the extra capacity needed appears during the few high-demand hours.
Note: this is currently being debated across the ecosystem, as many market participants including hyperscalers, utilities, regulated entities, and academics are exploring the rise of "data center flexibility," which could let data centers interconnect sooner and actually support grid resilience (e.g., Duke claims up to ~100 GW of new load could be integrated if data centers were flexible for <5% of operational hours per year; see also EPRI's DCFlex Initiative). The value of reliable but more expensive nuclear falls in scenarios where data centers can act more flexibly.
Tech companies have historically met growth with green power via renewable PPAs (synthetic transactions that tie the output/clean energy attributes of clean energy plants to offset the tech company's annual energy usage from its local grid), but renewable PPAs do not line up with hourly power use. A data center runs around the clock, whereas solar and wind farms are intermittent (generating only when the sun shines and the wind blows). As a result, large cloud providers facing 24/7 loads are pursuing more direct arrangements, including nuclear, to get guaranteed, carbon-free baseline power they can count on every hour, especially in the face of impending grid scarcity. In a market that needs baseload, reliable, firm power, nuclear stands out as an intriguing option for tech hyperscalers.
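A toy example of the hourly-matching problem: the load and solar shapes below are hypothetical, but they show how a solar-heavy PPA can offset 100%+ of a flat data center load on paper while physically covering it for only a fraction of the hours.

```python
# One stylized day: flat 100 MW data center load vs. a hypothetical solar profile.
load_mw = 100
solar_mw = [0]*6 + [50, 150, 250, 300, 320, 330,
                    330, 320, 300, 250, 150, 50] + [0]*6   # 24 hourly values

total_load = load_mw * 24
total_solar = sum(solar_mw)
hourly_matched = sum(min(load_mw, s) for s in solar_mw)

print(f"Volumetric (annual-style) matching: {total_solar / total_load:.0%} of load offset on paper")
print(f"Hourly (24/7) matching: {hourly_matched / total_load:.0%} of load actually covered hour by hour")
```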
Given the above dynamics, several tech firms have struck deals tying their demand to specific nuclear plants. Unlike typical "virtual" renewable PPAs (financial contracts tied to unspecified grid power, offset against actual use from a grid that includes non-clean electrons), these deals physically link the buyer to a specific nuclear facility. There are four broad deal types I've observed.
Deal Type | Example (Buyer/Generator) | Mechanism | Key Characteristics |
---|---|---|---|
Co-location for Existing Plant | 03/24: Amazon Web Services / Talen – Susquehanna | Build data center campus adjacent to nuclear plant; direct local interconnection | Uses existing plant output; bypasses some grid constraints; can be regulated as transmission service. Regulatory review is sensitive (FERC ultimately rejected an expansion of AWS’s Susquehanna offtake and the deal is in limbo awaiting direction from FERC/PJM). |
Plant Restart | 09/24: Microsoft / Constellation – Three Mile Island (Unit 1) | 20-year PPA funds restart of a mothballed reactor | Revives idle zero-carbon capacity (i.e., true additionality of electrons) but requires significant ~$1.6B capex. Buyer provides long-term revenue certainty, enabling relicensing and upgrades. |
New-build PPA for SMRs | 10/24: Google-Kairos Power, Amazon-X-Energy | Agreement to purchase output from plant once built | Supports first-of-a-kind small modular reactors with a commercial agreement contingent on the plants actually getting built. High risk: no current generation and plants not yet built. Commits buyer to take output of proposed SMRs (e.g., up to 500 MW by 2035). |
Life Extension for Existing Plant | 03/25: Meta (Facebook) / Constellation – Clinton | 20-year PPA underwrites continued ops and relicensing | Provides steady cashflow to keep the nuclear plant open (Clinton adds uprates for +30 MW). |
Each approach goes beyond a typical virtual PPA, but you can see the varying degrees of impact in terms of new electrons coming online. Outside of Microsoft-Three Mile Island and the new-build SMR deals (TBD if those ever get built), most of the other deals extend the life of plants already online today [and there are only a handful of restart options, given how far along most shuttered plants are in decommissioning]. Outside of Amazon-Susquehanna (the co-location deal), Microsoft and Meta are not even directly taking the physical electron output from the nuclear plants they are providing PPAs for [although it allows them to better offset usage in the same region]. I draw attention to these deal terms because they clearly illustrate that while hype is high for nuclear, these deals are not yet evidence that we are going to build new nuclear plants.
While corporate interest in nuclear output may be piqued, the underlying economics of building new plants remain difficult. The last U.S. nuclear plants broke ground decades ago, and recent projects have massively overrun budgets and schedules. Southern Company's Vogtle units 3–4 in Georgia came online in 2023 at roughly $34–36 billion total cost – about 2.5x the original $14 billion estimate, and seven years late. It's not just a US issue either. The UK's Hinkley Point C (two 1.6 GW EPR units) has been pushed back to ~2030 with costs now estimated at £40 billion, double the developer's initial estimate. Another example? Finland's 1.6 GW Olkiluoto 3 plant was initially estimated at €3 billion and expected to come online in 2009, but the final cost ballooned to around €11 billion and the plant only became operational in 2023. Prof. Bent Flyvbjerg's research on cost overruns in large-scale infrastructure projects finds nuclear at the top of the field.
Lazard's analysis allows us to compare costs across generation types. On an LCOE basis (not a perfect metric, but one that allows us to compare generation sources on a like-for-like basis), you can see nuclear isn't as competitive as other US utility-scale options (e.g., solar, wind, gas) and is one of the only generation sources whose costs have actually gone up. There is a reason no corporates or utilities are lining up to build new nuclear plants: no one is willing to do so without a backstop against cost overruns.
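For readers who haven't seen it spelled out, LCOE is just discounted lifetime costs divided by discounted lifetime generation. A minimal sketch follows; the inputs are my own rough assumptions for illustration (not Lazard's figures), but they show why a high-capex, long-lived nuclear unit struggles against cheap-capex solar even with a far higher capacity factor:

```python
# Minimal LCOE sketch: discounted lifetime costs / discounted lifetime MWh.
def lcoe(capex, annual_fixed_om, variable_cost_per_mwh,
         capacity_mw, capacity_factor, life_years, discount_rate):
    mwh_per_year = capacity_mw * 8_760 * capacity_factor
    disc_costs, disc_mwh = capex, 0.0
    for year in range(1, life_years + 1):
        df = (1 + discount_rate) ** -year
        disc_costs += (annual_fixed_om + variable_cost_per_mwh * mwh_per_year) * df
        disc_mwh += mwh_per_year * df
    return disc_costs / disc_mwh   # $/MWh

# Hypothetical new nuclear unit: very high capex, high capacity factor, long life.
nuclear = lcoe(capex=10_000e6, annual_fixed_om=150e6, variable_cost_per_mwh=12,
               capacity_mw=1_100, capacity_factor=0.92, life_years=40, discount_rate=0.08)

# Hypothetical utility-scale solar: low capex per MW but ~25% capacity factor.
solar = lcoe(capex=1_100e6, annual_fixed_om=15e6, variable_cost_per_mwh=0,
             capacity_mw=1_000, capacity_factor=0.25, life_years=30, discount_rate=0.08)

print(f"Nuclear-ish LCOE: ~${nuclear:.0f}/MWh")
print(f"Solar-ish LCOE:   ~${solar:.0f}/MWh")
```

Swap in your own capex and discount-rate assumptions; the gap moves around, but nuclear's upfront cost dominates the result.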
Advanced reactor advocates point to Small Modular Reactors (SMRs) as a way to fix nuclear's challenges: factory-built modules and standardized designs could theoretically cut construction costs and schedules. The U.S. NRC has certified one SMR design to date (NuScale), and utilities like [TVA recently announced an intention to submit SMR construction permit applications](https://www.utilitydive.com/news/tva-first-utility-small-modular-reactor-construction-permit/748734/). However, SMRs remain mostly conceptual in the US. No SMR is yet operational here (even construction permits are only now being processed). Globally, only a handful of SMRs have started up, and even those have faced delays and regulatory scrutiny. The graphic below, from TVA's 2025 IRP, shows overnight capital costs; the nuclear projects are considerably more expensive than traditional alternatives.
The current reality is that SMRs are still an unproven technology platform. They face the same regulatory and construction challenges (plus licensing uncertainties) as large reactors. SMRs may offer a path forward over the next decade-plus, but for now they remain intriguing as a long-term investment rather than a near-term solution (and are likely to cost the most and take the longest – see below chart). The first commercial SMRs (in the U.S. or elsewhere) will likely cost more and take longer than planners expect. Philosophically, they may never work economically, given that the modularized design gives up the economies of scale of large, centralized plants (hear Michael Cembalest break this down on a recent Odd Lots episode).
Recent corporate nuclear deals are targeted solutions to specific problems for certain hyperscalers. For the broader US grid, most new demand is likely to be met with renewables plus storage, demand response, conventional thermal generation (e.g., gas CCGTs), and bridge solutions (e.g., reciprocating engines, diesel, simple-cycle turbines) until utilities can interconnect new generation. Nuclear's comparative advantages, such as high capacity factor, low carbon footprint, and reliability, are desirable to corporate hyperscalers, but we're still missing someone willing to fund new plant development and/or developers who can prove they deliver plants on reasonable timelines and on budget. Additionally, the value of high-capacity-factor baseload power may erode over time as grids roll out more storage and if data center workloads become more flexible.
Incremental update to the above post given the news today that Talen Energy (TLN) expanded its PPA with AMZN. TLN announced a new 1,920 MW PPA with Amazon at Susquehanna, expanding from the 960 MW of total capacity in the original PPA, which included an existing 300 MW BTM ("behind-the-meter") arrangement. Notably, FERC had rejected Talen-AMZN's earlier request to amend the BTM ISA up to 480 MW. AWS aims for full delivery of capacity under the new PPA by 2032, with the contract running through 2042. AMZN will keep 300 MW BTM in the interim and move to FTM ("front-of-the-meter") after a grid connection is established, to reduce regulatory hurdles and transmission costs. Market estimates suggest the PPA was struck in the ~$80s/MWh, similar to the Meta-Clinton deal and at a 20–30% premium to prevailing market prices and power market forwards. There are no immediate plans to build any new generation, but the PR calls out "exploring SMR builds" (which is why SMR names are up today) and "expanding output through uprates" while stopping short of any firm commitment to bring on new power.
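Quick arithmetic on the reported pricing. Using $85/MWh as an assumed midpoint of "the $80s" (my assumption, not a disclosed figure), the implied market/forward price backs out as follows:

```python
# Implied forward price if the PPA cleared around the mid-$80s/MWh at a 20-30% premium.
ppa_price = 85   # $/MWh, assumed midpoint of "the $80s"
for premium in (0.20, 0.30):
    print(f"{premium:.0%} premium -> implied forwards ≈ ${ppa_price / (1 + premium):.0f}/MWh")
```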
Until now, Amazon's data-center campus beside the Susquehanna nuclear station could have drawn up to 300 MW through a private line that never touched PJM's bulk transmission network. This configuration theoretically shields the co-located load (i.e., the data center next to the nuclear plant) from PJM tariffs and interconnection studies on the broader system, but it drew repeated challenges from the Federal Energy Regulatory Commission (FERC), which questioned whether such "private wires" shift system costs unfairly onto consumers and whether they adequately compensate the grid for the backup services it provides to co-located loads. Rather than sit in regulatory limbo and face a still-unresolved FERC docket on the proposed 480 MW expansion, the parties have chosen to convert the deal to an FTM arrangement, where all of Susquehanna's nuclear output goes onto the PJM grid and AMZN procures the output to offset its regional power use.
Key Takeaways Through Q&A: