Published: Apr. 21, 2026

Key takeaways

AI infrastructure projects are defined by high risk, from volatile GPU pricing to shifting energy costs, making traditional deterministic estimates unreliable. Monte Carlo simulation with @RISK enables decision-makers to model a range of possible outcomes and evaluate risk-adjusted returns across scenarios. By quantifying uncertainty, organizations can make more informed, strategic investment decisions in a rapidly evolving AI environment.

The AI boom is fueling unprecedented investment in data center infrastructure. A 2025 report from McKinsey & Company estimates that global capital expenditure on data center infrastructure will top $6.7 trillion through 2030.

But most of that investment isn’t going into buildings or labor—it’s going into compute.

Data centers require thousands of graphics processing units (GPUs) to power large language models (LLMs) and other forms of high-performance computing. The McKinsey report estimates that nearly 60% of AI capex will be spent on GPUs, servers, and other computing components.

All infrastructure projects come with risks, but AI projects are uniquely uncertain. The costs of GPUs and other components fluctuate, and the GPUs themselves rapidly become obsolete. In this environment, traditional methods of cost estimation simply cannot account for the many variables involved.

In a recent webinar, Manuel Carmona, consultant and trainer at EdyTraining Ltd, explained how probabilistic models created with Lumivero’s @RISK can support better decision-making for hyperscalers and other investors in AI projects.

Watch the webinar or continue reading for the highlights.

 

Why deterministic cost estimations miss the mark for AI projects

Traditional cost estimation methods rely on single-point answers—a fixed number meant to represent a project’s total budget. But in reality, that kind of estimate creates a false sense of precision.

When the question is “how much will this data center cost?”, a single number can’t capture the full picture. It masks the inherent volatility and range of possible outcomes that define complex AI projects.

Side-by-side comparison showing the limitations of a deterministic model vs. a Monte Carlo simulation-based model.

Unique variables in an AI data center project can include:

  • GPU prices: GPU costs rose 200% in 2023 alone.
  • Frequent upgrades require additional CapEx: Data center GPUs can become obsolete within two to three years, leading to “lumpy” investment cycles.
  • Revenue uncertainty: GPU-hour spot prices range from $1.60–$2.40 per hour, depending on supply and demand.
  • Energy costs: For data centers that rely on natural gas or other non-renewable energy sources, sudden rises in fuel costs may impact value.

Add in the typical variables expected with a major infrastructure project, such as changes in labor or material costs and delays to construction, and an AI data center project becomes a shifting landscape of half a dozen or more uncertain variables interacting. Deterministic sensitivity analysis isn’t the appropriate solution for such a volatile project.

In this context, the more meaningful question isn’t “how much will this cost?” but “what are the chances that this project will make money?” That is a question probabilistic models—powered by Monte Carlo simulation—can help answer.
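That shift in question can be made concrete with a minimal sketch. The snippet below uses plain Python/NumPy with invented figures rather than the webinar's @RISK model: it replaces one fixed cost estimate with a distribution and asks what fraction of simulated outcomes makes money.

```python
import numpy as np

rng = np.random.default_rng(123)

# Replace a single-point cost estimate with a distribution ($M figures
# are invented) and ask how often the project clears break-even.
cost = rng.triangular(90, 100, 140, size=100_000)  # min, most likely, max
revenue = 110                                      # held fixed for the toy

p_profit = (revenue - cost > 0).mean()
# The deterministic view (110 - 100 = +$10M) just says "profitable";
# the simulation attaches a probability to that claim.
```

Because the cost distribution has a long upside tail, the probability of profit here comes out well below certainty even though the single-point estimate looks comfortably positive.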

 

Using scenario analysis to model hyperscale facility costs

Consider a company trying to determine whether to build a data center with a compute capacity of 50, 60, or 70 megawatts (MW). Probabilistic scenario analysis—using Monte Carlo simulation in @RISK—helps decision-makers compare these options based on risk-adjusted returns, not just expected outcomes.

Selecting statistical distributions for variables

For each capacity scenario, the following six factors act as key probabilistic drivers:

  • Initial GPU purchase costs
  • Utilization rate - the percentage of computing power in use at any one time
  • Energy prices
  • Construction costs for the facility (excluding IT hardware), with a 15% government infrastructure subsidy applied to offset capital outlays
  • Power usage effectiveness (PUE) - a measure of data center efficiency, calculated as total facility power divided by IT equipment power. Lower values indicate more efficient operations; a PUE of 1.3 means that for every watt consumed by servers, an additional 0.3 watts is consumed by cooling and support systems.
  • Refresh cycles - the cost of replacing obsolete GPUs
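The PUE arithmetic in the list above is simple enough to sanity-check directly. A minimal Python snippet (names and figures are illustrative, not part of the webinar model):

```python
def total_facility_power(it_power_kw: float, pue: float) -> float:
    """Total facility power = IT equipment power x PUE."""
    return it_power_kw * pue

# At a PUE of 1.3, a 1,000 kW IT load draws 1,300 kW in total,
# so 300 kW goes to cooling and support systems.
overhead_kw = total_facility_power(1_000, 1.3) - 1_000
```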

Using @RISK, each of these input parameters can be modeled with an appropriate statistical distribution:

Distribution models for data center construction variables.

How @RISK powers this analysis

  • Custom distributions: PERT, Triangular, and Discrete distributions model different types of uncertainty
  • Correlation modeling: Link variables that move together (e.g., GPU price and utilization)
  • Tornado charts: Instantly identify which variables drive 80% of NPV variance
  • Scenario comparison: Run 10,000 iterations across three capacity options in minutes
  • Excel integration: No specialized coding—build probabilistic models in a familiar spreadsheet environment

Note that @RISK allows users to choose from dozens of different distributions or define their own, including unbounded distributions, and to define correlations between variables. AI can also be used alongside Monte Carlo simulation to analyze data sets and recommend a best-fitting distribution—but double-check any recommendations from LLMs before you adopt them.

Triangular distributions are used for construction costs and GPU pricing to reflect three-point expert estimates (minimum, most likely, maximum) in a transparent, easy-to-communicate format. While more sophisticated distributions (lognormal for cost overruns, empirical for commodity pricing) offer greater realism, Triangular balances accuracy with stakeholder comprehension for this illustrative model.

For the PUE efficiency metric, a PERT (Beta) distribution captures the bounded, mildly asymmetric nature of operational efficiency—values cluster near the industry average (1.3) with limited variance in either direction.

The GPU refresh year uses a discrete distribution because the decision is categorical — replacement occurs in Year 4, 5, or 6 based on technological obsolescence, not a continuous variable. This approach is consistent with AACE RP 44R-08 guidance on modeling event-driven schedule risk.
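The webinar builds these three distribution shapes with @RISK's functions inside Excel; the same shapes can be sketched in plain Python/NumPy. All figures below are invented for illustration, and PERT is implemented here as the conventional rescaled Beta.

```python
import numpy as np

rng = np.random.default_rng(42)
N = 10_000  # matches the iteration count quoted above

# Triangular for three-point expert estimates (figures invented).
construction_cost = rng.triangular(400, 500, 700, size=N)  # $M

def pert(rng, a, m, b, size, lam=4.0):
    """PERT as a rescaled Beta; lam=4 is the conventional shape weight."""
    alpha = 1 + lam * (m - a) / (b - a)
    beta = 1 + lam * (b - m) / (b - a)
    return a + rng.beta(alpha, beta, size) * (b - a)

pue = pert(rng, 1.15, 1.30, 1.60, N)  # clusters near the 1.3 average

# Discrete for the categorical refresh decision: Year 4, 5, or 6
# (probabilities invented).
refresh_year = rng.choice([4, 5, 6], size=N, p=[0.3, 0.5, 0.2])
```

The point of the sketch is the mapping, not the numbers: bounded three-point estimates get Triangular, bounded mildly asymmetric metrics get PERT, and categorical events get a Discrete distribution.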

Want to use AI with Monte Carlo simulation?

Learn how to use AI and LLMs with Monte Carlo simulation to broaden risk identification, propose distributions, and tighten assumptions.

Read more

Setting up the model within Excel

@RISK runs as an add-on within Microsoft Excel, allowing teams to build and run risk models within a familiar environment. When developing a model, it’s important to take a modular approach—reducing the likelihood of different users making changes that impact the model’s accuracy or functionality.

This example model was arranged on six separate tabs:

  • Read me – An explanation of how the model is built, including what users can and cannot modify
  • Assumptions – Descriptions of each variable within the model and rationales for the values assigned
  • Cash Flow – Revenues from running the data center
  • CapEx – Calculations of the initial investments for each data center scenario
  • Results – Probabilistic NPVs for each data center scenario
  • Dashboard – Different reports and visualizations that draw on the simulation (e.g. sensitivity analysis)

After setting up the model, the next step is to run the simulations.

Simulating multiple cost scenarios

@RISK can run thousands of iterations for each simulation scenario. The initial output is an S-curve which shows the likelihood of each project having a positive NPV:

Simulation output showing the 50 MW option in red, 60 MW in blue, 70 MW in green

Under current market distribution assumptions, all three scenarios show a majority probability of negative NPV: 55% for 50 MW, 52% for 60 MW, and 47% for 70 MW. While the 70 MW project has the highest expected NPV and the lowest probability of loss, it also carries the widest downside exposure—with potential losses exceeding $1.4 billion in pessimistic scenarios compared to $874 million for 50 MW.
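An S-curve is simply the empirical cumulative distribution of the simulated NPVs. The NumPy sketch below uses a stand-in sample, not the webinar's outputs, to show how both the curve and the probability of loss fall out of the same simulation:

```python
import numpy as np

rng = np.random.default_rng(0)

# Stand-in for one scenario's simulated NPVs ($M); in the real model
# these come from the Cash Flow and CapEx tabs (values invented here).
npv = rng.normal(loc=-20, scale=400, size=10_000)

p_loss = (npv < 0).mean()  # probability of negative NPV

# S-curve: sorted NPVs paired with cumulative probabilities.
xs = np.sort(npv)
cum_prob = np.arange(1, xs.size + 1) / xs.size
# Plotting xs against cum_prob reproduces the S-curve shape @RISK draws.
```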

@RISK makes it possible to drill down into models and simulations to better understand interactions between variables and what mitigation strategies are available. Users can quickly generate different representations of their data and assemble it into a dashboard to present to decision-makers:

Information dashboard showing various representations of the different projects, including a tornado chart sensitivity analysis for the 50 MW scenario

Key takeaways from this modeling scenario include:

  • All three scenarios have a majority probability of negative NPV
  • The 70 MW scenario maximizes expected NPV but carries the widest downside exposure
  • The GPU refresh event and GPU pricing assumptions dominate all other risk drivers
  • Long-term power purchase agreements and GPU-hour offtake contracts are the most effective risk mitigants
  • The tornado analysis reveals that GPU refresh timing alone drives a $924 million NPV swing between Year 4 and Year 6 replacement—more than construction cost, PUE, or utilization combined. This underscores why contract negotiations around hardware lifecycle commitments are mission-critical for AI infrastructure financiers.
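A tornado chart ranks inputs by their influence on the output. One common way to compute those bar lengths is the rank (Spearman) correlation between each input's samples and the simulated NPV; the toy below uses invented distributions and a deliberately simplified NPV formula to show the mechanics.

```python
import numpy as np

rng = np.random.default_rng(1)
N = 10_000

# Invented input distributions, for illustration only.
gpu_price = rng.triangular(25_000, 30_000, 45_000, N)  # $ per GPU
utilization = rng.uniform(0.5, 0.9, N)                 # fraction in use
energy_price = rng.uniform(60, 120, N)                 # $/MWh

# Toy NPV ($M): revenue scales with utilization, costs with prices.
npv = 2_000 * utilization - 0.08 * gpu_price - 5 * energy_price

def rank_corr(x, y):
    """Spearman rank correlation via Pearson correlation of ranks."""
    rx, ry = x.argsort().argsort(), y.argsort().argsort()
    return np.corrcoef(rx, ry)[0, 1]

drivers = {"GPU price": rank_corr(gpu_price, npv),
           "Utilization": rank_corr(utilization, npv),
           "Energy price": rank_corr(energy_price, npv)}

# Sorting by |correlation|, descending, gives the tornado-bar order.
ranked = sorted(drivers, key=lambda k: abs(drivers[k]), reverse=True)
```

In this toy, GPU price tops the ranking with a negative correlation (higher prices, lower NPV), mirroring the webinar's finding that GPU pricing dominates the other drivers.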

Without @RISK's probabilistic analysis, a traditional DCF model using most-likely values would show all three scenarios as NPV-positive (+$94M, +$138M, +$183M respectively), falsely suggesting a low-risk investment. The Monte Carlo simulation reveals that these base-case estimates represent only the 40th–50th percentile of outcomes—meaning decision-makers would unknowingly be betting on better-than-median performance. This insight fundamentally changes the go/no-go decision and highlights the need for contractual risk mitigation (long-term offtake agreements) before committing capital.
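The gap between a most-likely-value DCF and the simulated median is easy to reproduce in a one-variable toy. The numbers below are illustrative, not the webinar's: when cost risk is right-skewed (a long overrun tail), the base case lands well above the median outcome.

```python
import numpy as np

rng = np.random.default_rng(7)

# Right-skewed cost uncertainty, a common pattern for large builds
# ($M figures invented).
cost = rng.triangular(800, 900, 1_400, size=100_000)
revenue = 1_000  # held fixed to isolate the effect

base_case_npv = revenue - 900  # deterministic, "most likely" cost
npv = revenue - cost           # simulated distribution

# Share of simulated outcomes that fall below the base case.
pct_below_base = (npv < base_case_npv).mean() * 100
# Well over half the outcomes are worse than the single-point estimate.
```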

Deterministic models miss this nuance entirely—obscuring risk instead of revealing it. Single-point estimates are no longer an adequate basis for choosing a course of action within the volatile, rapidly shifting AI industry.

Transitioning to a probabilistic model offers insight into the range of different outcomes available and the ways in which variables affect scenarios, allowing managers to quantify uncertainty and make more strategic decisions.

 

Make better decisions, even in the face of uncertainty

The AI infrastructure boom presents extraordinary opportunities—but only for investors who can accurately quantify risk in an environment of extreme volatility. @RISK transforms the question from “What will this cost?” to “What is the probability distribution of outcomes, and which scenarios justify the capital commitment?” That shift from false precision to probabilistic clarity is the difference between informed strategy and expensive guesswork.

See how @RISK can help you model variability and make better capital investment decisions. Request a demo or buy @RISK today.

Buy now