When building a manufacturing facility, a reliable and resilient power source is key. As social and governmental pressure to decarbonize the manufacturing sector intensifies, more companies are moving away from fossil fuel-powered plants – for example, a 2024 article in Automotive Manufacturing Solutions notes that major automobile manufacturers in Europe, Asia, and North America “continue to innovate and adopt green energy practices,” such as plants powered in whole or in part by solar, wind, and hydroelectric energy.
In addition to addressing sustainability, renewable energy can also result in more cost-effective operation in a time of global energy supply turbulence. A March 2024 article in the Engineering Science & Technology Journal observes that “[t]he integration of renewable energy into the manufacturing sector is not just a step towards mitigating environmental impact but also a strategic move that opens up a wealth of opportunities for innovation, competitiveness, and growth.”
However, making the switch to sustainable energy sources comes with its own set of risk factors. In a recent Lumivero webinar, “Illuminating Probabilistic Risk in Renewable Energy,” energy industry consultant Manuel Carmona walked attendees through modeling methods that can help evaluate the different types of operational and financial risks for renewable energy projects in manufacturing. In this case study, we’ll discuss the highlights of Carmona’s presentation, define and describe how to use Monte Carlo simulation, and present the risk modeling examples used to make better decisions in the field of renewable energy.
Manuel Carmona is a certified Project Management Institute Risk Management Professional (PMI-RMP) with more than 25 years of experience in managing projects within the energy and technology sectors. As a trainer with EdyTraining, he has helped manufacturers utilize @RISK and XLSTAT to run Monte Carlo simulations using Excel – simulations that can be used for various types of probabilistic analysis.
Probabilistic analyses can be used to answer a wide range of questions raised at the outset of a renewable energy project, including:
To generate these analyses, Carmona recommends building models using @RISK by Lumivero, a probabilistic risk analysis tool that lets you create Monte Carlo simulations while using Excel spreadsheets.
Monte Carlo simulation is a statistical analysis technique first developed by scientists working on the Manhattan Project during World War II. It’s used to create probabilistic forecasts that account for risk and random chance within complex systems. Finance, meteorology, insurance and defense are just a few of the industry sectors that make use of Monte Carlo simulations to inform decision making.
Powered by software such as @RISK, Monte Carlo simulation can quickly generate thousands of simulations using random numbers that account for a wide range of variables, generating many different outcomes along with the probability of their occurrence.
Creating probabilistic analysis models with a Monte Carlo add-in for Microsoft Excel is typically a simple process, and unlike traditional deterministic modelling techniques, it generates a complete range of possible values.
Most analysts use single-point estimates (also known as mean values or most likely values) for their estimations, then perform a series of best- and worst-case scenario calculations using formulas to determine the impact of a specific variable on a project.
For example, an analyst might begin their calculations by setting the cost of building an energy plant as high as estimates indicate it will go, generate an output, and then work in increments to gradually define potential impacts of a project. Manually adjusting the parameters for each calculation allows for refinement of the outcomes, but it cannot produce a complete range of potential outcomes.
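To make the contrast concrete, here is a minimal sketch (in Python rather than @RISK) of the difference between a single-point estimate and a Monte Carlo range for the same quantity. Every figure in it is an illustrative assumption, not a value from Carmona's model.

```python
import numpy as np

rng = np.random.default_rng(42)
N = 10_000  # number of simulated scenarios

# Deterministic approach: one "most likely" value per input.
price_ml, production_ml, opex_ml = 80.0, 9_000.0, 450_000.0   # EUR/MWh, MWh/yr, EUR/yr (assumed)
deterministic_margin = price_ml * production_ml - opex_ml

# Probabilistic approach: each input becomes a distribution, so the output
# becomes a range of outcomes with probabilities attached.
price = rng.normal(80, 12, N)                        # energy price uncertainty
production = rng.normal(9_000, 900, N)               # annual production uncertainty
opex = rng.triangular(400_000, 450_000, 600_000, N)  # optimistic / most likely / pessimistic
margin = price * production - opex

p5, p95 = np.percentile(margin, [5, 95])
print(f"Deterministic margin: EUR {deterministic_margin:,.0f}")
print(f"Monte Carlo margin:   mean EUR {margin.mean():,.0f}, 90% range EUR [{p5:,.0f}, {p95:,.0f}]")
```

The deterministic line returns a single number; the simulation returns a distribution from which any percentile or probability can be read.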
With Monte Carlo simulation, analysts can develop comprehensive risk analyses more quickly – analyses that project risk into the future to determine whether an investment is worth making. These analyses can also be adjusted to model many different types of risk or uncertainty including cost assessments across the life of a project.
With @RISK, project managers can build models and run thousands of simulations under different scenarios – allowing them to quickly model probabilities across a wide range of variables. Plus, the interface allows for rapid generation of graphics that help stakeholders visualize findings. Options include tornado graphs showing advanced sensitivity analysis, stress testing, scatter plots, and more.
Carmona notes that because @RISK integrates with Microsoft Excel, creating a probabilistic analysis is as simple as selecting any other Excel pre-programmed function – making the creation of models a straightforward process. By integrating these various uncertainties into a comprehensive @RISK model, project managers can perform Monte Carlo simulations, running thousands of iterations to assess a project's financial performance under different conditions and scenarios.
This approach provides valuable insights to project stakeholders into the range of possible outcomes, the probability of meeting certain financial targets, and the identification of critical risk factors that may significantly impact the project's success and objectives.
Carmona demonstrated how @RISK could be used to analyze uncertainties and costs for building a renewable power plant for a manufacturing facility. The model plant would utilize solar panels and wind turbines to generate energy and would need to reliably produce eight to 12 megawatt-hours (MWh) of energy per day.
For the purposes of this exercise, Carmona assumed that the plant was well-sited and that its solar panels and turbines were appropriately sized. The first question to answer was: based on probabilistic analysis, how much power would the plant usually generate in a given day?
To begin answering this question, it was necessary to develop models that incorporated different types of uncertainty. The analysis began by looking at the solar plant. Three variables that could impact energy generation include:
Using power output data from the solar panel manufacturer and weather data for the city in which the plant was to be built (Madrid, Spain), Carmona used @RISK to generate a distribution curve for power output based on solar irradiation. On a completely cloudless day, the plant could be expected to produce 12–13 MWh of energy during daylight hours. Given typical weather conditions at the site, what would the power output of the plant most likely be?
Carmona used the @RISK add-in and Monte Carlo simulation with random variables to model a dynamic environment in which cloud cover changed throughout the day. Before running the simulation, he defined the cloud cover using a normal distribution, which required some adjustment to ensure that the model did not generate cloud cover values greater than 100%.
Cloud cover was not the only variable to account for, however. Temperature affects the power output of a solar cell as well – the higher the temperature, the lower the output. While a perfectly cloudless day should result in maximum power output, an exceptionally hot day can actually impair generation. The model therefore needed to account for temperature correction.
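The sketch below shows one way these two adjustments can be layered together: cloud cover drawn from a normal distribution truncated to the 0–100% range, and a temperature correction applied to the clear-sky output. The parameter values are illustrative assumptions, not the figures from Carmona's @RISK model.

```python
import numpy as np
from scipy.stats import truncnorm

rng = np.random.default_rng(0)
N = 100_000

# Cloud cover (%): a normal distribution truncated so values stay within [0, 100].
mu, sigma = 35.0, 25.0                              # assumed mean and spread of daily cloud cover
a, b = (0 - mu) / sigma, (100 - mu) / sigma
cloud_cover = truncnorm.rvs(a, b, loc=mu, scale=sigma, size=N, random_state=rng)

# Cell temperature (deg C) and a typical crystalline-silicon temperature coefficient.
cell_temp = rng.normal(38, 8, N)                    # assumed cell temperature distribution
GAMMA = -0.004                                      # about -0.4% output per deg C above 25 C

clear_sky_mwh = 12.5                                # assumed clear-sky daily output (MWh)
output = clear_sky_mwh * (1 - 0.75 * cloud_cover / 100)   # assumed cloud attenuation factor
output *= 1 + GAMMA * (cell_temp - 25)              # temperature correction

print(f"Mean daily output: {output.mean():.1f} MWh; "
      f"P(output >= 11 MWh) = {(output >= 11).mean():.1%}")
```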
Since @RISK utilizes the Latin hypercube method, a statistical sampling technique that cuts down on computer processing time, Carmona was able to quickly run 100,000 Monte Carlo simulations. A plot of the results was generated, resulting in a probability distribution curve for the daily production of electricity. The simulation projects that on 47.9% of days, energy production would fall within the desired range of 11 to 13 MWh.
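For readers curious about the sampling technique itself, the short sketch below uses SciPy's quasi-Monte Carlo module to illustrate what Latin hypercube sampling does: it stratifies each input dimension so that fewer iterations are needed to cover the input space evenly. The distribution parameters are the same illustrative assumptions used above, not values from the webinar.

```python
import numpy as np
from scipy.stats import norm, qmc

N = 1_000
sampler = qmc.LatinHypercube(d=2, seed=1)
u = sampler.random(N)                  # stratified uniform samples in [0, 1)^2

# Map the stratified uniforms onto the model's input distributions
# (illustrative parameters, not values from the webinar).
cloud_cover = norm.ppf(u[:, 0], loc=35, scale=25).clip(0, 100)
cell_temp = norm.ppf(u[:, 1], loc=38, scale=8)

print(f"LHS cloud-cover mean: {cloud_cover.mean():.2f}%")
```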
The next variable to account for was equipment. Power generated by renewable energy means still needs to travel from where it is generated to the facility it needs to power, and sometimes external equipment, such as electrical transformers, may fail. There are also other types of failure events to consider such as damage to solar panels from hailstorms, module soiling, and vegetation overgrowth. The next stage of the probabilistic analysis process was to model for these types of external failure events.
Carmona allowed for four types of failure event – one occurring on average every 250 days of operation, one every 500 days, one every 750 days, and one every 1,000 days. In every instance, a failure event means that the plant produces nothing for the rest of the day.
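A hedged sketch of how such failure events can be layered onto the daily production simulation is shown below. Each event type is treated as an independent daily Bernoulli trial with probability one over its mean interval; the assumption that a failure strikes at a uniformly random time of day (so a random fraction of that day's output survives) is ours, not a detail from the webinar.

```python
import numpy as np

rng = np.random.default_rng(7)
N = 100_000
daily_output = rng.normal(9.5, 2.0, N).clip(min=0)   # placeholder solar output (MWh)

failure_intervals = [250, 500, 750, 1_000]           # mean days between each type of event
any_failure = np.zeros(N, dtype=bool)
for interval in failure_intervals:
    any_failure |= rng.random(N) < 1.0 / interval    # independent daily Bernoulli trials

# Assumed: the failure occurs at a uniformly random time of day, so a random
# fraction of that day's output is kept; days without failures keep everything.
surviving_fraction = np.where(any_failure, rng.random(N), 1.0)
adjusted_output = daily_output * surviving_fraction

print(f"P(any failure on a given day): {any_failure.mean():.2%}")
print(f"P(output >= 11 MWh): before {(daily_output >= 11).mean():.1%}, "
      f"after {(adjusted_output >= 11).mean():.1%}")
```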
Using @RISK, Carmona ran a second series of simulations that accounted for external failure events, then generated a new distribution graph to show how these new variables could further impact the probability that the plant would produce enough power on a given day. The red bars indicate the original simulation; the blue bars, the second simulation. This second visualization clearly shows that failure events were likely to reduce the probability of the plant producing enough energy by 3%.
The model shows that on most days the solar plant would be able to produce around 9 MWh of power.
So far, the probabilistic models indicated that the solar plant would not be able to produce enough electricity to meet demand on most days. However, adding wind generation would allow the factory to charge storage overnight, when the solar plant was not producing.
Modeling the power outputs for the wind plants followed a similar process to modeling solar output. This process included gathering weather data about average wind speeds in the area where the plant was to be built and manufacturer data about how much power the turbines generate at a given wind speed. The chart below shows what percentage of output the wind turbines generate (Y-axis) given a specific wind speed (X-axis). Note that faster wind speeds will actually hinder power generation after a certain point, just as a day that is sunny but also very hot will reduce the effectiveness of solar panels.
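The sketch below implements a generic turbine power curve of the shape described above: no output below the cut-in speed, a ramp up to rated power, flat output at rated power, and a shutdown above the cut-out speed. The speeds, rated power, and Weibull wind-speed parameters are illustrative assumptions, not the manufacturer or site data used in the webinar.

```python
import numpy as np

CUT_IN, RATED, CUT_OUT = 3.5, 13.0, 25.0    # wind speeds in m/s (assumed)
RATED_POWER_MW = 2.0                        # per-turbine rated power (assumed)

def turbine_power_mw(wind_speed_ms: np.ndarray) -> np.ndarray:
    """Piecewise power curve: output as a function of wind speed."""
    ramp = ((wind_speed_ms - CUT_IN) / (RATED - CUT_IN)) ** 3   # cubic ramp-up region
    fraction = np.select(
        [wind_speed_ms < CUT_IN,            # below cut-in: no output
         wind_speed_ms < RATED,             # ramp-up region
         wind_speed_ms <= CUT_OUT],         # at or above rated speed: full output
        [0.0, ramp, 1.0],
        default=0.0,                        # storm shutdown above cut-out
    )
    return RATED_POWER_MW * fraction

# Example: Weibull-distributed wind speeds, a common assumption for site wind data.
rng = np.random.default_rng(3)
speeds = 8.0 * rng.weibull(2.0, 100_000)    # shape 2, scale 8 m/s (assumed)
print(f"Mean output per turbine: {turbine_power_mw(speeds).mean():.2f} MW")
```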
After running simulations for the wind plant, Carmona was able to demonstrate that the combination of both generation methods – wind and solar – had a high probability of meeting electricity demand on most days.
That accounts for the actual power generation issues. What about the costs of operating and maintaining the plant?
Carmona decided to conduct an NPV analysis comparing the lifetime cost of operating the renewable energy plant with and without performance-monitoring software. With the software, plant performance would be higher, producing approximately 6% more energy. Would licensing and operating the monitoring software result in actual savings over time?
The table below was used to generate a 13-year forecast that also accounted for estimated plant inspection and maintenance costs, which would take place every three to four years.
Then, Monte Carlo analysis was performed to generate an average NPV. This showed that the average NPV of the plant without the monitoring software would be €134,000, while the average NPV with the monitoring software would be approximately €169,000 over the same period.
The result: running the plant with monitoring software would result in an average savings of €35,300.
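A simplified sketch of this kind of comparison is shown below: simulate uncertain annual revenue with and without the roughly 6% uplift, subtract periodic inspection costs and an annual licence fee, and discount over the 13-year horizon. The revenue, cost, and discount-rate figures are illustrative assumptions; only the 6% uplift and the 13-year horizon come from the case study.

```python
import numpy as np

rng = np.random.default_rng(11)
N, YEARS, RATE = 10_000, 13, 0.07            # iterations, horizon, discount rate (rate assumed)
LICENCE_COST = 15_000                        # assumed annual software licence cost (EUR)
INSPECTION_COST = 40_000                     # assumed cost of each periodic inspection (EUR)
discount = (1 + RATE) ** -np.arange(1, YEARS + 1)

def simulate_npv(uplift: float, licence: float) -> np.ndarray:
    # Uncertain annual net revenue, lifted by the software's production gain.
    revenue = rng.normal(90_000, 20_000, size=(N, YEARS)) * (1 + uplift)
    maintenance = np.zeros(YEARS)
    maintenance[3::4] = INSPECTION_COST      # an inspection roughly every fourth year
    cashflows = revenue - maintenance - licence
    return (cashflows * discount).sum(axis=1)

without_sw = simulate_npv(uplift=0.0, licence=0.0)
with_sw = simulate_npv(uplift=0.06, licence=LICENCE_COST)
print(f"Mean NPV without monitoring software: EUR {without_sw.mean():,.0f}")
print(f"Mean NPV with monitoring software:    EUR {with_sw.mean():,.0f}")
```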
What about the risks and costs involved with building the plant? Fortunately, DecisionTools Suite’s ScheduleRiskAnalysis allows project managers to assess time and cost uncertainty. The program can import details of projects that have been scheduled in either of two popular project management tools: Microsoft Project or Primavera P6 from Oracle. Project managers can use @RISK to import their project schedules into Excel and carry out Monte Carlo simulation to determine the impact of construction delays or cost overruns.
For renewable energy projects, @RISK empowers project managers and decision makers to make informed choices by generating Monte Carlo distributions in Excel. From determining power output to evaluating the value of investing in add-ons like monitoring software, @RISK can help you develop robust probabilistic analyses for a variety of risks and provide clear visualizations of results.
Find out how you can generate better risk analyses for your renewable energy projects – or any other projects – within Microsoft Excel. Request a free trial of @RISK today. You can also watch the full webinar on-demand!
SOURCES
Energy Risk Modelling, Roy Nersesian.
Manuel Carmona, EdyTraining Ltd.
Researchers used @RISK and PrecisionTree to model the likelihood of a successful evacuation during a volcano eruption.
University of Bristol’s Environmental Risk Research Centre (BRISK) adds new dimension to modeling volcanic risk in Guatemala.
Conducting a quantitative risk assessment is often a difficult process, requiring data that is sparse or even unobtainable. With volcanoes, the effects of uncertainty are accentuated by the potentially high costs of making a wrong call.
Guatemala has many active volcanoes, but none are as close to large populations as ‘Volcán de Fuego’, potentially one of the most dangerous volcanoes in Central America. Many farmers live and work in its shadow because its fertile slopes provide the best ground for coffee growing in the region. Large eruptions in 1974 fortuitously did not lead to any deaths, but buried in the volcano’s geological history are signs of ominous behavior.
The volcano has been very active over the last few years, with many small eruptions taking place every day, and there is concern that this activity could signal a build-up towards larger eruptions in the future. The “Instituto Nacional de Sismologia, Vulcanologia, Meteorologia e Hidrologia” (INSIVUMEH) regularly monitors activity at the volcano; however, despite the gallant efforts of the scientists there, no formalized risk assessments are carried out, mostly due to lack of funding and resources.
Recent work using Lumivero's (previously Palisade) The DecisionTools Suite however, is now enabling volcanologists to quantify the nature of one of the threats from the volcano to peoples’ lives. As an integrated set of programs for risk analysis and decision making under uncertainty, The DecisionTools Suite running in Microsoft Excel, allows access to Monte Carlo simulation and other advanced analytics quickly and simply on the desktop.
Conventional risk assessments attempt to model the probability of a hazard and combine that with the vulnerability of the population, to create societal risk curves and estimated values of Individual Risk per Annum (IRPA). For many of the people living on the slopes and indeed the authorities, knowing the potential number of deaths or cost from an eruption is not entirely useful, as little planning control or mitigation can be carried out. In an attempt to increase the usefulness of the risk modeling to the end-user (the authorities and people living near the volcano), BRISK has looked at the vulnerability in a different way.
Normally, volcanic risk assessments assume that the whole population is present in a location when a hazard hits. However, new work by BRISK has modeled the likelihood of a successful evacuation, using both @RISK and PrecisionTree, by inputting several variables obtained through a process of structured expert judgment. These variables – which include the time between a possible eruption and a possible hazard hitting a location, along with communication times from authorities and evacuation times – are each estimated with an uncertainty distribution by the experts. The expert views are then weighted and pooled. The variables are then combined in a logic tree within PrecisionTree, with the end node being either evacuation or no evacuation, and the probability of these outcomes quantified along with their uncertainties. When fed back into the @RISK (Hazard × Vulnerability) model, the effect of a potential evacuation on the risk is very clear.
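As a rough illustration of that logic (not BRISK's actual elicitation or model), the sketch below pools hypothetical expert distributions for hazard arrival time, communication delay, and evacuation time, and counts an iteration as a successful evacuation when the communication delay plus evacuation time is shorter than the hazard arrival time. All weights and distribution parameters are invented for the example.

```python
import numpy as np

rng = np.random.default_rng(5)
N = 100_000

def pooled_lognormal(expert_params, weights, size):
    """Linear opinion pool: each iteration samples from one expert's lognormal
    distribution, chosen with probability proportional to that expert's weight."""
    weights = np.asarray(weights, dtype=float)
    choice = rng.choice(len(expert_params), size=size, p=weights / weights.sum())
    mus = np.array([p[0] for p in expert_params])[choice]
    sigmas = np.array([p[1] for p in expert_params])[choice]
    return rng.lognormal(mus, sigmas)

# (mu, sigma) of log-minutes for three hypothetical experts, per variable.
hazard_arrival = pooled_lognormal([(3.4, 0.4), (3.6, 0.3), (3.2, 0.5)], [0.5, 0.3, 0.2], N)
comms_delay    = pooled_lognormal([(2.3, 0.5), (2.5, 0.4), (2.0, 0.6)], [0.5, 0.3, 0.2], N)
evac_time      = pooled_lognormal([(3.0, 0.4), (3.1, 0.3), (2.8, 0.5)], [0.5, 0.3, 0.2], N)

evacuated = comms_delay + evac_time < hazard_arrival
print(f"P(successful evacuation): {evacuated.mean():.1%}")
```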
When looking in more detail at the model outputs from the logic tree, it became clear where the sensitivities were within the system. For example, it may be for a given location that the amount of time between a warning and the hazard hitting is crucial, or it may be that the time taken to evacuate is crucial. This new way of modeling volcanic risk informs better planning and more effective mitigation strategies.
Jonathan Stone, a researcher at the University of Bristol, working with colleagues Prof Willy Aspinall and Dr Matt Watson, said “DecisionTools Suite has proved to be invaluable in the work we are doing with INSIVUMEH, and potentially very useful for those living and working around Volcán de Fuego.”
Professor Willy Aspinall has been using @RISK software for some time in his work analyzing the risk of volcanic eruptions and earthquakes around the globe.
Originally published: Dec. 5, 2020
Updated: June 7, 2024
With more than a trillion dollars risked worldwide on sports gambling every year, there certainly is interest—on behalf of bettors, at least—in turning the odds away from bookmakers. DePaul University professor Clayton Graham drafted and entered a research submission entitled, “Diamonds on the Line: Profits through Investment Gaming,” for the Ninth Annual MIT Sloan Sports Analytics Conference in February 2015. The prize-winning paper, which topped the “Business of Sports” track and was in the elite “Final Four,” utilized Lumivero’s (previously Palisade) DecisionTools Suite in determining critical probabilities to create such a model for baseball wagering.
Clayton Graham is an Adjunct Professor at DePaul University’s Driehaus College of Business and a Management Consultant with Advanced Analytics LLC. His professional focus is applying statistics and mathematical modelling to analyze, evaluate and develop operational strategies and tactics. Clients include school districts, universities, major corporations, sports entities, and governmental agencies.
The MIT Sloan Sports Analytics Conference provides an annual forum for over 3,000 executives, leading researchers, and students to discuss the increasing role of analytics in the global sports industry. Graham’s research sought to calculate a team’s probability of winning an individual baseball game, and the economic consequences of each wager based upon the game’s betting line. Additionally, he attempted to determine the optimal bet size, subject to the risk tolerances of the investor.
The research required five critical steps:
This initial step required a pragmatic selection of input and output variables. First, Graham considered a very fundamental question: What is the batter’s purpose? Simply, it is to get on base and drive other runners around to score. Those inputs include the proportional measures of singles, doubles, triples, home runs and bases-on-balls. While there are other inputs, such as hit-by-pitch or a dropped third strike by the catcher, Graham found those situations to be of little significance in the prediction model. At the heart of the model is the history of each batter/pitcher matchup.
It is worth noting that Graham wanted a more accurate representation of scoring beyond runs per game, and focused, instead, on runs per out. His reasoning was that not all baseball games have the same number of innings (outs). If the home team is ahead after the visiting team completes their half of the ninth inning, the game is over after completing just 24 outs, as opposed to a full game consisting of 27. Should a game go into extra innings, the number of outs will exceed 27.
Once the runs per out were determined, Graham then applied a “park factor” adjustment, which measured the difference between runs scored in a team’s home park and in road games. To equalize the influence of the park factors between competing teams, the expected value of run production needed to be scaled upward or downward. Since the starting pitcher and batters had likely played in all fields previously, the park factor was a scaled convex combination of the two parks’ factors.
Graham also determined that the average runs per out during the last three innings of a typical game equalled 91 percent of the runs tallied in the first six innings. Once he had the ability to calculate the potential run output, he could apply that data to the betting line of each game.
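As a worked illustration of those adjustments (with invented numbers, not Graham's data), the sketch below scales a raw runs-per-out rate by a convex combination of the two parks' factors and applies the 91% rate to the final three innings of a 27-out game.

```python
expected_runs_per_out = 0.17                       # team's raw expected runs per out (assumed)
home_park_factor, road_park_factor = 1.08, 0.96    # assumed park factors
w = 0.6                                            # assumed weight on the home park

# Convex combination of the two parks' factors, applied to the raw scoring rate.
park_factor = w * home_park_factor + (1 - w) * road_park_factor
adjusted_rpo = expected_runs_per_out * park_factor

# Expected runs over a 27-out game, with the final 9 outs scored at 91% of the
# rate observed over the first 18 outs (innings one through six).
expected_runs = adjusted_rpo * 18 + adjusted_rpo * 0.91 * 9
print(f"Park-adjusted expected runs per game: {expected_runs:.2f}")
```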
Popular American sports, such as basketball and football, utilize a point spread that determines how many points one team is favored over the other in a given contest. For example, if the Boston Celtics are 3-point favorites over the New York Knicks, a bettor wishing to place a wager on the Celtics would need the team to win by at least four points to win the bet. If the Celtics win by three points, the bettor’s initial wager is returned and if they win by less than three points or lose the game, the bettor loses the wager.
However, baseball’s “Money Line” is different than the “Spread” commonly used in basketball and football. Consider the following betting line if the Detroit Tigers hosted the Seattle Mariners:
Seattle -113
Detroit 105
In this scenario, Seattle, by virtue of having the lower value, is the favored team. The bettor would risk $113 to win $100. In contrast, betting $100 on Detroit would return $105.
Using formulas based on the batter-pitcher matchups, Graham determined the probability that either team would win the game. Then, Graham applied those probability functions to economic outcomes of a sports gaming investment. Two measures were derived; the first is the standard expected value of return on investment (EVROI), which yields either a positive or negative number. The second is the “edge”, which is simply the difference between the probability of winning and the implied probability of winning from the betting line. With these, one has the two principal elements for investment: Probability of winning and competitive edge.
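The money-line arithmetic behind the implied probability and the edge can be sketched in a few lines. The conversion formulas below are the standard ones for American odds; the 0.55 model probability is a placeholder, not a figure from Graham's paper.

```python
def implied_probability(money_line: int) -> float:
    """Convert an American money line into the bookmaker's implied win probability."""
    if money_line < 0:                        # favorite, e.g. -113
        return -money_line / (-money_line + 100)
    return 100 / (money_line + 100)           # underdog, e.g. +105

p_seattle = implied_probability(-113)         # about 0.531
p_detroit = implied_probability(105)          # about 0.488 (the two sum past 1.0 by the vig)

model_p_detroit = 0.55                        # hypothetical model estimate, not Graham's figure
edge = model_p_detroit - p_detroit
print(f"Implied P(Detroit wins): {p_detroit:.3f}, model edge: {edge:+.3f}")
```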
Next, Graham utilized the probability of winning and the edge to derive the level of investment (percent of Bankroll) subject to investor-imposed risk tolerances. This significantly limited the number of games worthy of investment.
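Graham's exact bankroll rule is not reproduced here; the sketch below instead uses the Kelly criterion, a standard way to turn a win probability and a money line into a stake expressed as a fraction of bankroll, capped to reflect investor risk tolerance.

```python
def kelly_fraction(win_prob: float, money_line: int, cap: float = 0.05) -> float:
    """Kelly bet fraction for an American money line, capped at `cap` of bankroll."""
    # b = net profit per unit staked if the bet wins (decimal odds minus one).
    b = 100 / -money_line if money_line < 0 else money_line / 100
    f = (b * win_prob - (1 - win_prob)) / b
    return max(0.0, min(f, cap))              # never bet when the edge is negative

# Example: a hypothetical 55% win probability on Detroit at +105.
print(f"Suggested stake: {kelly_fraction(0.55, 105):.1%} of bankroll")
```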
Once the model was complete, an initial bankroll of $1,000 was used to place wagers (about two per day) on Major League Baseball games, beginning on June 16, 2014 and through the conclusion of the World Series on October 29, 2014. Key results included:
* Only 23 percent of MLB games warranted a wager, based on probability of winning and the edge.
* Games with a favorable determination won 68 percent of the time.
* Wagers resulted in a 35 percent return on daily capital put at risk (bet).
* The initial $1,000 investment grew by more than 1,400 percent during the season (a profit of $14,252).
“[Lumivero's] DecisionTools Suite was invaluable to the success of this exciting project, as it quickly and easily computed the myriad statistical scenarios,” said Graham. “Baseball has a seemingly infinite set of possibilities with each at-bat, and the intricacies of determining what may happen would be impossible to determine manually, with any degree of expediency. DecisionTools Suite is also very easy to use and intuitive because it operates in Microsoft Excel. I can say, without hesitation, that this project would not have been possible without DecisionTools Suite and the technical support [Lumivero] offers.”
Originally Published: Oct. 28, 2022
Updated: May 2, 2024
Pharmaceutical companies can use PrecisionTree (part of the DecisionTools Suite that creates multi-phase decision trees) to investigate the best clinical trial sequencing pathway for a novel drug being pursued for multiple potential diseases (indications). They can then conduct sensitivity analysis on the dominant decision, determine the maximum cost of an initial proof-of-concept (POC) study, and communicate findings to executives. This case study analyzed three strategic alternatives (pathways) for indication sequencing of an anti-inflammatory drug being pursued for three diseases: asthma, inflammatory bowel disease (IBD), and lupus erythematosus (LE).
Drug development is an expensive process that takes place over very long timeframes with high technical risk and commercial uncertainty. Advancing from the drug discovery phase through clinical trials to Food and Drug Administration (FDA) or European Union (EU) approval can take more than a decade and cost hundreds of millions of dollars.
Most modern compounds can be used as therapies for different diseases, and companies must decide on the order in which they will conduct clinical trials for the different diseases a drug may treat. This process, known as indication sequencing, can influence a company’s direction and future success.
Using PrecisionTree to create decision trees that model a drug’s phase-specific probability of success against different diseases, and then conducting a sensitivity analysis to determine how far those probabilities can deviate before the dominant decision changes, helps companies break down the complexity and uncertainty of this decision at each stage of drug development and make informed choices.
Background
Dr. Richard Bayney is President and Founder of Project & Portfolio Value Creation (PPVC), a consulting boutique that provides services, training, and education in strategic planning, decision analysis, and portfolio management. He has more than 20 years of experience in Decision Analysis and Portfolio Management in the pharmaceutical industry with roles at Johnson & Johnson Pharmaceutical Research and Development, Bristol-Myers Squibb, Bayer, and Merck & Co.
He uses PrecisionTree and @RISK, Lumivero solutions contained within the DecisionTools Suite, to help pharmaceutical clients make informed decisions about indication sequencing and drug development.
Using PrecisionTree
When a pharmaceutical company is investigating anti-inflammatory compounds that can treat multiple indications, PrecisionTree can help create multiple decision pathways that map out possible strategies for clinical trials, the likelihood of success of each strategy, and the estimated return on investment for each alternative. Three pathways are considered: testing the drug compound for effectiveness in treating asthma first via a proof-of-concept (POC) trial, testing for IBD efficacy first with its own POC, or testing for LE efficacy first with its own POC.
The decision tree diagrams shown here indicate the following:
Once a dominant option has been identified, a sensitivity analysis can help interrogate the robustness of the analysis and determine any breakpoint—e.g. how low a phase-specific judgmental probability of success can go before the dominant decision changes.
In the PrecisionTree model for the IBD-first sequence above, the model shows a 70% chance that the drug under investigation will be successful in treating IBD during the POC phase. The illustration below shows how low that percentage would need to drop before one of the other pathways became a better option. The expected value of the IBD-first sequence only drops below the expected values of the other two decision pathways when the probability of success of the IBD POC phase is lower than 45%.
Strategy Region of Decision Tree ‘IND SEQ 1.0 (2) (4)’. Sensitivity analysis to IBD POC probability of success (POS).
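The sketch below reproduces the shape of that one-way sensitivity analysis with placeholder numbers: it sweeps the POC probability of success for the IBD-first pathway and reports where its expected value falls below the best alternative. The payoff, cost, and alternative-pathway values are illustrative assumptions; only the 70% base-case POC probability comes from the case study.

```python
import numpy as np

POC_COST = 30.0                  # cost of the IBD POC study, USD millions (assumed)
VALUE_IF_POC_SUCCEEDS = 400.0    # expected downstream value given POC success (assumed)
VALUE_IF_POC_FAILS = 0.0
BEST_ALTERNATIVE_EV = 150.0      # EV of the better of the asthma-/LE-first pathways (assumed)

def ibd_first_ev(p_poc_success: float) -> float:
    """Expected value of the IBD-first pathway as a function of POC success probability."""
    return (p_poc_success * VALUE_IF_POC_SUCCEEDS
            + (1 - p_poc_success) * VALUE_IF_POC_FAILS
            - POC_COST)

probs = np.linspace(0.0, 1.0, 101)
evs = np.array([ibd_first_ev(p) for p in probs])
breakpoint_p = probs[np.argmax(evs >= BEST_ALTERNATIVE_EV)]
print(f"Base case (70% POC success): EV = {ibd_first_ev(0.70):.0f}")
print(f"IBD-first stays dominant while POC success probability is at least {breakpoint_p:.0%}")
```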
This process can also be used to conduct any two-way sensitivity analysis, e.g. looking at the potential breakpoints for the combined probability of success of the IBD POC and the follow-up phase, Phase III, that involves testing the drug for statistical effectiveness against asthma as well. This generates a different strategy region visualization that allows for identification of breakpoints based on two variables.
PrecisionTree makes it possible to test the conditional probability of success based on prior information in a process known as Bayesian Revision — a statistical calculation based on a theorem developed by the 18th-century mathematician Thomas Bayes. In clinical practice, attending physicians routinely prescribe diagnostic assessments to revise their prior judgments that patients displaying symptomatic illness are afflicted by one condition or another. Based on the historical sensitivity and specificity of such diagnostic techniques, they will then revise their clinical judgments, which may result in altered treatment.
In the case of pharmaceutical indication sequencing, Bayesian Revision can help determine how much to pay for the initial POC study for the IBD decision sequence. In this case, a prior judgment (prior probability) of Phase 3 success is estimated at 20% (bottom arm of decision tree) but is revised (posterior probability) to almost 37% if the IBD POC study is successful with its own judgmental probability of success of 38% (upper arm of decision tree).
Expected Value of Sample Information – determining how much to pay for a POC study that is informative but imperfect (II).
The illustration above shows the expected values for a Phase III trial result conducted with a POC study (upper decision node) and the expected values for a Phase III trial result without a POC study (lower decision node). The expected value of sample information (EVSI) can be determined by subtracting the value of the lower node from the upper node. In this case, the result is $72 million.
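A compact sketch of the Bayesian Revision and EVSI steps is shown below. The POC study's sensitivity and false-positive rate are assumptions (they are not stated in the case study), chosen to be consistent with the quoted 20% prior, 38% POC probability of success, and roughly 37% posterior; the two node values used for the EVSI line are placeholders, not the figures behind the $72 million result.

```python
P_PHASE3 = 0.20                 # prior judgmental probability of Phase III success
P_POC_GIVEN_SUCCESS = 0.70      # assumed P(POC succeeds | drug would succeed in Phase III)
P_POC_GIVEN_FAILURE = 0.30      # assumed P(POC succeeds | drug would fail in Phase III)

p_poc = P_POC_GIVEN_SUCCESS * P_PHASE3 + P_POC_GIVEN_FAILURE * (1 - P_PHASE3)   # ~0.38
posterior = P_POC_GIVEN_SUCCESS * P_PHASE3 / p_poc                              # Bayes' theorem, ~0.37
print(f"P(POC success) = {p_poc:.2f}; P(Phase III success | POC success) = {posterior:.2f}")

# Expected Value of Sample Information: value of deciding with the POC result in
# hand minus the value of deciding without it (placeholder node values, USD millions).
ev_with_poc, ev_without_poc = 160.0, 88.0
print(f"EVSI = {ev_with_poc - ev_without_poc:.0f}")
```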
PrecisionTree identified the dominant indication sequencing strategy, provided a sensitivity analysis across multiple parameters, and even allowed for an estimate of the maximum amount the drug developer should be willing to pay for a POC study.
PrecisionTree has allowed Dr. Bayney to use decision trees in Excel to decompose the complexity of indication sequencing phase by phase. By conducting probabilistic analysis for different indication sequences, determining breakpoints through sensitivity analysis, and estimating the costs of conducting an initial POC study, PrecisionTree makes it possible to offer pharmaceutical executives the information they need to make the best possible decisions under conditions of high risk and uncertainty when scheduling clinical trials that will impact a company’s direction for years.
To get started fast, download our free example model, Portfolio Evaluation of Multi-phase Project, and a free 15-day trial of DecisionTools Suite. This interactive example model looks at the cash flow of projects with multiple phases, such as those typically found in the pharmaceutical industry.
Interested in learning more about how PrecisionTree, included in DecisionTools Suite, can help guide your pharmaceutical indication sequencing? Request a free demo today!
You can also join the Lumivero community to connect with current PrecisionTree users and discuss how they’ve used it to improve their decision making.
From the Channel Island fox to an African antelope called the gerenuk, it is a diverse group of endangered animals that Palisade’s @RISK and PrecisionTree® are working to protect. The popular tools are being introduced to scientists and veterinarians by the IUCN-World Conservation Union’s Conservation Breeding Specialist Group. They are used in the group’s ongoing workshops to help conservationists identify and analyze the risks of disease in their efforts to protect endangered animals.
What makes the group’s use of @RISK and PrecisionTree unusual is that, while many businesses and researchers use Palisade software to deal with mountains of information, the conservationists use it to compensate for a lack of information. Disease transmission is a tricky business with many unknowns, and there is a sad lack of solid data on the diseases that occur naturally in wild populations. Add to all this the fact that many programs to rescue struggling species—reintroduction, captive breeding, creation of habitat corridors—involve moving surviving animals or transporting tissues for reproductive purposes. These programs, though well-intentioned, invariably expose the animals to additional risks of disease—uncertainties that would derail traditional analyses. But @RISK is perfectly suited to deal with them.
Dr. Douglas Armstrong of Omaha’s Henry Doorly Zoo is one of the leaders of the Conservation Breeding Specialist Group’s workshop series. He says there are hundreds of conservation projects that need to account for the risks of disease. These risks have a great many sources, including naturally occurring diseases, diseases introduced by domestic animals, and immune system vulnerability in animals that are relocated in new surroundings. Other factors compound the risks, like fragmentation and resulting isolation of habitat, catastrophic climatic change, and loss of genetic diversity. These are just a few, and each of the risks varies in probability depending on the situation of the specific animal population. All of this makes for tricky modeling.
“What we needed,“ says Armstrong, “was a set of practical, comprehensive, broadly applicable tools to anticipate what the problems might be. I think we have that now, and @RISK and PrecisionTree are really key components in that set of tools. They are easy to learn, and they allow us to enter reasonable ranges of possible values rather than fixed estimates. It’s an effective approach to minimizing the risks of disease in our efforts to conserve endangered species.”
Based in Reykjavík, Iceland, Enex provides renewable energy services, specialising in the development of geothermal power plants to generate electricity and provide district heating. The company searches for opportunities to harness geothermal power, along with the design, engineering, procurement and construction of a variety of power plants including Combined Heat and Power plants (CHP), Flash Steam and Binary Cycle power plants, and Geothermal District Heating Systems.
Enex also acts as an investor and forms joint ventures with local partners to finance, develop, construct and operate renewable energy projects. Its key markets include Germany, Eastern Europe, and the US.
The company is planning to build a geothermal power plant in Europe. In preparation for this, and to maximise expected financial gains, Enex conducted a study using Palisade’s DecisionTools Suite, a set of decision and risk analysis software tools for Microsoft Excel. PrecisionTree®, Palisade’s decision analysis tool, and @RISK, its Monte Carlo simulation tool, enabled Enex to determine the optimal timescale for procuring equipment during the construction of this new geothermal power plant, along with the optimal plant size.
Geothermal power plants are developed to harness the thermal energy stored within the earth’s crust. This means that the site identified to build the plant has to be suitable for its construction.
The key geological factors that impact the success or failure of a geothermal power plant project are the temperature of the reservoirs, the flow of liquid from the wells (or enthalpy and pressure), the depths to which wells need to be drilled, and the chemical conditions of the brine within the wells. These define the production capacity of a geothermal power plant and influence the investment and operation costs of the project.
As these are natural conditions, there is a huge amount of uncertainty for geothermal power plant suppliers about whether the selected site will be suitable for such a plant.
Enex has identified a site for its new power plant based on an educated guess drawn from the historical and geological data available for the region in question. However, Enex will only be able to determine whether the site is viable for a geothermal power plant when it starts drilling the wells. This means that Enex will have to incur a proportion of the overall financial investment in the project upfront, without any certainty of success.
In a geothermal project, the drilling of the wells and the power plant equipment account for the bulk of the costs. In fact, the roughly 70 to 100 components purchased for a power plant can account for anything between 30 and 80 per cent of the total cost of the project, which translates to approximately 1,500–3,500 per kilowatt. The drilling of the wells accounts for the balance of the cost.
Further, given the engineering precision and complexity of power plant equipment, some of the critical and most expensive components, such as turbines, heat exchangers and pumps, have very long lead times. For instance, the time from ordering a turbine until delivery can be over 75 weeks.
To ensure that construction and commissioning of the plant will be executed smoothly, with minimum delay and maximum profitability, the timing for ordering equipment with long lead times is critical.
However, identifying the optimal time for ordering components is very difficult. For example, if Enex pre-orders equipment before the first drilling of the well, there is a possibility that the company may find out after the drilling is completed (which can last up to four months) that the well needs to be closed due to sub-optimal well conditions pertaining to temperature, flow and depth of the well. Or that the results of the drilling will not be suitable for the components that were pre-ordered.
For this study, Enex made two key assumptions. Firstly, the plant would comprise two production wells. Secondly, if drilling results in sub-optimal conditions in a well twice in a row, the project would be considered a failure on account of the location not being suitable for a geothermal power plant.
The critical question that Enex needed to answer in this study was whether to procure the critical components with the longest manufacturing and shipping lead time before the first drilling, after the last drilling or between the first and last drilling.
Enex used PrecisionTree to model the different types of events that could potentially occur if the equipment was ordered before or after the drillings. For instance, if equipment is ordered before drilling the first well, whilst this will ensure that the equipment is available in time for use, Enex will be taking the risk of making a substantial investment without being certain that there would eventually be a need for it. Also, the company will need to order the components based on certain specifications that may or may not be suitable for the conditions of the wells. The possibility of a well being unsuitable and therefore being closed after initial drilling is considerable. Therefore, if the pre-ordered equipment is unsuitable, Enex will incur cancellation fees, which can be in the region of 20 per cent of the component cost. Cumulatively, across the different types of equipment ordered, this can amount to a substantial cost.
Additionally, after the last drilling, Enex will potentially have more information on the state of the well and will therefore be able to better determine the equipment specifications to optimise plant efficiency. For example, prior to drilling, Enex might order a turbine that was of a higher capacity than actually required, making it a more expensive component than necessary. Alternatively, the pre-ordered turbine may be under-specified, which could significantly impact the production levels of the power plant and therefore profitability of the project.
These different scenarios highlighted in the PrecisionTree model were then evaluated by Enex using Monte Carlo simulation in @RISK. This showed all potential outcomes, as well as the likelihood of every single event occurring.
For example, using historical and geological data, Enex used @RISK to simulate the production index and temperature to estimate the potential distribution of the peak production capacity of the plant.
Further, @RISK enabled Enex to estimate the optimal plant size. An accurate estimate of plant size is important, as it can have huge ramifications for the costs and profitability of the plant, especially when pre-ordering of equipment is being considered. For instance, if the pre-ordered equipment is under-sized or less powerful than required, Enex would incur a cost from lost production, as the components would be unable to optimise production in the plant. On the other hand, if the pre-ordered equipment is over-sized or more powerful than required, Enex would incur an unnecessary investment cost for the larger components, as well as a potential loss of efficiency if the gap between the component sizes estimated during the design phase and the actual well conditions proved too large.
Using a combination of PrecisionTree and @RISK, Enex weighed up the advantages and disadvantages of pre-ordering and post-ordering the equipment. It was able to conclude with confidence that the optimal time for the company to order equipment for the project in question is before the first drilling of the well: extra investment costs would need to increase by approximately 400 per cent to warrant a change in this decision. There is an 89 per cent probability of success for a plant size of 14.5 megawatts (MW).
For this plant size, by pre-ordering equipment, the plant will be able to start production sooner and therefore reduce the time between construction and when the plant begins to generate income.
Viktor Thorisson, analyst at Enex, commented, “Using Palisade’s sophisticated technology, its PrecisionTree and @RISK tools have highlighted that this rather untraditional approach of ordering the critical components before the first drilling is likely to give us a higher expected profitability. We expect drilling for this plant to start in Europe shortly.”
Distributions used: Normal, Uniform and BetaGeneral, as these were best suited to the geological and historical data.
Sensitivity analysis: used to analyse the impact of efficiency reduction, lost production and extra investment on the NPV.
Quantitative techniques used: to define the probability of success to input into the PrecisionTree model, a function of production index and temperature was plotted that would give a 10 per cent Internal Rate of Return. The simulation was then performed using @RISK to assess how many iterations would result below that line, as shown in Graph 1.
For defining what temperature and what production index (PI) should be considered failure, a function was determined that would give 10% Internal Rate of Return on equity (IRR), which represents the minimum IRR for a project to be classified as successful.
When the PI approaches 0, the temperature necessary to obtain 10% IRR approaches infinity. Therefore, all values of PI below 0.2 and to the left of the plotted line are defined as failure.
Graph 1: Definition of failure as a function of temperature and PI
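The sketch below illustrates that counting step with invented inputs: simulate (production index, temperature) pairs, define the 10%-IRR frontier as a curve whose required temperature grows as PI shrinks, and report the fraction of iterations that clear it. The distributions and the frontier constant are placeholders, not Enex's fitted values.

```python
import numpy as np

rng = np.random.default_rng(9)
N = 100_000

production_index = rng.normal(0.55, 0.2, N).clip(min=0.01)   # assumed PI distribution
temperature = rng.normal(160, 25, N)                          # assumed reservoir temperature (deg C)

C = 60.0                                       # assumed constant defining the 10%-IRR frontier
required_temp = C / production_index           # temperature needed for a 10% IRR at each PI
success = (production_index >= 0.2) & (temperature >= required_temp)

print(f"Simulated probability of success: {success.mean():.0%}")
```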
The graph below shows that the expected extra investment cost due to pre-ordering (i.e. the mean expected extra investment cost due to under- or over-sizing the equipment) would need to increase by approximately 400% from the calculated base value to change the optimal decision. However, it can be seen that the slope of the line representing the expected NPV of the project, if the first well is drilled before ordering components, changes when the value of the X-axis is 200%. This figure represents a 200% increase in the extra investment cost due to pre-ordering of the components. This is because, at that point, the optimal decision in the PrecisionTree model is to wait until all drilling is complete, so the NPV is unaffected by the extra investment cost resulting from pre-ordering equipment.
Graph 2: The sensitivity of the NPV for two decision paths
DNV GL is an international organisation for ship classification, safety and quality control. It uses PrecisionTree and @RISK software (part of the DecisionTools Suite, Palisade’s complete toolkit of risk and decision analysis tools) to determine the risk of an incident occurring to a ship or its systems and the consequences should one occur. This enables cost-benefit analysis so that informed decisions can be made about the best strategies to mitigate risk events.
Formed from the merger of DNV and Germanischer Lloyd, DNV GL is the world’s largest ship and offshore classification organisation, the principal technical advisor to the global oil and gas industry, and a leading expert for the energy value chain, including renewables and energy efficiency. The company offers a wide range of independent technical expertise including surveying and the safety certification of ships, offshore structures, wind turbines and other industrial units, as well as the certification of quality systems.
DNV GL’s core purpose is to safeguard life, property and the environment by enabling its customers to advance the safety and sustainability of their business.
In this context, DNV GL is a key player in major research and development projects for the shipping industry. A core element of their focus is enhancing safety and security in shipping by pro-active use of risk assessment and cost-benefit techniques.
A key activity for DNV GL’s Safety Research unit on national, international and internal projects is risk assessment for ships and ship systems. The aim is to identify risk reduction measures and deploy cost-benefit analysis to determine their economic feasibility. For instance, navigational aids will reduce the probability of collision and grounding accidents, while damage stability measures will limit the consequences of such an event, should it occur.
Risk assessments in recent years have differentiated ships by type, such as containership, crude oil tanker or general cargo ship. Each project sees the development of quantitative risk models, which can represent larger groups of the world fleet and the evaluation of the risks using internationally-agreed processes. Most projects are carried out in accordance with the Formal Safety Assessment (FSA) process developed by the International Maritime Organisation (IMO), which specifies the relevant steps for risk analysis, risk evaluation and risk reduction. Typically this requires the development of quantitative risk models that use fault trees for modelling the causes of events, and event trees to model the consequence sequences resulting from initial accident events.
DNV GL’s Safety Research unit uses Palisade’s decision tree software, PrecisionTree, to develop the risk models that apply to: (1) crew; (2) environment; and (3) property. (Risk to persons is usually quantified in terms of ‘potential loss of life’ or individual risk to the crew or passengers. Risk to the environment refers to potential damage by events such as oil spills or release of dangerous cargo. Risk to property relates to loss of, or damage to, the ship.)
In the case of investigating the risk for a specific ship type, typically the overall risk model contains sub-models addressing five accident categories: collision; contact; fire/explosion; grounding; and foundering. Each sub-model describes the relationship between the initial incident (e.g. an accident such as a collision between two ships) and the consequences. These models are based on ‘high-level’ event sequences, as illustrated in Figure 1.
Figure 1: Generic high-level event sequence for collision risk model
In order to limit the complexity of the risk model, representative scenarios are developed. The consequences assigned to the scenarios also consider a limited number of potential outcomes that are regarded to be representative for that specific scenario. For instance, the amount of oil spilled due to cargo hold damage will vary depending on factors such as: where on the ship the damage occurs (i.e. the number of cargo holds damaged and whether these are above or below waterline); the subdivision of cargo tanks (volume per tank); whether the tanks are filled to full capacity; and the density of the oil. In the risk model this variety of consequences will be represented by, for instance, two spill sizes.
The sequences in DNV GL’s PrecisionTree event trees are populated with occurrence probabilities (e.g. dependent probability that a ship in a collision is struck) that are derived from analysing casualty reports. These reports provide results from ship accident investigations and are available on a commercial basis or from publicly-available databases such as IMO Global Integrated Ship Information System (GISIS). Alternatively, casualty estimates can be developed using numerical models such as probabilistic damage models (estimating the probability of a certain level of damage and how likely the ship is to survive this) or passenger evacuation simulation (the modelling of passenger flow on board a vessel to estimate the percentage of passengers evacuated in a given time interval).
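The sketch below shows, with placeholder figures, how an event tree of the kind in Figure 1 turns an initiating collision frequency and the branch probabilities derived from casualty reports into a potential-loss-of-life estimate. None of the frequencies, probabilities, or consequence values are DNV GL's calibrated parameters.

```python
COLLISION_FREQ = 5.0e-3         # initiating collisions per ship-year (assumed)
P_STRUCK = 0.5                  # probability the ship is the struck vessel (assumed)
P_SINKS_IF_STRUCK = 0.04        # conditional probability the struck ship sinks (assumed)
FATALITIES_IF_SINKS = 4.0       # expected fatalities for that end state (assumed)
FATALITIES_OTHERWISE = 0.02     # expected fatalities for the milder end states (assumed)

# Multiply probabilities down each branch of the event tree to get end-state frequencies.
end_states = {
    "struck and sinks": COLLISION_FREQ * P_STRUCK * P_SINKS_IF_STRUCK,
    "struck, stays afloat": COLLISION_FREQ * P_STRUCK * (1 - P_SINKS_IF_STRUCK),
    "striking vessel": COLLISION_FREQ * (1 - P_STRUCK),
}

# Weight each end-state frequency by its consequence to get potential loss of life (PLL).
pll = (end_states["struck and sinks"] * FATALITIES_IF_SINKS
       + (end_states["struck, stays afloat"] + end_states["striking vessel"]) * FATALITIES_OTHERWISE)
print(f"PLL: {pll:.2e} fatalities per ship-year")
```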
However, although databases provide key input for the development of quantitative risk models, they also have some shortcomings, including:
This results in increased uncertainty in the parameter values used in quantitative risk models.
In order to consider this uncertainty and make it visible in risk evaluation and cost-benefit assessments, DNV GL’s Safety Research unit uses @RISK to perform Monte Carlo simulation. This factors in the uncertainty of input parameters for the risk models, and illustrates the impact on the estimates provided for the assessment of a current level of risk, as well as the cost-efficiency of risk-mitigating measures.
PrecisionTree and @RISK are part of the DecisionTools Suite, Palisade’s complete toolkit of risk and decision analysis add-ins for Microsoft Excel. The software tools work together to help DNV GL’s Safety Research team get essential insights from their data.
“PrecisionTree makes it straightforward to quickly develop event trees. These are essential to our analysis when looking at how to reduce both the number of shipping incidents and the consequences should they occur,” explains Dr. Rainer Hamann, Topic Manager Risk Assessment, Safety Research at DNV GL SE. “At the same time we have to be realistic about the accuracy of our data inputs and, by means of distributions and Monte Carlo simulation, @RISK enables us to be clear about the level of uncertainty contained in each model. Embedded in the Microsoft environment, it also allows us to incorporate standard Excel features, making it easy to use.”
In-depth risk analysis lays the groundwork for cost-benefit analysis – i.e. the assessment of whether the benefits of risk reduction strategies will be higher than the cost of implementing them. Shipping companies can therefore make informed decisions on complex issues such as averting fatalities and avoiding oil spills (with the cost of the latter increasingly taking into account clean-up costs and economic loss to related businesses such as fishing and tourism).
Risk analysis plays a key role in the design of ships and their systems in order to continually increase levels of safety.
AC Immune SA, a biopharmaceutical company focused on developing product candidates to treat neurodegenerative diseases, harnesses the power of Lumivero's products, specifically @RISK and PrecisionTree, to assess the value of the company’s development candidates and, from these, the overall enterprise value. Using @RISK, AC Immune calculates the risk-adjusted net present values (rNPVs) for its preclinical and clinical drug candidates. Using PrecisionTree, the company values key decisions along the development pathway. Thanks to Lumivero, AC Immune has been able to manage risk, define prediction intervals, communicate clearly to internal stakeholders, and ask more ‘what if’ questions based on their models.
AC Immune SA is a clinical-stage biopharmaceutical company leveraging their two proprietary technology platforms to discover, design and develop novel proprietary medicines and diagnostics for prevention and treatment of neurodegenerative diseases (NDD) associated with protein misfolding.
Misfolded proteins are generally recognized as the leading cause of NDD, such as Alzheimer’s disease (AD) and Parkinson’s disease (PD), with common mechanisms and drug targets, such as amyloid beta (Abeta), Tau, alpha-synuclein (a-syn) and TDP-43. AC Immune’s corporate strategy is founded upon a three-pillar approach that targets (i) AD, (ii) focused non-AD NDD including Parkinson’s disease, ALS and NeuroOrphan indications and (iii) diagnostics. They use their two unique proprietary platform technologies, SupraAntigen and Morphomer to discover, design and develop novel medicines and diagnostics to target misfolded proteins.
AC Immune uses @RISK to assess the Company’s enterprise value, calculating risk-adjusted net present values (rNPVs) for certain preclinical and clinical product candidates. The company then combines each respective value and determines the ultimate “Sum of the Parts” for an overall indication of the company’s price per share. The company uses this internally generated value and bridges to potential variances in their share price (Nasdaq: ACIU) or price targets published by their covering analysts.
Each product candidate that AC Immune elects to value includes many uncertain variables which impact the projected net cash flows in the development and potential commercialization period for the product candidate. The typical inputs AC Immune uses in their @RISK models include, but are not limited to:
“Certain of these variables can be material and are difficult to derive a point estimate for, or can be difficult to otherwise source,” explains Julian Snow, AVP of Financial Reporting.
Typically, Snow uses a PERT distribution for his @RISK models of the risk-adjusted NPV. The resulting simulations give AC Immune a 90% prediction interval for the value, given the assumptions. In this example, the mean rNPV is expected to be around CHF 523m (illustrative example only).
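The sketch below shows what a PERT-driven rNPV simulation of this kind can look like: minimum / most likely / maximum assumptions feed PERT distributions, each trial draws peak sales, a probability of success, and a development cost, and the 5th–95th percentile range of the result serves as the 90% prediction interval. Every figure is an illustrative placeholder, not an AC Immune model input.

```python
import numpy as np
from scipy.stats import beta

rng = np.random.default_rng(21)
N = 50_000

def pert(low: float, mode: float, high: float, size: int) -> np.ndarray:
    """Sample a (standard) PERT distribution via its Beta representation."""
    a = 1 + 4 * (mode - low) / (high - low)
    b = 1 + 4 * (high - mode) / (high - low)
    return low + beta.rvs(a, b, size=size, random_state=rng) * (high - low)

peak_sales = pert(300, 600, 1_200, N)           # peak annual sales, CHF millions (assumed)
prob_success = pert(0.15, 0.25, 0.40, N)        # cumulative probability of success (assumed)
dev_cost = pert(80, 120, 200, N)                # remaining development cost, CHF millions (assumed)
margin, years, rate = 0.35, 10, 0.10            # assumed commercial margin, sales years, discount rate

annuity = (1 - (1 + rate) ** -years) / rate     # present value of 1 per year for `years` years
rnpv = prob_success * peak_sales * margin * annuity - dev_cost

lo, hi = np.percentile(rnpv, [5, 95])
print(f"Mean rNPV: CHF {rnpv.mean():.0f}m; 90% prediction interval: CHF [{lo:.0f}m, {hi:.0f}m]")
```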
AC Immune also uses tornado charts to showcase the impact of certain assumptions on the resultant value, all other variables held constant. “The company can then decide how best to minimize the impact of certain factors via additional research into the assumption or other potential adjustments,” says Snow.
In addition to @RISK, AC Immune also uses PrecisionTree. “PrecisionTree allows us to set up dynamic decision trees linked to underlying cash flows to understand the risk/return at a specific point in time along the development timeline,” says Snow. “Additionally, it helps us weight a decision such as to partner or not partner a potential product candidate.”
Other decisions include the expansion of a program, addition of a second indication for research, or assistance in license and collaboration deal structuring. “Assessing the value of one decision or another is valuable for the company,” Snow says.
For AC Immune, our products significantly improve the quality of the decision-making process, particularly with regard to allocation of resources and improving understanding of the magnitude of uncertainty on key assumptions.
“@RISK software allows our company to sensitize key variables using various distribution methods, as well as convey the sensitivity in impact on the ultimate risk-adjusted net present value,” says Snow. “The software also conveys results in clear output graphics for easy reporting to relevant stakeholders.”
Prior to using our products, Snow and his team relied on Excel functionalities to calculate the relevant data. “We viewed this process as static and more cumbersome to maintain,” says Snow. “Therefore, with @RISK, AC Immune was able to enhance its internal valuation and reporting capabilities.”
According to Snow, other companies in this space do not typically apply probabilistic analysis in their valuation approaches. “Most peers use more static Excel models that cannot capture or answer a more robust set of questions that arise over a long development timeline,” says Snow.
In addition, when comparing the cost-benefit of programs, assessing internal funding needs, assessing potential licensing and collaboration terms and other matters relevant to understanding the potential financial return from a product candidate, “AC Immune is able to ask and answer more questions than peers as a result of the use of the software,” Snow says.
Thanks to Lumivero's products, AC Immune has seen both tangible and intangible benefits, including:
Thanks to data-driven, probabilistic analysis, AC Immune’s cutting-edge drug discovery technologies are better enabled to potentially help patients around the world.
Metaproject was asked by the Chilean government to advise on the best way to rescue the miners. Manuel Viera developed a new model in PrecisionTree to predict the best way to rescue the miners, calculating the method that would subject them to the least risk.
On August 5, 2010, a wall column in the San José mine in northern Chile collapsed, trapping 33 miners 700 meters underground. A second fall two days later blocked access to the lower parts of the mine. The challenge was how to rescue the miners as quickly as possible, while also ensuring that their mental and physical health was maintained while the rescue mission was planned and implemented.
The rescue operation was very risky, not least because another landslide could occur. Causal factors included geological faults, a lack of accurate information about the layout of the mine's interior, and insufficient knowledge of the mine's structural geology. The additional drilling required to rescue the miners could have caused walls to collapse further as a result of microfractures and faults in the rock.
As a result, it was initially believed that the operation to save the miners would be a very long process. The first estimates suggested that they would have to wait around five months to be brought back to the surface, although this was then revised to three or four months. However, 65 days after the first rockfall, one of the rescue drills broke through to the underground chamber, and the first miner was brought to the surface on October 13, 2010. The rescue operation was completed 22 hours later.
During the crisis, mining expert Manuel Viera, the CEO and managing partner of engineering consultancy Metaproject, was asked by the Chilean government to advise on the best way to rescue the miners. Mr. Viera developed a new model to identify the rescue method that would subject the miners to the least risk.
The magnitude of risk can be defined as the product of exposure, probability, and severity.
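Expressed compactly, with symbols chosen here purely for illustration, the relationship is R = E × P × S, where R is the magnitude of risk, E the exposure, P the probability of the hazardous event, and S the severity of its consequences. For example, an option that halves the miners' exposure time, with probability and severity unchanged, halves the magnitude of risk.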
Metaproject used Palisade’s decision tree analysis tool, PrecisionTree, to evaluate the various rescue alternatives from a technical and economic perspective. This enabled an informed decision to be made about which option posed the least risk to the miners.
A key decision was whether to raise the miners to 300 meters below the surface, or to keep them in their current location near the refuge at 700 meters. There were also several drilling options for reaching the trapped miners:
A: Use of a Strata 950 probe to drill a ‘well’ 66 centimeters (cm) in diameter through which the miners could be lifted. The greatest risk was that the well would collapse when the cage containing the miners was raised to the surface. This rescue operation would take around three or four months, depending on the quality of the rock and on other obstructions that were, at the time, unknown.
B: Because as many rescue alternatives as possible needed to be considered, the use of the larger Schramm T-130 drilling probe was also explored. This would enable a wider well (between 66 cm and 80 cm in diameter) to be drilled. The risks were similar to those in A but, because the larger well meant that the rescue would be quicker and therefore reduce the miners' exposure, the magnitude of risk was lower. The timescale was similar.
C: Use of the proven DTH (Down The Hole) 26” probe or oil drilling RIG-421. Fast and powerful, this could potentially reduce the rescue time to one month, which would have decreased the miners’ exposure to risks.
D: A vertical tunnel to the miners could have been built. This would have required construction and was therefore more expensive and slower – however, it was an effective option. The key risks were ‘slabs’ (ventilation problems during the building process, as the tunneling would have been carried out ‘blind’), general ventilation issues in the confined space, and the miners' potential loss of patience.
E: An alternative (horizontal) tunnel could have been excavated from an already-drilled well under the collapsed site to reach the miners directly in the refuge. From there they could walk along the tunnel and be brought to the surface.
The main issues and risks that had to be factored into any calculation were as follows:
PrecisionTree presented a matrix of statistical results for each branch of the tree (i.e., each rescue option). This made it possible to ascertain, for example, that for some of the drilling options it was feasible to move the miners in two stages, while for others it was not, due to logistical problems.
The PrecisionTree analysis showed that the best option for rescuing the miners was to use the Schramm T-130 (Option B), followed by Option C, the DTH QL 200 (which was replaced by the drilling RIG-421). In addition, it was recommended that both options be used at the same time: using two techniques would diminish the risk and increase the reliability of the rescue mission. The option of bringing the miners to 300 meters first was rejected.
Manuel Viera explains: “Palisade’s PrecisionTree is an excellent tool for modelling and conceptualizing real-life problems and analyzing alternatives that are technically feasible and economically viable in an Excel format. This can be applied to complex problems that have a big impact, and was therefore ideal for a major disaster such as the trapped miners.”
The actual rescue operation used three drills at the same time: Drill A, the Strata 950 raise bore machine; Drill B, the Schramm T-130 machine; and Drill C, the RIG-421 machine. As predicted by the Metaproject PrecisionTree analysis, Drill B was the first to reach the miners.
Asher Drory of the University of Toronto’s Rotman School of Management uses @RISK in his graduate-level Financial Management course.
Understanding how to use Monte Carlo simulation to account for risk in decision-making is quickly becoming a required skill for today’s business leaders, says Asher Drory, Adjunct Professor of Finance at University of Toronto’s Rotman School of Management.
“Many leading corporations are now using Monte Carlo simulation in their business cases,” Professor Drory says. “Students who want a leg up with such corporations should seek out all opportunities to get experience in working with Monte Carlo simulation.”
In his Financial Management course, Drory uses @RISK to teach some 200 graduate students each year how to use Monte Carlo simulation in analyzing working capital and capital budgeting decisions. Monte Carlo simulation furnishes the decision-maker with a range of possible outcomes and the probability that each will occur for any choice of action.
For example, Drory’s classes use @RISK and Monte Carlo simulation to look at:
* How forecasts of financial statements are needed to determine future funding requirements in working capital decisions.
* How forecasts of future free cash flows are required and how risk must be assessed in capital budgeting analysis (a minimal sketch of this kind of analysis follows below).
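As an illustration of the second point, here is a minimal Monte Carlo sketch of a capital budgeting decision in Python. The project size, cash flow distributions, and discount rate are invented placeholders rather than figures from the course; @RISK performs the same kind of simulation directly in an Excel model.

```python
import numpy as np

rng = np.random.default_rng(7)
N = 50_000                      # Monte Carlo iterations
years = 5
discount_rate = 0.10            # illustrative hurdle rate

# Hypothetical project: uncertain initial outlay and uncertain annual free cash flows
initial_outlay = rng.triangular(90, 100, 120, N)             # $ thousands
cash_flows = rng.normal(loc=30, scale=8, size=(N, years))    # free cash flow per year

# Discount each year's simulated cash flow and sum to get an NPV per iteration
discount_factors = 1 / (1 + discount_rate) ** np.arange(1, years + 1)
npv = cash_flows @ discount_factors - initial_outlay

print(f"Mean NPV:            {npv.mean():8.1f}")
print(f"P(NPV < 0):          {(npv < 0).mean():8.2%}")
print(f"5th-95th percentile: {np.percentile(npv, 5):.1f} to {np.percentile(npv, 95):.1f}")
```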
Separately, Drory and his students use Lumivero's (previously Palisade) PrecisionTree software in modeling decision tree analysis for new product development. The students have access to the entire DecisionTools Suite which is loaded on all of the computers in the Rotman Finance Laboratory.
“All key financial decisions such as investing, operating and financing decisions can benefit from Monte Carlo simulation,” says Prof. Drory, who has taught at the University of Toronto for 21 years. “I ran across @RISK about 5 years ago when I was looking for PC-based Monte Carlo simulation tools. @RISK has a straightforward and easy-to-use interface.”
A consultant with Triangle Economic Research, an Arcadis company, Tim Havranek works with Fortune 500 clients to identify and quantify their potential environmental liabilities and to simulate the least costly routes to meeting their responsibilities.
Large industrial companies operating out of many different locations and facilities often have numerous actual or potential environmental legacies that linger for decades as financial liabilities. Longtime DecisionTools® user Tim Havranek has made a successful career out of helping companies manage their “environmental risk portfolios” cost-effectively. A consultant with Triangle Economic Research, an Arcadis company, Havranek works with Fortune 500 clients to identify and quantify their potential environmental liabilities and to simulate the least costly routes to meeting their responsibilities. Many of the complex cases that he and his associates at TER work on involve hundreds of millions of dollars, multiple stakeholders, and a substantial amount of modeling. As he has for years, Havranek relies on @RISK and PrecisionTree® to compare the scenarios and the decision paths that guide his clients’ decisions.
In one recent case, a major industrial manufacturer sold approximately 15 of its active plants to another manufacturer. The terms of these sales included the provision that the original corporate owner would retain responsibility for historical environmental impacts. As time passed, environmental claims against the original corporate owner continued, and the corporation sought appropriate means of reducing cost and risks, such as receiving regulatory closure and/or selling the properties and liabilities to other parties.
Also, the historical environmental impacts at times potentially limited the ways that the new owners could manage and expand the properties. This often led to disagreements. Such disagreements were anticipated during the sale, and the purchase agreement included an arbitration clause to address issues as they arose.
The corporation identified three possible solutions:
Havranek used Triangle’s time-tested procedure for framing the model. He met with all the stakeholders to identify all known cost elements, inherent uncertainties, and future potential liabilities for each of the three alternatives. The model included more than 100 unknowns. In order to pinpoint those issues on which the company would need to prevail in arbitration, Havranek and his team performed sensitivity analyses on the cost drivers identified by the framing meeting participants. The model was then run using @RISK and PrecisionTree.
The model had three output cells, one for each alternative. The outcome was intriguing: the least costly alternative was to stay with the asset purchase agreement and arbitrate as needed. The model indicated an expected value savings of more than $30 million. An outside actuarial group verified and validated the model using proprietary actuarial software. In the end, the actuarial group’s projections agreed not only with Triangle’s inputs and assumptions but also with its findings.
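The structure of such a model (one simulated total-cost output per alternative, each built up from uncertain cost elements) can be sketched as follows. The alternatives are named generically and all distributions are invented placeholders; the actual Triangle model contained more than 100 uncertain inputs and was run with @RISK and PrecisionTree.

```python
import numpy as np

rng = np.random.default_rng(11)
N = 100_000   # Monte Carlo iterations

def total_cost(elements):
    """Sum a list of uncertain cost elements, each given as a (low, most likely, high)
    triangular distribution in $ millions. All values are invented placeholders."""
    cost = np.zeros(N)
    for low, mode, high in elements:
        cost += rng.triangular(low, mode, high, N)
    return cost

# Three generically named alternatives, each an "output cell" in the real model
alternatives = {
    "Alternative 1": total_cost([(10, 20, 45), (5, 12, 30), (8, 15, 40)]),
    "Alternative 2": total_cost([(30, 55, 90), (10, 18, 35)]),
    "Alternative 3": total_cost([(40, 60, 110), (5, 10, 25)]),
}

for name, cost in sorted(alternatives.items(), key=lambda kv: kv[1].mean()):
    p5, p95 = np.percentile(cost, [5, 95])
    print(f"{name}: expected cost ${cost.mean():.1f}m (90% range ${p5:.0f}m-${p95:.0f}m)")
```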
Although other companies may turn to proprietary software to parse environmental risks, Havranek sees no reason to use custom software to accommodate the many complex inputs he includes in his models. He likes the convenience of working in Excel and being able to share his results with clients. But most important, he says, “I am always trying to streamline my models. To simplify simulations you need the flexibility that proprietary tools don’t always offer. These tools have that flexibility without any sacrifice of power. @RISK and PrecisionTree have all the power you need.”
U.S. Army Corps of Engineers (USACE) divisions use @RISK and the DecisionTools Suite for dam and levee safety, asset management, cost estimation, construction, SMART Planning, regulatory functions, program management, project management, and more.
The U.S. Army Corps of Engineers (USACE) comprises over 37,000 dedicated civilians and military personnel who deliver engineering services across 130 countries worldwide. With environmental sustainability as a guiding principle, the Corps is involved in projects as diverse as construction, natural resource management, energy and sustainability, capacity building, and more.
Many USACE divisions are already using @RISK and the DecisionTools Suite for engineering projects, including the Institute for Water Resources (IWR), the Hydrologic Engineering Center (HEC), the Risk Management Center (RMC), and USACE divisions in Buffalo, Great Lakes and Ohio River, Hanover, Huntington, Kansas City, Philadelphia, St. Louis, Sacramento, San Francisco, and Walla Walla. See the Greenup Locks and Dam case study for one example.
Several of the successful applications involve Lumivero Custom Development partnering with USACE engineers to build custom software solutions.
The U.S. Army Corps of Engineers turned to Lumivero Consulting to help incorporate an uncertainty element into dam and levee safety models. Lumivero consultants built a tailored @RISK-based application that determines each project's exposure to loss of life, damage, and structural fragility. The Excel-based model uses @RISK’s Monte Carlo simulation to probabilistically assess potential risk areas.
With the safety models in place, USACE project leaders also wanted an efficient way to ensure all their engineers could implement Monte Carlo simulation into any new project analysis. Lumivero Custom Development added a tailor-made interface to the model, allowing engineers throughout the agency to quickly and easily create models, simulations, and reports in an automated, standardized manner. The solution is used today by USACE for all new dam and levee projects.