At the core of any simulation model is a set of probabilities. Are they the right probabilities? Are they actually representative of the uncertainties you are facing? And how do you know?
In this talk, Tim Nieman will discuss some of the pitfalls in assessing probabilities, especially the ever-present cognitive biases, and how to minimize (but not eliminate!) these issues. The webinar will include some fun polls to test and potentially illuminate your own biases.
The 2024 Lumivero Virtual Conference on Sept. 25-26 brought together a diverse global community of data professionals, researchers, academics, and students for sessions featuring data intelligence, research trends, data analysis, risk management, and student success. With more than 6,200 registrations from 161 countries, the event highlighted Lumivero's ever-growing impact across industries such as education, social sciences, and public health.
Conference Highlights:
Missed it? You can still catch the sessions on demand! In this article, we’ll highlight some of the key sessions and impactful themes from the event to help you get started.
The conference focused on key themes addressing the evolving needs of researchers, data analysts, and professionals. Sessions covered practical strategies, the role of artificial intelligence (AI), innovative approaches to research and data management, and more.
These themes not only addressed the pressing needs of today’s professionals, but also provided valuable tools and strategies to help attendees stay ahead in their respective fields.
The 2024 Lumivero Virtual Conference featured dynamic keynote sessions led by thought leaders at the forefront of research and data analysis. These sessions offered deep insights into the latest trends, challenges, and opportunities in the industry, making them must-watch experiences for all!
Missed it live? All sessions are available on demand! Expand your skills, stay ahead of trends, and explore new strategies to make data-driven decisions.
There are many different types of waste in manufacturing – waste that can cost the economy many billions of dollars per year. For example, a 2022 McKinsey report on food loss (food wasted during harvest and processing) estimated a global cost of $600 billion per year for growers and manufacturers. Unplanned downtime due to breakdowns of production equipment is another type of waste, and a 2023 analysis of the cost of downtime by Siemens (p. 2) estimates that this wasted time costs Fortune Global 500 companies 11% of their annual turnover.
Management experts have tried to solve the problem of waste in manufacturing for generations. Today, many organizations have adopted Lean Six Sigma, a popular managerial methodology that helps improve processes, reduce waste, and ensure the quality of products.
In this article, you'll gain clear definitions of Lean and Six Sigma, a deeper understanding of the principles of Lean Six Sigma, and details on the Lean Six Sigma certifications available to practitioners.
First, let’s define Lean Six Sigma. As mentioned above, Lean Six Sigma is a management methodology that aims to streamline operations, boost efficiency, and drive continuous improvement. While it has its roots in manufacturing, Lean Six Sigma has also been adopted by other industry sectors including finance and technology.
Lean Six Sigma originates from two separate methodologies, Lean and Six Sigma. Both these methodologies have their own rich histories.
Lean principles have their roots in the automotive manufacturing sector. According to an article by the Lean Enterprise Institute, Lean principles emerged from the Toyota Production System (TPS), which was developed in Japan after WWII.
Taiichi Ohno, a production expert and Executive Vice President at Toyota, is considered the father of TPS. According to his entry in the Encyclopedia Britannica, Ohno developed a production system he called “just-in-time” manufacturing. The Toyota Europe website describes the just-in-time approach as “making only what is needed, when it is needed, and in the quantity needed, at every stage of production.”
When the TPS began to be studied and implemented in the United States, it evolved into Lean manufacturing. “Lean” was coined by then-MIT researcher John Krafcik, and defined in the 1996 book Lean Thinking by the researchers James Womack and Daniel Jones. In the introduction to their book, Womack and Jones describe Lean as a methodology which “provides a way to specify value, line up value-creating actions in the best sequence, conduct these activities without interruption whenever someone requests them, and perform them more and more effectively.” (p. 6) Lean principles have since moved beyond industrial production to construction, technology, and other industries.
According to an article by Six Sigma education provider Six Sigma Online, Six Sigma is a data-driven method developed by engineers at Motorola in the 1980s to reduce defects in manufacturing processes. The term “Six Sigma” refers to a process that produces “no more than 3.4 defects per million opportunities, which equates to six standard deviations (sigma) between the process mean and the nearest specification limit.”
Six Sigma spread to other businesses, achieving mainstream popularity when Jack Welch, then-CEO of General Electric, embraced it as a key part of GE's business strategy in the 1990s. In 2011, it was formally standardized by the International Organization for Standardization (ISO).
In the early 2000s, organizations realized that combining Lean’s focus on waste reduction with Six Sigma’s focus on data-driven process improvement could create a powerful, complementary approach to process optimization. Lean Six Sigma was born as a hybrid methodology focused on both eliminating waste (Lean) and reducing defects and variation (Six Sigma). Or, as François Momal put it in his webinar on Monte Carlo simulation for Lean Six Sigma, “when we're talking Six Sigma, we mainly talk about quality, and when we're talking Lean, we mainly talk about speed.”
The methodology of Lean Six Sigma revolves around key principles drawn from both foundations. These principles guide how businesses can identify problems, find solutions, and sustain improvements. In an extract from the book Lean Six Sigma for Leaders published on the Chartered Quality Institute’s website, authors Martin Brenig-Jones and Jo Dowdall list these principles:
Focus on the Customer
Lean Six Sigma begins with ensuring that the organization understands the customer’s needs and expectations, then aligns processes to meet those requirements. This means eliminating activities that do not directly contribute to customer satisfaction.
Identify and Understand the Process
Before improving any process, it's essential to understand how it works. Lean Six Sigma uses tools like process mapping to visualize workflows and identify bottlenecks or unnecessary steps. The aim is to achieve a smooth, consistent process that maximizes efficiency.
“Manage by Fact” to Reduce Variation and Defects
Six Sigma emphasizes reducing variation within processes, ensuring that outcomes are consistent and predictable. This principle is based on data analysis and statistical tools that help identify the root causes of defects or inefficiencies. By reducing variation, companies can deliver products or services that meet quality standards with minimal defects.
Eliminate Waste
Lean principles focus on identifying and eliminating different types of waste within a process. Waste can be anything that doesn’t add value to the final product, such as excess inventory, waiting time, unnecessary movement, or overproduction. The goal is to streamline processes, minimize resource usage, and increase value-added activities.
There are seven types of waste Lean aims to eliminate. These were originally identified during the development of the TPS. Toyota describes them in a 2013 article about the TPS as:
Empower Teams and Foster Collaboration
Lean Six Sigma emphasizes teamwork and empowering employees to contribute to process improvements. Employees are trained in Lean Six Sigma tools, creating a culture of continuous improvement.
Continuous Improvement (Kaizen)
Both Lean and Six Sigma emphasize kaizen, a Japanese term meaning “continuous improvement.” The Kaizen Institute explains that this principle also originated from the TPS. Kaizen involves regularly assessing processes to make incremental improvements.
Data-Driven Decision Making
One of the core elements of Six Sigma is its reliance on data to make decisions. Lean Six Sigma practitioners use data to understand the current state of processes, measure performance, and determine whether improvements have been successful.
Practitioners can pursue certifications in Lean Six Sigma to demonstrate their ability to apply the principles to projects and processes. These certifications are described as “belts” and follow a color system similar to that found in many East Asian martial arts. An article from the consultancy Process Management International lists the belt certifications from newest practitioner to most experienced, starting with the White Belt and ending with the Master Black Belt:
Now that you’ve explored the fundamentals of Lean Six Sigma, you’re ready to discover how powerful risk analysis tools like @RISK can further enhance project outcomes.
Check out the next article, Using @RISK to Support Lean Six Sigma for Project Success, where we’ll showcase real-world examples from François Momal’s webinar series, demonstrating how organizations apply Monte Carlo simulation in @RISK to successfully implement Lean Six Sigma.
Ready to get started now? Request a demo of @RISK.
In our previous article, Introduction to Lean Six Sigma, we discussed the fundamentals of Lean Six Sigma, exploring how it combines the principles of lean manufacturing and Six Sigma to drive process improvement and operational excellence.
Now, we’re taking the next step by diving into how risk analysis software – specifically Lumivero’s @RISK – can enhance Lean Six Sigma initiatives. Drawing on insights shared in François Momal’s webinar series, “Monte Carlo Simulation: A Powerful Tool for Lean Six Sigma” and “Stochastic Optimization for Six Sigma,” this post will focus on how Monte Carlo simulation can empower organizations to predict, manage, and mitigate risks, ensuring the success of Lean Six Sigma projects.
Together, we’ll explore real-world examples of how simulation can optimize production rates, reduce waste, and foster data-driven decision-making for sustainable improvements.
Monte Carlo simulation, as a reminder, is a statistical modeling method that runs thousands of simulations of a process using random variables to determine the most probable outcomes.
The first model Momal presented involved predicting the lead time for a manufacturing process. He described the question this model could answer as, “when I give a fixed value for my process performance to an internal or external customer, what is the associated risk I take?”
Using data on lead time for each step of a six-step production process, @RISK ran thousands of simulations to determine a probable range for the lead time. It produced three outputs:
Example 1: Probable lead time as seen by the customer showing two output graphics: output histogram for risk evaluation and sensitivity analysis showing what the main levers are.
The left-hand chart shows the probability distribution curve for the lead time, which allows the production manager to give their customer an estimate for lead time based on probability. The other two charts help identify which steps to prioritize for improvement. The upper right-hand chart shows which of the six steps contribute most to variation in time, while the lower right-hand chart describes how changes to the different steps could improve that time.
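To make the idea concrete, here is a minimal sketch of this kind of lead-time model in Python rather than @RISK, with made-up triangular distributions standing in for the six steps' lead-time data (the webinar's actual inputs and @RISK's fitted distributions are not reproduced here):

```python
# Minimal sketch (not the original @RISK model): simulate the total lead time
# of a six-step process and rank the steps by their influence on it.
import numpy as np

rng = np.random.default_rng(42)
n_trials = 10_000

# Hypothetical (min, most likely, max) lead times in days for each step.
steps = {
    "step_1": (1.0, 2.0, 4.0),
    "step_2": (2.0, 3.0, 7.0),
    "step_3": (0.5, 1.0, 2.0),
    "step_4": (1.0, 1.5, 3.0),
    "step_5": (2.0, 4.0, 9.0),
    "step_6": (0.5, 1.0, 1.5),
}

# Sample each step's duration from a triangular distribution.
samples = {name: rng.triangular(lo, mode, hi, n_trials)
           for name, (lo, mode, hi) in steps.items()}
total = sum(samples.values())

# Risk evaluation: what lead time can be promised with 90% confidence?
print(f"Mean lead time:        {total.mean():.1f} days")
print(f"90th percentile (P90): {np.percentile(total, 90):.1f} days")

# Crude sensitivity ranking, analogous to the tornado chart of "main levers".
for name, x in samples.items():
    print(f"{name}: correlation with total = {np.corrcoef(x, total)[0, 1]:+.2f}")
```

The correlation ranking plays the same role as the tornado chart: it points to the steps whose variability most affects the lead time promised to the customer.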
@RISK also allows production managers to set probability-based improvement targets for each step of the process using the Goal Seek function.
Example 1: Goal Seek for step 1. Example of an industrial assembly process.
As mentioned above, the “Lean” aspect of Lean Six Sigma often refers to speed or efficiency of production. Lean production relies on being able to accurately measure and predict the hourly production rates of an assembly line.
Momal’s second example was a model which compared two methods of finding estimated production rates for a five-step manufacturing process: a box plot containing 30 time measurements for each step, and a Monte Carlo simulation based on the same data.
Example 2: Computation of the true hourly production rate (parts per hour).
Both the box plot and the Monte Carlo simulation accounted for the fact that step two of the production process was often slower than the others – a bottleneck. However, the box-plot approach used only the mean values of the time measurements, arriving at a production rate of approximately 147 units per hour. This calculation did not account for variability within the process.
Using @RISK to apply Monte Carlo simulation to the model accounts for this variance. The resulting histogram shows that the assembly line only achieves a production rate of 147 units per hour in 37.2% of simulations.
Example 2: True production rate risk assessment.
A plant manager trying to achieve 147 units per hour will be very frustrated, given that there is a 62.8% chance the assembly line will not be able to meet that target. A better estimate for the engineers to give the plant manager would be 121.5 units per hour – the production line drops below this rate in only 10% of simulations:
Example 2: True production rate risk assessment, accepting a 10% risk.
Furthermore, with the Monte Carlo simulation, engineers working to optimize the assembly line have a better idea of exactly how much of a bottleneck step two of the process causes, and what performance targets to aim for to reduce its impact on the rest of the process. “The whole point with a Monte Carlo simulation,” explained Momal, “is the robustness of the figure you are going to give.”
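As a rough illustration of why the mean overstates what the line can promise, the sketch below simulates a five-step line with an assumed, more variable bottleneck at step two (the cycle times are invented, not the webinar's measurements) and compares the mean-based rate with a probability-based commitment:

```python
# Sketch only (invented cycle times, not the webinar's measurements): compare
# a mean-based production-rate estimate with a simulated, risk-based one.
import numpy as np

rng = np.random.default_rng(0)
n_trials = 20_000

# Hypothetical cycle times in seconds for a five-step line; step 2 is the
# bottleneck, with a longer and more variable cycle time.
cycle_times = np.column_stack([
    rng.normal(20, 1.5, n_trials),   # step 1
    rng.normal(25, 4.0, n_trials),   # step 2 (bottleneck)
    rng.normal(19, 1.0, n_trials),   # step 3
    rng.normal(21, 2.0, n_trials),   # step 4
    rng.normal(18, 1.5, n_trials),   # step 5
])

# In each trial the line runs at the pace of its slowest step.
rate = 3600.0 / cycle_times.max(axis=1)               # units per hour

mean_rate = 3600.0 / cycle_times.mean(axis=0).max()   # deterministic estimate
print(f"Mean-based estimate:             {mean_rate:.1f} units/hour")
print(f"P(achieving that rate):          {(rate >= mean_rate).mean():.1%}")
print(f"Rate achievable 90% of the time: {np.percentile(rate, 10):.1f} units/hour")
```

The 10th-percentile figure is the kind of "robust" commitment Momal describes: a rate the line fails to reach in only a small share of simulated runs.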
From Lean modeling, Momal moved on to Six Sigma. Monte Carlo simulation can be applied to tolerancing problems – understanding how far a component or process can deviate from its standard measurements and still result in a finished product that meets quality standards while generating a minimum of scrap.
Momal used the example of a piston and cylinder assembly. The piston has five components and the cylinder has two. Based on past manufacturing data, which component is most likely to fall outside standard measurements to the point where the entire assembly has to be scrapped? A Monte Carlo simulation and sensitivity analysis completed with @RISK can help answer this question.
Example 3: Tolerancing of an assembled product, showing a cost-based stack tolerance analysis chart.
In this tolerance analysis, the assembly gap (cell C6) must have a positive value for the product to fall within the acceptable quality range. Using a fixed quality specification, it’s possible to run a Monte Carlo simulation that gives the probability of production meeting the specified assembly gap given certain variables.
Example 3: Tolerancing of an assembled product, showing sensitivity analysis.
Then, using the sensitivity analysis, engineers can gauge which component contributes the most variation to the assembly gap. The tornado graph on the right clearly shows the cylinder wall is the culprit and should be the focus for improving the quality of this product.
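A simplified stack-up of this kind can be sketched as follows; the component dimensions and tolerances are assumptions chosen only to show how the pass probability and a tornado-style sensitivity ranking fall out of the simulation:

```python
# Illustrative sketch (assumed dimensions, not the webinar's data): estimate
# the probability that an assembly gap stays positive and rank the components
# driving its variation.
import numpy as np

rng = np.random.default_rng(7)
n = 50_000

# Hypothetical component dimensions in mm: nominal value + manufacturing noise.
housing = rng.normal(100.00, 0.05, n)   # outer housing length
part_a  = rng.normal( 40.00, 0.03, n)
part_b  = rng.normal( 35.00, 0.02, n)
part_c  = rng.normal( 24.80, 0.08, n)   # the most variable component

# Simplified linear stack-up: the gap must stay positive for the assembly to pass.
gap = housing - (part_a + part_b + part_c)

print(f"P(gap > 0):          {(gap > 0).mean():.1%}")
print(f"Expected scrap rate: {(gap <= 0).mean():.2%}")

# Tornado-style sensitivity: correlation of each component with the gap.
for name, x in [("housing", housing), ("part_a", part_a),
                ("part_b", part_b), ("part_c", part_c)]:
    print(f"{name}: corr with gap = {np.corrcoef(x, gap)[0, 1]:+.2f}")
```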
Stochastic optimization refers to a range of statistical tools that can be used to model situations which involve probable input data rather than fixed input data. Momal gave the example of the traveling salesman problem: suppose you must plan a route for a salesman through five cities. You need the route to be the minimum possible distance that passes through each city only once.
If you know the fixed values of the distances between the various cities, you don’t need to use stochastic optimization. If you’re not certain of the distances between cities, however, and you just have probable ranges for those distances (e.g., due to road traffic, etc.), you’ll need to use a stochastic optimization method since the input values for the variables you need to make your decision aren’t fixed.
Stochastic optimization: A double nested loop with decision variables (also named optimization variables).
Within a stochastic optimization, a full Monte Carlo simulation is run for each candidate set of decision-variable values (the “inner loop”), and the optimizer then adjusts the decision variables and repeats the process in search of a better outcome (the “outer loop”).
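The sketch below shows the double-loop idea on the traveling-salesman example, using invented distance data and a brute-force outer loop rather than the genetic-algorithm search @RISK Optimizer uses:

```python
# Minimal sketch of the double loop (invented distances, brute-force search):
# inner loop = Monte Carlo over uncertain distances for one candidate route;
# outer loop = search over routes to minimize the *expected* tour length.
import itertools
import numpy as np

rng = np.random.default_rng(1)
cities = ["A", "B", "C", "D", "E"]

# Uncertain distances: symmetric matrix of mean values; variability (e.g. from
# traffic) is assumed to be 15% of the mean.
mean_d = rng.uniform(50, 200, (5, 5))
mean_d = (mean_d + mean_d.T) / 2
np.fill_diagonal(mean_d, 0.0)

def expected_route_length(route, n_sims=2_000):
    """Inner loop: Monte Carlo estimate of a route's expected total length."""
    total = np.zeros(n_sims)
    for a, b in zip(route, route[1:] + route[:1]):       # return to start
        mu = mean_d[a, b]
        total += rng.normal(mu, 0.15 * mu, n_sims)
    return total.mean()

# Outer loop: evaluate every candidate route (city A fixed as the start).
best = min(itertools.permutations(range(1, 5)),
           key=lambda p: expected_route_length((0,) + p))
print("Best route:", " -> ".join(cities[i] for i in (0,) + best + (0,)))
```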
For Lean Six Sigma organizations, stochastic optimization can support better project planning. Momal’s first model showed how to run a stochastic optimization to determine which project completion order would maximize the economic value add (EVA) of projects while minimizing time and labor costs so they remain within budget.
Example 4: Choice of Six Sigma projects to be launched first.
To use @RISK Optimizer, users must define their decision variables. In this model, Momal decided on simple binary decision variables. A “1” means the project is completed; a “0” means it isn’t. Users must also define any constraints. Solutions found by the simulation which don’t fit within both constraints are rejected.
Example 4: Choice of Six Sigma projects, showing optimization parameters and solutions that don’t meet the two constraints.
The optimization was run with the goal of maximizing the EVA. With @RISK, it’s possible to watch the optimization running trials in real time in the progress screen. Once users see the optimization reaching a long plateau, it’s generally a good time to stop the simulation.
Example 4: Choice of Six Sigma projects showing optimization run: total EVA stepwise maximization.
In this instance, the stochastic optimization ran 101 trials and found that only 61 were valid (that is, met both constraints). The best trial came out at a maximum EVA of approximately $9,100,000. The project selection spreadsheet showed the winning combination of projects:
Example 4: Choice of Six Sigma projects optimization results.
Of the eight candidate projects involved, @RISK found that projects 2, 4, and 7 would meet the budget and labor-time constraints while maximizing the EVA.
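For readers who want to see the mechanics, here is a toy version of the project-selection problem, with invented project figures and a simple exhaustive outer loop (plus an assumed 10% risk tolerance on each constraint) instead of @RISK Optimizer's solver:

```python
# Toy version of the project-selection problem (all project figures invented;
# exhaustive search instead of @RISK Optimizer's genetic algorithm).
import itertools
import numpy as np

rng = np.random.default_rng(3)
n_projects, n_sims = 8, 1_000

# Hypothetical per-project means: EVA ($k), cost ($k), labor time (days).
eva_mu  = np.array([1200, 2600, 900, 3100, 1500, 800, 2800, 1100])
cost_mu = np.array([ 400,  900, 350, 1200,  600, 300, 1000,  450])
time_mu = np.array([ 120,  260, 100,  340,  180,  90,  300,  130])

BUDGET, LABOR = 3500, 1000        # constraints ($k, days) - assumptions

best_eva, best_pick = -np.inf, None
for pick in itertools.product([0, 1], repeat=n_projects):   # outer loop
    pick = np.array(pick)                                   # binary decision vars
    # Inner loop: Monte Carlo over uncertain EVA, cost, and labor time.
    eva  = rng.normal(eva_mu,  0.2 * eva_mu,  (n_sims, n_projects)) @ pick
    cost = rng.normal(cost_mu, 0.1 * cost_mu, (n_sims, n_projects)) @ pick
    time = rng.normal(time_mu, 0.1 * time_mu, (n_sims, n_projects)) @ pick
    # Reject selections that breach either constraint too often
    # (a 10% risk tolerance - an assumption made for this sketch).
    if (cost > BUDGET).mean() > 0.10 or (time > LABOR).mean() > 0.10:
        continue
    if eva.mean() > best_eva:
        best_eva, best_pick = eva.mean(), pick

print("Selected projects:", np.flatnonzero(best_pick) + 1)
print(f"Expected total EVA: ${best_eva:,.0f}k")
```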
Next, Momal showed how stochastic optimization can be applied to design problems – specifically within the Design for Six Sigma (DFSS) methodology. DFSS is an approach to product or process design within Lean Six Sigma. According to a 2024 Villanova University article, the goal of DFSS is to “streamline processes and produce the best products or services with the least amount of defects.”
DFSS follows a set of best practices for designing components to specific standards. These best practices have their own terminology which Six Sigma practitioners must learn, but Momal’s model can be understood without them.
The goal of this demonstration was to design a pump that minimizes manufacturing defects and cost per unit.
Example 5: Pump DFSS design.
The model used to set up the stochastic optimization included a set of quality tolerances for the flow rate of the pump – what is known in DFSS as the “critical to quality” (CTQ) value, the variable that is most important to the customer. Decision variables included motor and backflow component costs from different suppliers as well as the piston radius and stroke rate. The goal was to minimize the unit cost of the pump while guaranteeing the required quality level, i.e., meeting the flow rate tolerances.
Example 5: Pump DFSS design showing decision variables and tolerances.
As with the previous model, Momal demonstrated how to define the variables and constraints for this model in @RISK.
Example 5: Pump DFSS design, answering the question of “how can we guarantee a certain level of quality?”.
Then, when Momal ran the simulation, he again watched the live progress screen within @RISK to see when a plateau was reached in the results. He stopped the simulation after 1,000 trials.
Example 5: Pump DFSS design, showing stochastic optimization monitoring.
The simulation showed that trial #991 had the best result, combining the lowest cost while meeting the CTQ tolerances. Finally, @RISK updated the initial stochastic optimization screen to show the best options for supplier components.
Experiments are necessary in manufacturing, but they are expensive. Six Sigma methodology includes best practices for design of experiments (DOE) that aim to minimize the cost of experiments while maximizing the amount of information that can be gleaned from them. Momal’s final model used XLSTAT to help design experiments to solve an issue with an injection molding process that was producing too many defects – the target length of the part was 63 mm.
The approach involved running a DOE calculation in XLSTAT followed by a stochastic optimization in @RISK. There were three known variables in the injection molding process: the temperature of the mold, the number of seconds the injection molding took (cycle time), and the holding pressure. He also identified two levels for each variable: an upper level and a lower level.
Example 6: Monte Carlo simulation – DOE coupling.
Six Sigma DOE best practice determines the number of prototype runs for an experiment by raising the number of levels to the power of the number of variables, then multiplying that value by five. In this instance, 2³ = 8, and 8 × 5 = 40, so 40 real prototypes should be generated. These were modeled with XLSTAT DOE. The “response 1” value shows the length of the part created.
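The run-count arithmetic is easy to reproduce; the sketch below builds the 2-level, 3-factor full-factorial design and replicates it five times (the factor levels are placeholders, not the webinar's settings):

```python
# Sketch: build the 2-level, 3-factor full-factorial design described above
# and replicate it five times, giving the 40 prototype runs (2**3 * 5 = 40).
# Factor levels are placeholders, not the webinar's actual settings.
from itertools import product

factors = {
    "mold_temp_C":       (40, 80),    # lower level, upper level
    "cycle_time_s":      (20, 35),
    "hold_pressure_bar": (60, 90),
}

design = list(product(*factors.values()))   # 2**3 = 8 unique combinations
runs = design * 5                           # 5 replicates -> 40 prototype runs

print(f"{len(design)} unique settings, {len(runs)} prototype runs")
for i, (temp, cycle, pressure) in enumerate(runs[:8], start=1):
    print(f"run {i}: temp={temp} C, cycle={cycle} s, pressure={pressure} bar")
```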
Example 6: Coupling between Monte Carlo and DOE.
XLSTAT then generated a list of solutions – combinations of the three variables that would result in the desired part length. The row circled in red had the lowest cycle time. XLSTAT also produced a transfer function (formula) for finding these best-fit solutions.
Example 6: Coupling between Monte Carlo and DOE.
These were all possible solutions, but were they robust solutions? That is, would a small variation in the input variables result in a tolerably small change in the length of the part created by the injection molding process, or would variations lead to unacceptable parts?
For this second part of the process, Momal went back to @RISK Optimizer. He defined his variables and his constraint (in this case, a part length of 63 mm), and used the transfer function generated by the XLSTAT DOE run.
Simulation using the results of a DOE.
Next, he specified that any trials which resulted in a variation of more than three standard deviations (three sigma) in the variables or the length of the part should be rejected.
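In spirit, that robustness screen looks like the following sketch, where the transfer function is a made-up linear stand-in for the one fitted by XLSTAT and the input noise levels are assumptions:

```python
# Sketch of the robustness screen: the transfer function below is a made-up
# linear stand-in for the one fitted by XLSTAT, and the input noise levels
# are assumptions.
import numpy as np

rng = np.random.default_rng(11)

def part_length(temp, cycle, pressure):
    """Hypothetical transfer function: part length (mm) from the settings."""
    return 55.0 + 0.05 * temp + 0.10 * cycle + 0.02 * pressure

def is_robust(temp, cycle, pressure, n=10_000, tol=0.5):
    """Perturb the settings and require the +/- 3-sigma band of the resulting
    part lengths to stay inside 63 +/- tol mm."""
    lengths = part_length(
        rng.normal(temp, 1.0, n),        # mold temperature noise (assumed)
        rng.normal(cycle, 0.5, n),       # cycle time noise (assumed)
        rng.normal(pressure, 2.0, n),    # holding pressure noise (assumed)
    )
    lo = lengths.mean() - 3 * lengths.std()
    hi = lengths.mean() + 3 * lengths.std()
    return (lo >= 63.0 - tol) and (hi <= 63.0 + tol)

# Candidate setting taken from the DOE solution list (placeholder values).
print(is_robust(temp=60, cycle=35, pressure=75))
```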
@RISK optimization model set up.
Then he ran the stochastic optimization simulations and watched the outputs in real time.
RISKOptimizer Watcher of all Trials (ongoing optimization).
He stopped the trials once a plateau emerged. @RISK Optimizer automatically placed the values from the best trial into his initial workbook.
Best solution given by RISKOptimizer.
Sensitivity analysis, this time using a Pareto chart instead of a tornado graph, showed that the primary factor driving variance in trial results was the holding pressure:
Pareto chart examples including the contribution of the variables.
This gave him experimental data that could be used to inform the manufacturing process without the cost of having to run real-world experiments.
Data-driven manufacturing processes that lead to better efficiency, less waste, and fewer defects – that’s the power of the Lean Six Sigma approach. With @RISK and XLSTAT, you gain a robust suite of tools for helping you make decisions that align with Lean Six Sigma principles.
From better estimates of production line rates to designing experiments to solve manufacturing defect problems, the Monte Carlo simulation and stochastic optimization functions available within @RISK and XLSTAT can support your efforts toward continuous improvement.
Ready to find out what else is possible with @RISK? Request a demo today.
Getting started with Lean Six Sigma might feel challenging, but with ready-made @RISK example models available for download, you can quickly explore the power of Six Sigma – all in Microsoft Excel.
These models can help you test concepts, run simulations, and analyze potential improvements to your methods using @RISK software – offering hands-on experience without starting from scratch.
1. Six Sigma Functions:
A list of @RISK's six sigma functions – what they mean and how they work.
2. Six Sigma DMAIC Failure Rate Risk Model:
Predicts failure rates using RiskTheo functions and defines key quality metrics such as the lower and upper specification limits (LSL, USL) and targets for each component.
3. Six Sigma DOE with Weld:
Demonstrates DOE principles in welding, using @RISK’s functions to optimize process quality.
4. Six Sigma DOE with Catapult:
Illustrates Six Sigma optimization through a catapult-building exercise using Monte Carlo simulation.
5. Six Sigma DMAIC Failure Rate Model:
Calculates defect rates by monitoring product components against predefined tolerance limits.
6. Six Sigma DMAIC Yield Analysis:
Pinpoints production stages with the highest defect rates and calculates process capability metrics for improvement.
Download these models today to quickly explore Six Sigma principles in action with @RISK!
Join us as we dive into the world of data in our free virtual conference! Explore new techniques for qualitative, mixed methods, and statistical data analysis, learn best practices for ensuring student success in field experience outcomes, and hear how experts in your industry are leveraging Lumivero software to drive innovation and strategy.
Plus, with countless Lumivero software workshops and networking sessions, you’ll walk away with clear, actionable insights that let you take your research and data analysis to the next level straight away.
With part shortages causing issues at more than 50% of all manufacturers, it’s no surprise that they are looking for better ways to predict where supply issues might crop up. Even the top manufacturing companies need better, data-driven techniques to help them stay ahead.
Are you using all the tools at your disposal to optimize production and improve your manufacturing supply chain?
The article below explains how applying Monte Carlo simulation in manufacturing contexts can help you see all possible scenarios and develop process improvements – saving time, money, and stress. Read on to learn the role of predictive manufacturing analysis and what it can do for you.
Monte Carlo simulation (MCS) is a method of predicting the most likely outcomes from running thousands of possible scenarios with random variables. Such simulations use a series of probability percentages to work out how likely different outcomes are to occur. An analyst can run thousands of these simulations and derive conclusions about the future of their business.
The Monte Carlo method is not limited to assisting in finance, construction, and energy. Other areas, such as the manufacturing sector, can also benefit from its use. It acts as a critical tool for risk management and decision-making, helping organizations mitigate risks that might otherwise have gone unforeseen.
Running these simulations results in more than a single outcome. The purpose of the Monte Carlo technique is to allow people to see probabilities of all possible outcomes coming to pass.
This is beneficial in a range of industries, especially in manufacturing where a single decision can be the difference between making a profit or taking a loss. Monte Carlo simulation is often used in manufacturing for supply chain and logistics optimization, forecasting, and pinpointing risks.
Read Monte Carlo Simulation 101 >>
With access to a Monte Carlo simulation's results, risk managers can:
Studies such as Farooq, Naseem, Ahmad, et al. (2024) have shown that Monte Carlo simulation allows for an "improved strategy for prioritizing risks." In their study, MCS discovered five times as many high-risk outcomes as conventional methods.
Such vital information can significantly help a company that wants to ensure its financial safety. For example, the results could inform a firm that while it is likely to meet all production targets this quarter, it should also consider diversifying its strategies to brace for less optimistic scenarios, such as a severe downturn. Conclusions such as these mean businesses can take confident steps toward protecting their future.
When you plan to run Monte Carlo simulations, you should perform the following steps:
It is also imperative to make sure that the setup process uses accurate information to ensure the best results.
Begin by determining the factors that could impact your goal. Examples include:
You also want to consider the economic conditions of the business and worldwide. Consider:
For example, consider a manufacturing plant producing a new type of computer chip in an unstable economic climate. You might input factors such as machine efficiency and expected processing times, raw material availability, and worker productivity, to name just a few – each subject to its own set of uncertainties.
Monte Carlo doesn't provide a single definitive answer; by running many simulations it lays out the full range of likely scenarios and their probabilities – allowing you to make an informed decision.
With the Monte Carlo method, input distributions are used instead of input variables to better model how inputs with significant uncertainty could behave and influence the simulation outcome. This method ensures the outcomes are more likely to predict what might occur in the real world.
You can also use historical data to ensure the most accurate simulation possible. This is not to say the future will proceed precisely as the past has, but it may help describe predictable patterns in your business.
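For example, rather than feeding a single average cycle time into a model, you might fit a distribution to your historical measurements and sample from it. The sketch below does this with synthetic data and SciPy's distribution fitting, which stands in here for @RISK's built-in distribution-fitting feature:

```python
# Minimal sketch: fit a distribution to historical data (synthetic here) and
# sample from it instead of using a single average value. SciPy's fitting is
# a stand-in for @RISK's built-in distribution fitting.
import numpy as np
from scipy import stats

rng = np.random.default_rng(5)

# Stand-in for historical machine cycle times (minutes) from your own records.
historical = rng.lognormal(mean=np.log(12), sigma=0.25, size=500)

point_estimate = historical.mean()                     # single-value input
shape, loc, scale = stats.lognorm.fit(historical, floc=0)

# Sample 10,000 plausible cycle times to use as a simulation input.
simulated = stats.lognorm.rvs(shape, loc=loc, scale=scale,
                              size=10_000, random_state=rng)

print(f"Point estimate (mean):     {point_estimate:.1f} min")
print(f"Simulated 95th percentile: {np.percentile(simulated, 95):.1f} min")
```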
You will also need to:
Taking these steps will give you more accurate results and can help you make better predictions.
With software such as @RISK, you can define these inputs, set your model's parameters, and watch as @RISK’s distribution graphs display the range of possibilities.
For example, Novelis, the world's largest aluminum-rolling and recycling company, relies on Monte Carlo simulation to help rank its R&D projects by risk.
Novelis recognized the need for an improved structure around the risk evaluation of projects. Without that structure, they could not communicate which areas of the company presented the highest risk – vital information for all key stakeholders. The lack of information presented a significant danger to the potential ROI of each project and, in many cases, could even have led to major losses.
By using @RISK to analyze how the links in their production chain influenced one another, they could collect data on the risks present across all work within their company. They then iterated on processes and allocated resources to improve manufacturing efficiency.
Monte Carlo simulations can help you better manage risks and uncertainties in your business. The conclusions you draw from them can lead to better data-driven decision-making and an increased ability to overcome even significant challenges.
Lumivero offers user-friendly and efficient methods for running a Monte Carlo simulation in manufacturing with our @RISK and DecisionTools Suite analysis software. These tools allow you to run simulations on your datasets all within Microsoft Excel.
As you approach new manufacturing hurdles, consider the ways in which this timeless methodology, when coupled with the world’s leading risk analysis software @RISK and DecisionTools Suite, could revolutionize your approach to decision-making.
Request a demo or download a free trial of @RISK to learn more about how you can revolutionize your manufacturing process and help you prepare for the future, today.
In today’s fast-paced world, making well-informed decisions is crucial for business success. Markets fluctuate, customer behaviors shift, and countless external factors can impact outcomes in unpredictable ways. To navigate this complexity, businesses are increasingly turning to the Monte Carlo method – a statistical technique originally developed in nuclear physics. By drawing random numbers from defined distributions – for example, a normal distribution with a given mean and standard deviation – to simulate a wide range of possible scenarios, Monte Carlo simulations provide a robust framework for assessing risk, optimizing processes, and ultimately enhancing decision-making in an unpredictable world.
In this article, we'll guide you through the basics of Monte Carlo simulations, help you understand the core concepts, walk through a Monte Carlo example model, and discuss how to make Monte Carlo simulation work for your business.
Monte Carlo simulations are a class of computational algorithms that rely on repeated random sampling to obtain numerical results. Essentially, they model the probability of different outcomes in a process that cannot easily be predicted due to the intervention of random variables. The term "Monte Carlo" is derived from the famous casino in Monaco, symbolizing the element of chance involved in these calculations.
The process works by creating a mathematical model of a system or process, then running multiple simulations (often thousands or millions) with random input variables to see the range of possible outcomes. The results provide a comprehensive picture of the risks and uncertainties involved which can help businesses make more informed decisions. This approach also lets you conduct sensitivity analysis of your inputs to help determine which ones have the most impact on your results.
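A toy example makes the workflow clear. The sketch below models monthly profit with assumed distributions for demand, price, and unit cost (all figures invented), runs 100,000 iterations, and reports the spread of outcomes along with a simple correlation-based sensitivity check:

```python
# Toy example (all figures assumed): model monthly profit with uncertain
# demand, price, and unit cost, run many iterations, and inspect the spread
# of outcomes plus which input moves the result most.
import numpy as np

rng = np.random.default_rng(2024)
n = 100_000

demand    = rng.normal(10_000, 1_500, n)        # units sold
price     = rng.triangular(9.0, 10.0, 12.0, n)  # selling price per unit
unit_cost = rng.uniform(6.0, 8.0, n)            # production cost per unit
fixed     = 20_000                              # fixed monthly costs

profit = demand * (price - unit_cost) - fixed

print(f"Mean profit:  ${profit.mean():,.0f}")
print(f"5th-95th pct: ${np.percentile(profit, 5):,.0f} to "
      f"${np.percentile(profit, 95):,.0f}")
print(f"P(loss):      {(profit < 0).mean():.1%}")

# Simple sensitivity check: which input correlates most with profit?
for name, x in [("demand", demand), ("price", price), ("unit_cost", unit_cost)]:
    print(f"{name}: corr = {np.corrcoef(x, profit)[0, 1]:+.2f}")
```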
Several industries have successfully integrated Monte Carlo simulations into their decision-making processes.
Adopting Monte Carlo simulations in your business requires a combination of the right tools, expertise, and a clear understanding of your business objectives. Here’s how you can start:
Monte Carlo simulations are a powerful tool that can transform the way businesses approach risk and decision-making. By providing a comprehensive view of possible outcomes, they enable companies to make more informed, data-driven decisions.
Whether you're looking to optimize costs, manage risk, or improve project management, incorporating Monte Carlo simulations into your business processes can lead to more strategic and successful outcomes. In an increasingly uncertain world, this approach offers a significant competitive advantage. Use @RISK and change your business results!
We are excited to announce the latest version of DecisionTools Suite – designed with an unparalleled performance boost (4x faster calculations!) and a new, intuitive interface to help you amplify your insights.
Time is of the essence, and creating and re-creating a calendar or waiting on long calculation times can distract you from more important priorities. That’s why our latest updates are focused on eliminating these challenges – allowing you to work more efficiently and effectively.
Continue reading to learn more about DecisionTools Suite or request a demo to see for yourself!
See DecisionTools Suite in Action
With new peak performance enhancements and an efficient one-click calendar re-import, you can quickly check off tedious tasks and focus on strategic planning and execution.
While these enhancements are specific to Evolver and ScheduleRiskAnalysis (SRA), they work in harmony with the entire DecisionTools Suite in Microsoft Excel to help you structure complex decisions, make data-driven predictions, allocate limited resources, and make critical decisions with confidence.
What's New in Evolver and ScheduleRiskAnalysis?
Your time is a valuable, limited resource, so we’ve made it our mission to help you do more, faster. Upgrade to the latest version of DecisionTools Suite to take full advantage of these enhanced features in Evolver and ScheduleRiskAnalysis and start adding hours back into your day.
New to powerful risk and decision analysis? Request a free demo to see the new Evolver and ScheduleRiskAnalysis in action!
When building a manufacturing facility, a reliable and resilient power source is key. As social and governmental pressure to decarbonize the manufacturing sector intensifies, more companies are moving away from fossil fuel-powered plants – for example, a 2024 article in Automotive Manufacturing Solutions notes that major automobile manufacturers in Europe, Asia, and North America “continue to innovate and adopt green energy practices,” such as plants powered in whole or in part by solar, wind, and hydroelectric energy.
In addition to addressing sustainability, renewable energy can also result in more cost-effective operation in a time of global energy supply turbulence. A March 2024 article in the Engineering Science & Technology Journal observes that “[t]he integration of renewable energy into the manufacturing sector is not just a step towards mitigating environmental impact but also a strategic move that opens up a wealth of opportunities for innovation, competitiveness, and growth.”
However, making the switch to sustainable energy sources comes with its own set of risk factors. In a recent Lumivero webinar, “Illuminating Probabilistic Risk in Renewable Energy,” energy industry consultant Manuel Carmona walked attendees through modeling methods that can help evaluate the different types of operational and financial risks for renewable energy projects in manufacturing. In this case study, we’ll discuss the highlights of Carmona’s presentation, define Monte Carlo simulation and describe how to use it, and present the risk modeling examples used to make better decisions in the field of renewable energy.
Manuel Carmona is a certified Project Management Institute Risk Management Professional (PMI-RMP) with more than 25 years of experience in managing projects within the energy and technology sectors. As a trainer with EdyTraining, he has helped manufacturers utilize @RISK and XLSTAT to run Monte Carlo simulations using Excel – simulations that can be used for various types of probabilistic analysis.
Probabilistic analyses can be used to answer a wide range of questions raised at the outset of a renewable energy project, including:
To generate these analyses, Carmona recommends building models using @RISK by Lumivero, a probabilistic risk analysis tool that lets you create Monte Carlo simulations while using Excel spreadsheets.
Monte Carlo simulation is a statistical analysis technique first developed by scientists working on the Manhattan Project during World War II. It’s used to create probabilistic forecasts that account for risk and random chance within complex systems. Finance, meteorology, insurance and defense are just a few of the industry sectors that make use of Monte Carlo simulations to inform decision making.
Powered by software such as @RISK, Monte Carlo simulation can quickly generate thousands of simulations using random numbers that account for a wide range of variables, generating many different outcomes along with the probability of their occurrence.
Creating probabilistic analysis models with a Monte Carlo add-in for Microsoft Excel is typically a simple process that generates a complete range of possible values as opposed to traditional deterministic modelling techniques.
Most analysts use single-point estimates (also known as mean values or most likely values) for their estimations, then perform a series of best- and worst-case scenario calculations using formulas to determine the impact of a specific variable on a project.
For example, an analyst might begin their calculations by setting the cost of building an energy plant as high as estimates indicate it will go, generate an output, and then work in increments to gradually define potential impacts of a project. Manually adjusting the parameters for each calculation allows for refinement of the outcomes, but it cannot produce a complete range of potential outcomes.
With Monte Carlo simulation, analysts can develop comprehensive risk analyses more quickly – analyses that project risk into the future to determine whether an investment is worth making. These analyses can also be adjusted to model many different types of risk or uncertainty including cost assessments across the life of a project.
With @RISK, project managers can build models and run thousands of simulations under different scenarios – allowing them to quickly model probabilities across a wide range of variables. Plus, the interface allows for rapid generation of graphics that help stakeholders visualize findings. Options include tornado graphs showing advanced sensitivity analysis, stress testing, scatter plots, and more.
Carmona notes that because @RISK integrates with Microsoft Excel, creating a probabilistic analysis is as simple as selecting any other Excel pre-programmed function – making the creation of models a straightforward process. By integrating these various uncertainties into a comprehensive @RISK model, project managers can perform Monte Carlo simulations, running thousands of iterations to assess a project's financial performance under different conditions and scenarios.
This approach provides valuable insights to project stakeholders into the range of possible outcomes, the probability of meeting certain financial targets, and the identification of critical risk factors that may significantly impact the project's success and objectives.
Carmona demonstrated how @RISK could be used to analyze uncertainties and costs for building a renewable power plant for a manufacturing facility. The model plant would utilize solar panels and wind turbines to generate energy and would need to reliably produce eight to 12 megawatt-hours (MWh) of energy per day.
For the purposes of this exercise, Carmona assumed that the plant was well-sited and that its solar panels and turbines were appropriately sized. The first question to answer was: based on probabilistic analysis, how much power would the plant usually generate in a given day?
To begin answering this question, it was necessary to develop models that incorporated different types of uncertainty. The analysis began by looking at the solar plant. Three variables that could impact energy generation include:
Using power output data from the solar panel manufacturer and weather data for the city in which the plant was to be built (Madrid, Spain), Carmona used @RISK to generate a distribution curve for power output based on solar irradiation. On a completely cloudless day, the plant could be expected to produce 12–13 MWh of power during daylight hours. Given typical weather conditions at the site, what would the power output of the plant most likely be?
Carmona used the @RISK add-in and Monte Carlo simulation with random variables to model a dynamic environment in which cloud cover changed throughout the day. Before running the simulation, he defined the cloud cover using a normal distribution. This required some adjustment to ensure that the model did not generate cloud cover values greater than 100%.
Cloud cover was not the only variable to account for, however. Temperature affects the power output of a solar cell as well – the higher the temperature, the lower the output. While a perfectly cloudless day should result in maximum power output, an exceptionally hot day can actually impair generation. The model therefore needed to account for temperature correction.
Since @RISK utilizes the Latin hypercube method, a statistical sampling technique that cuts down on computer processing time, Carmona was able to quickly run 100,000 Monte Carlo simulations. The results were plotted as a probability distribution curve for the daily production of electricity. The simulation projected that on 47.9% of days, energy production would be within the desired range of 11 to 13 MWh.
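A stripped-down version of that solar model can be sketched as follows; the derating coefficients, weather assumptions, and plain random sampling (rather than the Latin hypercube sampling @RISK uses) are all simplifications, so the resulting percentages will not match Carmona's:

```python
# Simplified sketch of the solar model: the derating coefficients, weather
# assumptions, and plain random sampling (instead of Latin hypercube sampling)
# are all assumptions, so the percentages will not match the webinar's.
import numpy as np

rng = np.random.default_rng(8)
n = 100_000

clear_sky_mwh = 12.5                              # cloudless-day output (MWh)
cloud = np.clip(rng.normal(30, 20, n), 0, 100)    # % cloud cover, kept in [0, 100]
temp = rng.normal(28, 7, n)                       # daily high temperature (deg C)

# Output falls with cloud cover and with heat above 25 deg C (assumed rates).
temp_factor = 1 - 0.004 * np.maximum(temp - 25, 0)
output = clear_sky_mwh * (1 - 0.007 * cloud) * temp_factor

print(f"Mean daily output: {output.mean():.1f} MWh")
print(f"P(11-13 MWh):      {((output >= 11) & (output <= 13)).mean():.1%}")
```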
The next variable to account for was equipment. Power generated by renewable means still needs to travel from where it is generated to the facility it powers, and sometimes external equipment, such as electrical transformers, may fail. There are also other types of failure events to consider, such as damage to solar panels from hailstorms, module soiling, and vegetation overgrowth. The next stage of the probabilistic analysis process was to model these types of external failure events.
Carmona allowed for four failure events – one occurring on average every 250 days of operation, and the others every 500, 750, and 1,000 days. In every instance, a failure event means that the plant produces nothing for the rest of the day.
Using @RISK, Carmona ran a second series of simulations that accounted for external failure events, then generated a new distribution graph to show how these new variables could further impact the probability that the plant would produce enough power on a given day. The red bars indicate the original simulation; the blue the second simulation. This second visualization clearly shows that failure events were likely to reduce the probability of the plant producing enough energy by 3%.
The model showed that on most days the solar plant would be able to produce around 9 MWh of power.
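The failure-event layer can be sketched in the same spirit, using the occurrence rates cited in the webinar but an assumed model for how much of the day's output is lost:

```python
# Sketch of the failure-event layer: the occurrence rates come from the
# webinar, but treating the lost output as a random fraction of the day is
# an assumption made for this illustration.
import numpy as np

rng = np.random.default_rng(9)
n = 100_000

rates = np.array([1 / 250, 1 / 500, 1 / 750, 1 / 1000])   # daily probabilities
failures = rng.random((n, 4)) < rates                     # one column per event type
failed_day = failures.any(axis=1)

# If a failure occurs, the plant produces nothing for the rest of that day.
lost_fraction = np.where(failed_day, rng.uniform(0, 1, n), 0.0)

print(f"P(failure on a given day): {failed_day.mean():.2%}")
print(f"Average daily output lost: {lost_fraction.mean():.2%}")
```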
So far, the probabilistic models indicated that the solar plant would not be able to produce enough electricity to meet demand on most days. However, adding wind generation would allow the factory to charge storage overnight when the plant was not producing.
Modeling the power outputs for the wind plants followed a similar process to modeling solar output. This process included gathering weather data about average wind speeds in the area where the plant was to be built and manufacturer data about how much power the turbines generate at a given wind speed. The chart below shows what percentage of output the wind turbines generate (Y-axis) given a specific wind speed (X-axis). Note that faster wind speeds will actually hinder power generation after a certain point, just as a day that is sunny but also very hot will reduce the effectiveness of solar panels.
After running simulations for the wind plant, Carmona was able to demonstrate that the combination of both generation methods – wind and solar – had a high probability of meeting electricity demand on most days.
That accounts for the actual power generation issues. What about the costs of operating and maintaining the plant?
Carmona decided to conduct an NPV (net present value) analysis comparing the lifetime cost of operating the renewable energy plant with and without performance-monitoring software. With the software, the plant would produce approximately 6% more energy. Would licensing and operating the monitoring software result in actual savings over time?
The table below was used to generate a 13-year forecast that also accounted for estimated plant inspection and maintenance costs, which would take place every three to four years.
Then, Monte Carlo analysis was performed to generate an average NPV. This showed that the average NPV of the plant without the monitoring software would be €134,000, while the average NPV with the monitoring software would be approximately €169,000 over the same period.
The result: running the plant with monitoring software would result in an average savings of €35,300.
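As a rough sketch of how such a comparison is assembled (the annual energy value, software cost, maintenance schedule, and 8% discount rate here are assumptions, not the webinar's inputs):

```python
# Back-of-the-envelope sketch of the NPV comparison: the annual energy value,
# software cost, maintenance schedule, and 8% discount rate are assumptions,
# not the webinar's inputs.
import numpy as np

rng = np.random.default_rng(12)
n, years, rate = 10_000, 13, 0.08
discount = 1 / (1 + rate) ** np.arange(years)      # year 0 undiscounted

energy_value = rng.normal(100_000, 10_000, (n, years))      # EUR/year, uncertain
maintenance = np.array([15_000 if yr % 4 == 3 else 0 for yr in range(years)])
license_cost = 3_000                                        # EUR/year (assumed)

npv_without = ((energy_value - maintenance) * discount).sum(axis=1)
npv_with = ((energy_value * 1.06 - maintenance - license_cost) * discount).sum(axis=1)

print(f"Average NPV without software: EUR {npv_without.mean():,.0f}")
print(f"Average NPV with software:    EUR {npv_with.mean():,.0f}")
print(f"Average saving:               EUR {(npv_with - npv_without).mean():,.0f}")
```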
What about the risks and costs involved with building the plant? Fortunately, DecisionTools Suite’s ScheduleRiskAnalysis allows project managers to assess time and cost uncertainty. The program can import details of projects that have been scheduled in either of two popular project management tools: Microsoft Project or Primavera P6 from Oracle. Project managers can use @RISK to import their project schedules into Excel and carry out Monte Carlo simulation to determine the impact of construction delays or cost overruns.
For renewable energy projects, @RISK empowers project managers and decision makers to make informed choices by generating Monte Carlo distributions in Excel. From determining power output to evaluating the value of investing in add-ons like monitoring software, @RISK can help you develop robust probabilistic analyses for a variety of risks and provide clear visualizations of results.
Find out how you can generate better risk analyses for your renewable energy projects – or any other projects – within Microsoft Excel. Request a free trial of @RISK today. You can also watch the full webinar on-demand!
SOURCES
Nersesian, Roy. Energy Risk Modeling. Palisade.
Carmona, Manuel. EdyTraining Ltd.
This webinar will explore the application of @RISK software in the oil and gas industry, specifically focusing on modeling and forecasting crude oil prices for Exploration & Production (E&P) project valuation.
In part two of the Six Sigma webinar series, we will demonstrate three practical models that can give a competitive advantage to any production or design process within a manufacturing environment. These models can equally be applied to processes within the service industry.