AgustaWestland designs, develops and manufactures helicopters. The process for bringing a new product from initial idea to market is long and complex, and therefore involves many uncertainties. The company uses @RISK to develop financial business cases and feasibility studies to produce in-depth analysis so that the senior management team can make informed decisions about which products to develop.
AgustaWestland is an Anglo-Italian helicopter company owned by Italy’s Finmeccanica. It provides rotorcraft systems design, development, production and integration capabilities, along with in-depth training and customer support, to military and commercial operators around the world, offering the widest range of advanced rotorcraft available.
The Risk Assessment, Feasibility Analysis and Business Cases department is responsible for supporting the decision-making process for key company initiatives and opportunities from the initial stages of verifying their feasibility. This requires the development of structured financial business cases to justify investment against economic returns, and to monitor results.
The department develops AgustaWestland’s risk assessment methodology, procedures and tools for the new products in line with international best practices. It then ensures that these are applied and used consistently to determine all possible outcomes when the company is evaluating opportunities.
New helicopters require large investments in order to design, develop, test, certify and bring the product to market – a process that can last three to five years. The Risk Assessment, Feasibility Analysis and Business Cases team uses @RISK (part of the DecisionTools Suite risk and decision analysis toolkit) to undertake risk analysis to determine the financial feasibility of developing any new product, preparing a financial business case for approval by the company and its shareholders.
Previously the department worked only with what it calls a ‘deterministic’ Excel model. The model’s inputs include: non-recurring costs such as engineering studies for the design and development of the new product; prototype manufacture; flight tests; and certification. Other inputs considered include: the number of helicopters planned for manufacture per annum over a 20-year period; recurring helicopter unit costs per system / subsystem; unit prices for different helicopter configurations; the elasticity curve (to show how a change in price affects demand); and the spare parts business model (the purchaser of a new helicopter will also need to buy replacement parts during its lifecycle, estimated at around 20 years, and this is built in to profitability and cash flow). Financial parameters such as inflation, the weighted average cost of capital, bank interest rates, and exchange rates in different currencies are also accounted for.
The deterministic model provides financial outputs such as revenue, Earnings Before Interest and Taxes (EBIT), net profit, Net Present Value (NPV), Internal Rate of Return (IRR), financial break-even, etc., and shows how these vary depending on specific input values.
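As a rough illustration of the kind of deterministic calculation described (fixed inputs producing a single NPV figure), here is a minimal sketch in Python. All figures (costs, production rate, margin, WACC) are hypothetical placeholders, not AgustaWestland data.

```python
# Minimal sketch of a deterministic business-case NPV calculation.
# All figures below are hypothetical, not AgustaWestland data.

def npv(rate, cash_flows):
    """Net Present Value of a series of annual cash flows (year 0 first)."""
    return sum(cf / (1 + rate) ** t for t, cf in enumerate(cash_flows))

# Hypothetical inputs: up-front non-recurring development cost, then 20
# years of net cash flow from helicopter sales and spare parts.
non_recurring_cost = -500.0      # design, prototypes, certification ($M)
units_per_year = 30              # planned production rate
margin_per_unit = 2.0            # unit price minus recurring unit cost ($M)
annual_cash_flow = units_per_year * margin_per_unit

cash_flows = [non_recurring_cost] + [annual_cash_flow] * 20
wacc = 0.10                      # weighted average cost of capital

print(round(npv(wacc, cash_flows), 1))
```

With these invented numbers the project is marginally NPV-positive at a 10% discount rate; in the deterministic approach, changing any single input simply produces a different single answer.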
The deterministic model can help to predict a single set of results for the main model outputs. However the economic situation cannot be predicted with any great accuracy, especially when the business cases are based on a period of 20 years. In these situations, after evaluating the achievable results with the deterministic model, it is crucial to also take uncertainty into account. This makes it possible to evaluate in advance how changes to the inputs will impact key financial outputs and, with this insight, implement mitigation actions.
@RISK allows the team at AgustaWestland to apply different probability distributions, including triangular, PERT and normal, to the inputs of the model. The Monte Carlo analysis enabled by @RISK provides a better view of the model itself: it lets the team gauge the accuracy of their forecasts and identify ways to improve the business, both in terms of true feasibility and of financial results.
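The Monte Carlo approach described can be sketched in a few lines: replace fixed inputs with probability distributions and simulate the output many times. The distributions and figures below are illustrative assumptions, not the company's actual model.

```python
# Sketch of Monte Carlo simulation over uncertain business-case inputs.
# Distributions and figures are illustrative, not AgustaWestland's model.
import random

def simulate_annual_profit():
    # Triangular unit margin in $M; note Python's argument order is
    # triangular(low, high, mode). Normally distributed demand in units/year.
    margin = random.triangular(1.0, 3.0, 2.0)
    units = max(0, random.gauss(30, 5))
    return margin * units

random.seed(42)
results = [simulate_annual_profit() for _ in range(100_000)]
mean = sum(results) / len(results)
p5 = sorted(results)[int(0.05 * len(results))]
print(f"mean annual profit ~ {mean:.1f} $M, 5th percentile ~ {p5:.1f} $M")
```

Instead of one number, the output is a distribution, so the team can read off both the expected result and the downside cases that a deterministic model hides.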
Using the tornado graphs generated by @RISK, AgustaWestland can see which inputs have the greatest effect on the financial outputs, and therefore require more attention. For example, specific discussions with the engineering department to determine mitigating actions could potentially keep recurring costs under control if the tornado graphs show that these have the greatest impact on the financial outputs presented in the business case.
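A tornado graph ranks inputs by their influence on an output. The idea behind it can be sketched by correlating each simulated input with the simulated output; all names and figures here are invented for illustration.

```python
# Sketch of the idea behind a tornado graph: rank each uncertain input
# by its correlation with the simulated output. Figures are invented.
import random

random.seed(1)
n = 20_000
recurring_cost = [random.triangular(8, 12, 10) for _ in range(n)]   # $M/unit
units_sold = [random.triangular(25, 35, 30) for _ in range(n)]
price = [random.triangular(11, 13, 12) for _ in range(n)]           # $M/unit
profit = [(price[i] - recurring_cost[i]) * units_sold[i] for i in range(n)]

def corr(xs, ys):
    mx, my = sum(xs) / len(xs), sum(ys) / len(ys)
    cov = sum((x - mx) * (y - my) for x, y in zip(xs, ys))
    vx = sum((x - mx) ** 2 for x in xs) ** 0.5
    vy = sum((y - my) ** 2 for y in ys) ** 0.5
    return cov / (vx * vy)

# Sort inputs by |correlation| with profit: the tornado's bar order.
ranking = sorted(
    [("recurring cost", corr(recurring_cost, profit)),
     ("units sold", corr(units_sold, profit)),
     ("price", corr(price, profit))],
    key=lambda kv: -abs(kv[1]))
for name, c in ranking:
    print(f"{name}: {c:+.2f}")
```

With these assumed ranges, recurring cost dominates the profit uncertainty (a strong negative correlation), which is exactly the kind of signal that would prompt the engineering discussions mentioned above.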
“Working with such long timeframes is a key challenge because it is not possible to know for certain many of the parameters (inputs) that we use to determine the financial business case of a new product,” explains Vittorio Maestro, Head of Risk Assessment, Feasibility Analysis and Business Cases at AgustaWestland.
“Our use of the risk analysis element of Palisade’s DecisionTools Suite has enhanced our ability to assess, control and drive company decisions. We can now focus on the key activities that enable us to pursue the best product within the most appropriate financial timeframe,” adds his colleague Francesca Schiezzari, who uses @RISK to build similar financial business cases for a variety of company projects, including a Helicopter Training Centre and a Logistics Support Centre.
Infectious disease is an important cause of lost production and profits to beef cow-calf producers each year. Beef producers commonly import new animals into their herds, but often do not properly apply biosecurity tools to economically decrease risk of disease introduction. Dr. Michael Sanderson, a professor of Beef Production and Epidemiology at Kansas State University’s (KSU) College of Veterinary Medicine, wanted to address this issue by developing a risk management tool for veterinarians and beef cow-calf producers to assist in identifying biologically and economically valuable biosecurity practices, using @RISK.
The college was established in 1905, and has granted more than 5,000 Doctor of Veterinary Medicine degrees. Departments within the College of Veterinary Medicine include anatomy and physiology, clinical sciences, diagnostic medicine, and pathobiology. The college's nationally recognized instructional and research programs provide the highest standards of professional education. A rich, varied, and extensive livestock industry in the region, a city with many pets and a zoo, and referrals from surrounding states provide a wealth of clinical material for professional education in veterinary medicine.
Reproductive disease is an important cause of lost production and economic return to beef cow-calf producers, causing estimated losses of $400 to $500 million dollars per year. Because of the complex nature of the production system, the biologically and economically optimal interventions to control disease risk are not always clear. Dr. Sanderson and his team (including Drs. Rebecca Smith and Rodney Jones) utilized @RISK to model the probability and economic costs of disease introduction and the cost and effectiveness of management strategies to decrease that risk.
“For this project, @RISK was essential to model variability and uncertainty in the risk of disease introduction and its impact following introduction, as well as variability and uncertainty in the effectiveness of mitigation strategies,” said Dr. Sanderson. “Further, @RISK was crucial for sensitivity analysis of the most influential inputs to refine the model and to identify the most important management practices to control risk. It was also valuable to aggregate results into probability distributions for risk and economic cost over one-year and ten-year planning periods.”
The project modelled the risk of introduction of the infectious disease Bovine Viral Diarrhea (BVD) into the herd, the impact of disease on the herd (morbidity, mortality, abortion, culling, lost weight) and economic control costs. These risks were aggregated over ten years to identify the optimal management strategy to minimize cost from BVD accounting for both production costs and control costs.
Probability distributions included:
Target probabilities were used to compute the probability of exceeding a given cost over one and ten years, to report this as a single number for each management option, and to generate descending cumulative probability distributions for exceeding any particular cost value.
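The descending cumulative ("exceedance") probabilities described can be read straight off simulation output: the probability of exceeding a target cost is just the fraction of iterations above it. The cost distribution below is a hypothetical stand-in, not the KSU model's data.

```python
# Sketch of the "target probability" idea: from simulated ten-year costs,
# the probability of exceeding any given cost is the fraction of
# iterations above it. Costs are illustrative, not from the KSU model.
import random

random.seed(7)
# Hypothetical simulated ten-year BVD cost per cow ($) for one
# management option (truncated at zero).
costs = [max(0, random.gauss(25, 10)) for _ in range(50_000)]

def prob_exceeding(samples, target):
    """Descending cumulative probability: P(cost > target)."""
    return sum(s > target for s in samples) / len(samples)

print(f"P(cost > $40/cow) = {prob_exceeding(costs, 40):.2%}")
```

Evaluating this at every cost level traces out the descending cumulative curve, and evaluating it at one threshold gives the single-number summary per management option that the text describes.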
As a result of the risk identification insight gained from the research, Dr. Sanderson and his team were able to improve disease management and controls by identifying:
“Our utilization of @RISK gave us the ability to account for complex aggregation of inputs and their variability and uncertainty to produce full-outcome probability distributions for more informed decision making,” said Dr. Sanderson. “Further, the ability to use research data from multiple parts of the beef production system and combine those results into a model that accounts for the complexity of the production systems allows recognition of emergent phenomena and decision making based on the full system, rather than only one part. The flexibility to customize outputs provided the most valuable information for decision making.”
The University of Victoria (UVic), a national and international leader in many areas of critical research, participated in a study funded by Health Canada that looked at the human exposure to carcinogens in various demographics. The UVic team used @RISK, Palisade’s risk analysis software, to model the differences in Lifetime Excess Cancer Risk (LECR) for Canadians based on contaminants found in food and beverages. The results revealed notable differences in cancer risks for several different demographics, and are detailed in the thesis, Geographic Exposure and Risk Assessment for Food Contaminants in Canada, by Roslyn Cheasley, a Master’s student with the Department of Geography at UVic.
The University of Victoria is a public research university in British Columbia, Canada. Ranked one of the top 250 universities in the world, UVic is a national and international leader in many areas of critical research, offering students education that is complemented by applied, clinical and work-integrated learning opportunities. The University participated in a study funded by Health Canada that looked at the human exposure to carcinogens in various demographics.
While news headlines regularly report on acute health issues relating to food and beverages, such as E. coli outbreaks and salmonella poisoning, very little is known about the adverse health issues caused by the longer-term intake of contaminants in those foods and beverages – including carcinogens. The CAREX Canada Project, funded by the Canadian Partnership Against Cancer, was launched to better understand the environmental and occupational exposures to substances associated with cancer, and subsequently provide support for exposure reduction strategies and cancer prevention programs. "The goal of the Project was to analyze all publicly available data and build a website that provided local and regional communities with tools to help determine if their geographic areas were at risk," explains Roslyn Cheasley, a Master’s student with the Department of Geography at the University of Victoria. "While the site was launched in 2012, they were concerned that by 2014, the data was already out of date. The University of Victoria made up part of the team that undertook a new study to update the information, and ensure that health officials and other decision makers had all the information they might need to indicate if there could be future health problems."
The UVic team focused on the environmental aspects of the study, looking at potential exposure to carcinogens via air, dust, water, food and beverages. They reviewed 92 different substances that were considered carcinogenic, probably carcinogenic, or potentially carcinogenic. These were then narrowed down to five substances specifically for the food and beverage study: arsenic, benzene, lead, PCB (polychlorinated biphenyls) and PERC (tetrachloroethylene). "Up to this point in time, all analysis had been done from a deterministic point of view, which wasn’t particularly helpful as it didn’t enable us to understand the full range of potential contamination and which populations were more or less at risk," said Cheasley. "We decided to take things up a notch when we updated the data, and upgrade to a probabilistic analysis model based on Monte Carlo simulation. We wanted to estimate the range and frequency of possible daily contaminant intakes for Canadians, as well as associate these intake levels with lifetime excess cancer risk. This is where @RISK came into the equation."
Palisade’s @RISK enabled the team to easily and effectively determine the concentration of carcinogenic elements in the identified food and beverage products, as well as learn if certain demographics were more at risk from dietary patterns than others.
The first challenge to building the new model was pulling together all existing information, as elements of the data were in different formats (e.g. Excel, Access and Stata), as well as in different physical (offline) locations. Then the team had to manage the vast quantity of that information: the resulting 1.5 million rows of data was too much to easily manipulate, sort and manage without corrupting the results.
The next challenge related to the data for the food and beverage types. The team had analyzed the dietary patterns of approximately 35 thousand Canadians, using three different categories: geographic location, gender and income levels. They’d also identified 60 whole foods for the model, from eight food groups: meat, fish, dairy, fruit, vegetables, rice/cereals, grain/nuts and beverages. However, the data for these specified foods came from three different sources, with each using a different form of measurement. According to Cheasley, “The problem we had was how to bring all of these components together in a way that would provide a comprehensive but usable outcome. We needed to be able to filter the data into different dietary patterns as well as different demographics, then marry it each time with the five different carcinogenic substances."
Palisade’s @RISK software solved these problems, enabling the team to use PERT distributions to easily characterize the minimum, most likely and maximum concentration of the five carcinogenic elements in the identified food and beverage products. They were also able to see the output of the different dietary patterns and determine whether certain demographics were more at risk than others. “I really appreciated how easy @RISK was to use – I didn’t need to be a statistician to understand it,” said Cheasley. “Plus I loved the instantaneous flexibility. If I needed to run a new simulation, the results were immediately visible – and easily understandable – in a graph or chart.” For this study, each of the 125 different simulations was run for 50,000 iterations, to ensure the most accurate results (and the smoothest possible graphs).
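A PERT distribution is a smoothed triangular defined by minimum, most likely and maximum values, and can be sampled via its underlying beta form. The sketch below shows one common parameterization; the concentration figures are hypothetical, not from the CAREX data.

```python
# Sketch of sampling a PERT(min, most likely, max) distribution via its
# beta-distribution form. Concentration figures are hypothetical.
import random

def pert_sample(minimum, most_likely, maximum, lamb=4.0):
    """Draw one value from a PERT distribution (standard lambda = 4)."""
    alpha = 1 + lamb * (most_likely - minimum) / (maximum - minimum)
    beta = 1 + lamb * (maximum - most_likely) / (maximum - minimum)
    return minimum + random.betavariate(alpha, beta) * (maximum - minimum)

random.seed(0)
# Hypothetical arsenic concentration in one food item (ug/kg).
draws = [pert_sample(2.0, 5.0, 20.0) for _ in range(100_000)]
mean = sum(draws) / len(draws)
print(f"mean concentration ~ {mean:.2f} ug/kg")
```

The PERT mean is (min + 4 × most likely + max) / 6, so the distribution weights the most likely value heavily while still allowing the long upper tail that contamination data often shows.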
The outputs of the @RISK model revealed to the UVic team that of the five tested contaminants, arsenic showed the greatest difference between urban and rural estimated Lifetime Excess Cancer Risk (LECR). In addition, LECR was estimated to be higher for men vs. women in Canada for all five contaminants, with an emphasis on males in British Columbia from the dietary intake of arsenic. When based on income level, the model predicted LECR being higher for low and middle incomes from the dietary intake of arsenic, benzene, lead and PERC. However, high-income populations were more likely to have higher LECR from the dietary intake of PCBs.
"I hope that local health officials will be able to use the results of this model to determine if they should do a more detailed study in their own particular regions. For example, what are males eating in British Columbia that impacts their dietary intake of arsenic, and is there a real risk of arsenic in specific foods," added Cheasley.
At first glance, the Fort McMurray Airport in Fort McMurray, Alberta, Canada would appear ordinary; it is a small airport authority, comprising a single runway, small terminal and serving a rural region removed from major metropolitan centers. However, this airport is the gateway to Canada’s oilsands and faces a number of unique challenges related to servicing one of the largest industrial construction projects in the world today. The unique challenges of the Fort McMurray airport include: unprecedented passenger growth, staffing constraints, infrastructure constraints, pressures from various stakeholder groups, the introduction of daily international flights, shifts in politics and even risks posed by the potential development of oil reserves beneath the airport site. Thus, the Fort McMurray Airport Authority (FMAA) turned to Revay and Associates Ltd. to assess the potential risks they face as an organization as they try to keep pace with the growth of the region, and Revay turned to @RISK to help with this analysis.
Dr. Mark Krahn, consultant at Revay and Associates, knew that the Fort McMurray Airport (FMA) was an unusual case when the Revay risk team tackled it. “It’s a small town airport in a city that has doubled in population in the past decade.” This jump in population is thanks to the Athabasca Oilsands, the second-largest oil deposit in the world after Saudi Arabia’s. These oilsands represent recoverable reserves of 170 billion barrels and estimated total reserves of 1.8 trillion barrels of this essential energy source. Most major oil companies, along with the accompanying industries and contractors, have rushed to take advantage of this opportunity. As a result, Fort McMurray has become a boomtown with skyrocketing house prices, low unemployment rates, accommodation shortages and high salaries.
The oil boom has accelerated air traffic into the area through increased “fly-in-fly-out” traffic of camp-based workers as well as the increasing local traffic. As a result, the FMA is the fastest growing airport in Canada. In 2012, the FMA had a record passenger throughput of approximately 1 million passengers. With an annual capacity of only 250,000 passengers in the existing terminal, the FMA desperately needed to expand. Plans for a new terminal, with a capacity of 1.5M passengers began in 2010. The new $250M terminal is currently wrapping up construction and is set to open in Spring 2014.
Due to the unique context of the FMA and its future direction, a number of risk factors needed to be considered around the expansion project, as well as around the success of the organization as a whole.
Revay and Associates Ltd. is a consulting firm that specializes in risk management, contract strategy, conflict management and overall project management. Initially Revay was engaged to lead the project risk assessment for the FMA expansion project.
The project risk assessment was focused specifically on the capital cost and schedule uncertainty of the new terminal construction project. Subsequently, Revay was asked to lead the enterprise risk management (ERM) assessment of the FMA. According to Krahn, formal ERM is a relatively new management discipline that has evolved over the past few years and includes methods and processes used by organizations to manage risks and seize opportunities related to the achievement of their objectives and corporate strategy. “ERM is much broader than project risk,” says Krahn. “Clients must first identify what the strategy of the organization is, what their mission is, and what their key success drivers (KSDs) and objectives are. In order to assess the risk, it is imperative that the organization be clear on what these KSDs and objectives are, and then we can determine the risks impacting their success.”
The identified enterprise risks and opportunities are often categorized according to a number of key areas, including:
* Operational risk
* Reputational risk
* Strategic risk
* Personnel Safety and Health risk
* Financial risk
* Environmental / Containment risk
* Productivity / Morale risk
“One of the biggest challenges of ERM is the many different categories or ‘buckets’ of risk,” says Krahn. “Senior management needs to understand what the top overall risks are in order to implement effective mitigation actions and to understand the overall risk exposure. This poses an apples-to-oranges conundrum, as there may be several high-level risks in various categories, making it difficult to draw comparisons between risks. Being able to compare risks across the different categories is critical to understanding what the organization’s top risks truly are and to focusing the organization on mitigating them.”
To address this dilemma, Revay’s approach to the ERM assessment at FMAA had two novel aspects:
1. Martin Gough, Revay’s Risk Practice Lead, developed a methodology to allow for direct comparison between risks of different categories. Although risks could still be classified in various categories, a common impact currency, termed “Utils” or Utility, was used in the risk evaluation to allow for direct comparison between risks.
2. Rather than the more limited descriptive or qualitative nature of ERM, Revay applied quantitative techniques using @RISK to decipher more informative probabilistic risk details.
The FMAA has a strong leadership team and had developed a comprehensive strategic plan prior to completing the ERM assessment, including: vision, mission, values, Key Success Drivers (KSDs) and 5-year rolling goals. A collateral document, the annual Corporate Business Plan, outlines one-year corporate objectives and performance indicators. FMAA has identified four KSDs, each assigned a weight (percentage) factor:
* Optimized Customer Experience (40%)
* To Lead a High Performing Airport Team (25%)
* To Achieve Environmentally Responsible, Sustainable and Profitable Growth (20%)
* To Foster Effective Stakeholder Relationships (15%)
Each of the four KSD areas has a series of specific and related objectives, each objective with its own sub-weighting.
In order to assess the enterprise risk and opportunity around each of the KSDs and their specific objectives, Revay facilitated off-site workshops with attendance by all of the key stakeholder groups, including the FMAA board of directors, FMAA administration and operations personnel, local government, provincial government, airlines, funders, insurer, various FMAA consultants, and representation from the expansion project management team. “Having good representation from all key stakeholder groups is critical to the success of the ERM assessment,” Krahn explains.
As part of the workshop, Revay presented and developed the risk scoring matrix with the attendees. By doing so, the workshop attendees had direct input into the matrix and learned how it was to be applied in the risk evaluation to ensure consistency of process. As this was a non-standard application methodology, this approach proved to be invaluable.
There are two variables to be evaluated for each risk and opportunity: probability and impact. However, in this application of ERM the impact is measured on a single scale of Utils, rather than with the various risk-category impact descriptors common in traditional ERM. Each KSD area was provided with an initial credit of 10,000 Utils. In the workshop the teams then protected this balance through risk assessment, reduction and mitigation, and improved it through opportunity identification and capitalization planning. The Util impact of a risk is evaluated as a direct reduction of the percentage weighting of the individual Corporate Objectives and KSDs: risks with a higher detrimental impact on the objectives are scored with a higher impact than those with a lower one.
@RISK was used to model the uncertainty in both the probability scale and impact scale as determined by the scoring of each risk / opportunity. Instead of a single probability x impact result for each identified risk, @RISK allowed for the probabilistic range of outcomes to be determined for each identified risk (S-curve). Revay applied the “trigen” distribution (modified triangular) to model the range of both probability and impact for each risk. This quantitative information is much more informative for comparing risks, comparing KSDs, comparing categories of risk and assessing the overall ERM risk register.
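In a trigen (modified triangular) distribution, the low and high inputs are treated as percentiles (commonly the 10th and 90th) rather than absolute limits, so the true minimum and maximum extend beyond them. The sketch below solves for those implied limits with a simple fixed-point iteration; this is an illustrative implementation, not @RISK's own algorithm, and the Util figures are hypothetical.

```python
# Sketch of the "trigen" idea: a triangular distribution whose low and
# high inputs are the 10th and 90th percentiles, not absolute limits.
# Illustrative fixed-point solve; figures are hypothetical.
import random

def trigen_min_max(low, mode, high, p_low=0.10, p_high=0.90, iters=200):
    """Find the triangular min (a) and max (b) such that
    CDF(low) = p_low and CDF(high) = p_high, for the given mode."""
    a, b = low, high
    for _ in range(iters):
        # Triangular CDF below the mode: (x - a)^2 / ((b - a)(mode - a))
        a = low - (p_low * (b - a) * (mode - a)) ** 0.5
        # Triangular CDF above the mode: 1 - (b - x)^2 / ((b - a)(b - mode))
        b = high + ((1 - p_high) * (b - a) * (b - mode)) ** 0.5
    return a, b

# Hypothetical risk-impact scoring: 8 / 10 / 14 Utils at P10 / mode / P90.
a, b = trigen_min_max(8.0, 10.0, 14.0)
sample = random.triangular(a, b, 10.0)   # one draw from the implied triangular
print(f"implied min ~ {a:.2f}, implied max ~ {b:.2f}")
```

Treating the workshop scores as percentiles rather than hard limits acknowledges that participants tend to understate the extremes, which widens the distribution accordingly.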
There are two key results that come from this unique approach to ERM. First, using quantitative Monte Carlo assessment allows for probabilistic risk results. This information is important as it ties the uncertainty to the probability / confidence level. In addition to the mean (or P50) result, the entire range of results is known with the associated probability values (i.e. 90% or 10% confidence level etc.). Second, assessing the risk on a single Util scale allows for direct comparison between risks and between KSDs. This is critical as it allows management to focus on risk mitigation and response actions for those risks that are the highest rated overall.
Two figures are typical outputs of this type of assessment:
1. A probabilistic S-curve showing the risk profile (pre- vs. post-mitigation action) vs. probability value (percent confidence). This comparison shows the benefit of risk mitigation.
2. A risk dashboard created by Revay to track and communicate the change in risk profile at quarterly intervals. This high-level view quickly allows an understanding of the risk trends, areas of highest concern and impact of mitigation.
In conclusion, the key benefits of this ERM methodology used to assess the risk around FMAA’s corporate strategy are (a) the confidence and understanding around the top individual risks impacting the organization, (b) an awareness of the top objectives and KSDs that are at highest risk, (c) an understanding of the risk trends over time, with and without mitigation, and in a probabilistic manner, and (d) a dynamic process that can be easily adjusted as an organization changes course in tune with the external environment in which it exists. Only through the use of @RISK and applying novel quantitative techniques was this achievable.
Patrick Engineering is a nationwide U.S. engineering, design, and project management firm with a long history of success on a variety of complex infrastructure projects. Clients include key government agencies, private and public utilities and FORTUNE 500 companies in a broad range of industries. Patrick focuses on providing concept planning, engineering, pre-construction services, procurement of materials and construction management of heavy infrastructure projects. This work is accomplished with technical experts in the fields of civil, structural, hydraulic, environmental, geotechnical, electrical engineering, relay and protection, geology, surveying, construction management, process control and GIS.
Patrick provides independent cost estimating, scheduling, and risk analysis services for the Massachusetts Bay Transportation Authority (MBTA) as the basis for budgeting and reserving funds for future requirements. Palisade’s @RISK software is used exclusively for program quantitative risk analysis. Estimates of cost and schedule duration are uncertain values; the exact, discrete values are not known until the work is complete. Given this inherent uncertainty, estimates are best expressed as probability distributions of possible costs and schedule durations, which calls for statistical modelling. Patrick uses Palisade’s @RISK software to assess cost and schedule contingency needs based on project or program risks.
The Downtown Crossing Vertical Upgrade project was part of Massachusetts Bay Transportation Authority’s (MBTA) major elevator upgrade program to replace outdated and small elevators, as well as add new ones to meet accessibility requirements of the Americans with Disabilities Act. During design, several project risks were identified, such as subsurface conditions, pedestrian access and licensing agreements. According to Kim Kozak, Sr. Project Manager for Patrick Engineering, risk identification is just the first step in the determination of risk contingency. The development of the construction cost estimate and project schedule are the foundation of the risk model to which risks are linked. In addition to construction costs, other costs are factored, such as those attributed to client related costs and escalation.
“To do this, you need to work collaboratively with the client, with the designers, with the project management team, with anyone who will have insight into the project – you want them all involved,” explains Kozak. The full team for the Downtown Crossing station project included six organizations and approximately 20 team members, working with MBTA Project Controls led by Horace Cooper.
Patrick Engineering takes a unique approach to these types of complex projects: once they have the construction cost estimate and schedule, they hold a Quantitative Risk Workshop. This workshop is a collaborative brainstorming exercise that enables everyone involved to talk through specific elements of the project, as well as capture the impacts and probabilities of those impacts occurring. “We try to get people thinking, to encourage discussion, as even a small detail could lead to a giant risk,” said Kozak. “Similar to the situation with the Titanic – what seems like a minor detail could be enough to sink the entire ship.”
Through the Workshop process, the team captures enough detail for Patrick Engineering to build a risk model based on the minimum, most likely and maximum values of each risk impact, and to assign appropriate levels of probability to those risks. The company uses Palisade’s @RISK software to assess project contingencies based on the cost and schedule risks, using PERT and triangular distributions. As each project has unique risks, the input for their models is determined on a per-project basis. This input can include:
Kozak added, “Even if you’ve gone through the list of everything you can visualize on the drawings, there are still ‘unknown’ unknowns – you don’t know what you don’t know. So we included risks for the unknowns with the appropriate level of probability and impact as a percentage of schedule and cost elements for the Downtown Crossing Burnham Elevator project.”
While project contingency risks are typically captured as a percentage of total project costs, @RISK enables Patrick Engineering to assess contingency requirements at the appropriate level of risk. The software outputs provide them with a build-up of costs, including the level of accuracy of the estimate, as well as cost and schedule risks.
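Assessing contingency "at the appropriate level of risk" typically means simulating total cost from per-risk probability and impact distributions, then reading the contingency off a chosen percentile of the resulting distribution. The sketch below shows this for an assumed P80 confidence level; the risks, probabilities and dollar figures are invented, not Downtown Crossing data.

```python
# Sketch of risk-based contingency at a chosen confidence level:
# simulate total cost from per-risk distributions, then read contingency
# as the P80 total minus the base estimate. All figures are hypothetical.
import random

random.seed(3)
base_estimate = 10_000_000   # deterministic construction estimate ($)

def one_iteration():
    total = base_estimate
    # Risk 1: subsurface conditions. 30% chance; triangular impact
    # (low 200k, most likely 400k, high 900k). Python's argument order
    # is triangular(low, high, mode).
    if random.random() < 0.30:
        total += random.triangular(200_000, 900_000, 400_000)
    # Risk 2: licensing delays. 15% chance; triangular impact
    # (low 100k, most likely 250k, high 500k).
    if random.random() < 0.15:
        total += random.triangular(100_000, 500_000, 250_000)
    return total

totals = sorted(one_iteration() for _ in range(50_000))
p80 = totals[int(0.80 * len(totals))]
print(f"P80 contingency ~ ${p80 - base_estimate:,.0f}")
```

Reading contingency from a percentile rather than applying a flat percentage ties the reserve to the project's actual identified risks, which is the distinction the text draws.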
“The final results of Palisade’s @RISK models help us, and our clients, understand where projects could go – not where they will go,” explained Kozak. “And we know that if a risk does occur, we’re well prepared as we’ve already identified it.”
In reliability engineering, “too little, too late” is not an option – especially in industrial process facilities where asset failures can have significant consequences, financial and otherwise. Reliability engineers are now turning to data analysis and Monte Carlo simulation to predict and prevent failures.
San Antonio, Texas-based Zachry Industrial is an engineering and construction firm serving the refining, chemical, power, and pulp and paper industries. The Maintenance and Reliability Services Department at Zachry supports clients in the reliability, availability, and maintenance of their plant assets.
Over the years, Zachry reliability engineers working in client process plants have observed that failure rates can abruptly increase due to unintended and unrecognized changes in repair quality, or in operating or process conditions. These changed failure rates were slow to be recognized, often only after several additional failures had occurred. Zachry engineers therefore set out to recognize failure rate trends at the earliest possible time, so that negative trends could be turned around immediately. All new failures trigger an automatic analysis, so that statistically significant results are known before the repair begins, allowing the data analysis to influence inspection and repair plans. This immediate, selective intervention in reliability degradation eliminates failures that would otherwise occur.
Conventional failure-time analysis methods are slow to detect abrupt shifts in failure rates and require larger datasets than are often available, so new methods were developed. One method uses the Poisson distribution in reverse to identify failure times that do not fit the distribution. Probability values (p-values) quantify the likelihood that failure times are unusual relative to random variation. Failure times with strong statistical evidence are “alarmed” as issues of interest.
“As new data arrives, analysis is done contemporaneously, not because there is a problem of interest, but to see if there may be a problem of interest,” said Kevin Bordelon, Senior Director of Operations at Zachry Industrial.
Data for trend detection is drawn from work-order records for individual assets residing in a client’s Enterprise Resource Planning system. These assets range from large machines to small instruments, so there can be tens of thousands of individual assets within a manufacturing plant. While the total database is huge, each individual asset’s dataset can be extremely small. Traditional methods either cannot be used on very small datasets, or produce confidence intervals so wide as to be unreliable. (A confidence interval indicates how far an estimate may be from the true value.)
Every maintenance action request for a particular asset triggers extraction of that asset’s historical data. Using prior maintenance-action dates, Poisson p-values are generated automatically. The idealized “textbook” expectation for repairable equipment is that the Poisson distribution characterizes failure counts over time intervals. Because this textbook expectation is exactly the opposite of what is needed to identify the unusual special-cause failures of interest, the Poisson is used in reverse, as a null-hypothesis distribution. Low Poisson p-values suggest that failure times are unlikely to be random variation from the textbook expectation, and are therefore likely to be special-cause failures that should be investigated.
“Using datasets much smaller than normal and using unfamiliar data analysis methods makes having a statistically sound measure of confidence in the results absolutely essential. This is done by forming probability distributions around the p-values. These distributions show the variation in the p-value if the equipment’s reliability condition could be resampled thousands of times. Of course this is physically impossible, but it is easily done by computer simulation in @RISK,” explains Bordelon. “Combining Excel’s flexibility with @RISK allows a single @RISK simulation to simultaneously generate probability distributions around all of the numerous p-values.”
Failure dates are used to determine the time between failures (TBF); each failure is characterized by its TBF value. Upon each failure, that TBF produces a p-v1 value: the probability of one or more failures occurring in the TBF interval when the mean time between failures (MTBF) is determined from all prior TBF values. A look back over two TBF values determines a p-v2 value: the probability of two or more failures occurring within an interval equal to the sum of the last two TBF values. Upon each failure, this process is repeated for all prior TBF values, producing a probability map (see figure below).
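The look-back p-value calculation described above can be sketched in a few lines of Python. This is a hypothetical reconstruction of the method as described, not Zachry’s actual implementation; in particular, estimating the MTBF from the TBFs that precede each look-back window is one plausible reading of the text.

```python
import math

def poisson_sf(k, mu):
    """P(N >= k) for a Poisson(mu) count."""
    return 1.0 - sum(math.exp(-mu) * mu**i / math.factorial(i) for i in range(k))

def p_value_map(tbf):
    """For each failure after the first, compute look-back values p-v1..p-vn.

    p-vk = P(k or more failures within the combined interval of the last
    k TBFs), with MTBF estimated from the TBFs preceding the look-back
    window (an assumed convention)."""
    rows = []
    for n in range(1, len(tbf)):            # failure index; needs >= 1 prior TBF
        row = {}
        for k in range(1, n + 1):
            prior = tbf[:n + 1 - k]         # TBFs before the look-back window
            mtbf = sum(prior) / len(prior)
            interval = sum(tbf[n + 1 - k:n + 1])   # last k TBFs combined
            row[f"p-v{k}"] = poisson_sf(k, interval / mtbf)
        rows.append(row)
    return rows
```

For an illustrative history of [10, 10, 1] (days), the third failure’s p-v1 is about 0.10, flagging the abrupt quick failure as worth investigating.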
The probability map is developed progressively with each failure. Upon each failure, the p-values are calculated using only the data that existed at the time, blind to future data. This allows developing trends to be seen as failures are experienced. Missed opportunities to take action also stand out, since significant probability values can be traced backward in time. For example, in the figure below, the last four failures had continually smaller p-values, indicating increasing strength of statistical evidence of reliability degradation. With only the data that existed at the 10th failure, the p-v1 value suggests that one quick failure is unlikely to be random variation and is likely a special-cause failure that should be investigated. Had this been done, the next three failures could have been avoided.
“To say that a single fast-acting failure establishes a statistically significant trend is unprecedented and demands confidence limits around the measurement. But @RISK solves the confidence problem by efficiently developing probability distributions for every p-value in a single simulation,” Kevin Bordelon says. “The methods for obtaining confidence levels around the probability values are inconceivable without @RISK, and to accept the new analysis methods you have to have confidence in the results.”
CONTI-Group has been a leader in the shipping industry since 1970. In addition to operating a large, deep-sea fleet in Germany, the company is an established global provider of ship investment funds. After the European Union introduced regulatory requirements for better visibility into capital market investments, CONTI began using Palisade’s @RISK to construct financial models that capture and assess potential risks in the fluctuating shipping market. This has enabled the company to provide higher levels of transparency to its investors, successfully steer its shipping fleet, and subsequently generate above-average results on its investments.
Commercial shipping is one of the most efficient modes of transport for cargo, both economically and ecologically. Tangible shipping assets, including tankers and carrier vessels, represent an important form of financial investment. However, after the financial crisis of 2007-2008, the European Union introduced new requirements for better regulation and supervision of the financial sector. The objective of these regulations was to introduce a new level of protection for investors by appropriately regulating and supervising all markets, and included the provision of quality and transparency in all capital market investments.
In the past, CONTI constructed models manually in Excel, which took significantly more time and effort. With @RISK, the company can build the models easily, then provide high-level visualizations of the Monte Carlo simulations in a format that is easily understood.

The resulting output gives CONTI the probability of the expected value for a new investment fund. The company typically uses PERT distributions, defined by minimum, most likely and maximum values, to model the probability of each return rate, from 0% (bankruptcy) to 100% return on investment (ROI).

CONTI has been commended by leading investment analyst firms, receiving multiple awards for its transparency and performance. The company has also returned strong dividends on more than 50 funds.

"Shipping is a future-oriented market that offers lucrative investment opportunities. However, there are also many unknowns," said Stobinski. "Palisade's @RISK software helps us identify challenges over the lifetime of a project, then evaluate the decisions we’ll need to make and future actions we’ll need to take for the good of the company and our shareholders."
With @RISK, CONTI provides industry-leading levels of transparency for its investors, as well as for BaFin. At the same time, the Palisade software provides the company with a risk management solution that enables it to quickly identify any potential changes in the markets, including the possible effects, and better monitor and manage its investment funds.
Building a model for a new investment fund comprises a wide range of factors. CONTI takes into consideration the full range of costs associated with building or buying and then insuring a vessel. They then consider the potential costs and revenues generated with the charter of that vessel, which is impacted by the type of ship, the duration of the potential charter, and the freight rate. “The type of vessel is very important. At the moment, a ‘bulker’ is typically used for a short-term, single journey, while a container vessel is typically a longer-term charter, sometimes for up to 10 years. This provides a higher hire rate and higher income,” said Stobinski.
CONTI also analyzes the operating expenses associated with running the vessel, including marine insurance and labor and fuel costs. Currency exchange rates, interest rates and amortization rates are also included in the model, as fluctuations in these rates can have a significant impact on the potential return on investment. “We need to consider all the factors that influence the profit or loss associated with each project. While the vessel hire rate is important, we also need to consider how much it will cost us to man the crew of the vessel and how much it will cost to insure the vessel, as well as how much interest we will have to pay the bank for any loans,” added Stobinski. For a typical model, CONTI inputs data for more than 30 different variables.
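To illustrate how such a charter-economics model feeds a Monte Carlo simulation, here is a stripped-down sketch in plain Python, mimicking @RISK’s PERT inputs via the beta distribution. Every figure is invented for illustration; CONTI’s actual model uses more than 30 variables.

```python
import random

def pert(mn, mode, mx, lam=4.0):
    """Sample a (modified) PERT distribution via its beta representation."""
    a = 1.0 + lam * (mode - mn) / (mx - mn)
    b = 1.0 + lam * (mx - mode) / (mx - mn)
    return mn + random.betavariate(a, b) * (mx - mn)

def simulate_fund_returns(n_iter=10_000):
    """Toy one-vessel fund model; every figure is an invented placeholder."""
    results = []
    for _ in range(n_iter):
        charter_rate = pert(8_000, 12_000, 20_000)   # USD/day (hypothetical)
        opex = pert(4_000, 5_500, 8_000)             # USD/day (hypothetical)
        utilisation = pert(0.80, 0.92, 0.98)         # chartered fraction of days
        annual_margin = (charter_rate * utilisation - opex) * 365
        invested = 25_000_000                        # hypothetical vessel cost
        results.append(annual_margin / invested)     # crude annual return proxy
    return sorted(results)

returns = simulate_fund_returns()
var_5 = returns[int(0.05 * len(returns))]            # 5th-percentile return
```

The 5th-percentile figure plays the role of the downside measure in the output distribution: the return the fund should beat in 95% of simulated scenarios.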
Managing Director for Risk Management, Compliance and Asset Valuation, CONTI KVG
(E = expected value, STD = standard deviation, "Prozent IRR < 0" = probability of default)
Palisade’s risk analysis software @RISK is being used by aquatic veterinary surgeons to demonstrate the practice of biosecurity to aquatic farmers. The method helps to reduce the potential for disease in animals without incurring the significant costs of extensive testing. Only a small number of data inputs are required, and thousands of simulation iterations then produce accurate results that inform decision-making.
It is estimated that the human population will reach nine billion by 2050. The Food and Agriculture Organization (FAO) believes that aquaculture, which currently provides around half of the fish and shellfish eaten around the world, is the only agricultural industry with the potential to meet the protein requirements of this population. However, one of the biggest constraints to achieving this is the depletion of stock levels through disease. In 1997, the World Bank estimated that annual losses amounted to $3 billion, and current figures suggest that 40 percent of insured losses are due to disease.
Biosecurity measures, which aim to prevent, control and ideally eradicate disease, are regarded as essential. However, encouraging the adoption of these practices is often difficult due to farmers’ levels of education, training, responsibility and perceived economic benefit. In addition, global estimates of disease losses may appear remote and irrelevant to farmers and producers faced with making a rational choice from scarce data and, often scarcer, resources.
Dr Chris Walster is a qualified veterinary surgeon with a long-standing interest in aquatic veterinary medicine, and is the secretary to the World Aquatic Veterinary Medical Association (WAVMA). Having seen Palisade’s risk analysis tool, @RISK, demonstrated, he started using it to calculate the realistic risk of aquatic disease to farms, with a focus on cases where data inputs were limited.
@RISK’s capacity to present the calculations in graphs that are easy to understand also makes it straightforward for vets to show farmers disease risk probabilities. With this information readily available, the cost/benefit of disease prevention can be calculated, and farmers can make informed choices about whether to put controls in place.
For example, a farmer might plan to import 1000 fish to their farm. The cost to accurately determine the disease status of these fish may be uneconomic, but testing a small sample will not give sufficient evidence on which to base an informed purchase decision.
However, testing 30 of the fish and running simulations using @RISK gives the probability distribution for how many fish would be found diseased if more were tested. In other words, it provides the farmer with a more accurate picture of the risk connected to purchasing the stock.
If there is no information as to whether the fish carry a disease of interest, a disease prevalence of 0.5 must be assumed (a 50/50 probability), so testing 30 of them would be expected to find 15 diseased and 15 not. However, because tests are rarely 100% accurate, when interpreting a test result its validity, or how well it performs, must also be accounted for. This requires knowing the test characteristics, sensitivity (the probability that a truly diseased fish tests positive) and specificity (the probability that a truly disease-free fish tests negative), along with the disease prevalence (or likelihood).
Introducing a sensitivity of 80%, for example, reduces the expected number of fish testing positive to twelve (15 × 0.8). Using a specificity of 98%, the simulation is then run 10,000 times to produce enough values, and these are used to produce a graph showing the likely minimum, maximum and mean prevalence of the disease.
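The effect of imperfect test characteristics can be reproduced with a small simulation. The sensitivity and specificity below are the figures from the example; the simulation structure is an illustrative sketch, not the actual @RISK model.

```python
import random

SENS, SPEC = 0.80, 0.98        # test sensitivity and specificity from the example
N_FISH, N_SIMS = 30, 10_000

def apparent_positives(true_prev):
    """One simulated batch: diseased fish test positive with probability SENS,
    healthy fish test (falsely) positive with probability 1 - SPEC."""
    pos = 0
    for _ in range(N_FISH):
        diseased = random.random() < true_prev
        if random.random() < (SENS if diseased else 1.0 - SPEC):
            pos += 1
    return pos

# Under the naive 0.5 prevalence assumption, the expected positive count is
# 30 * (0.5*0.80 + 0.5*0.02) = 12.3 -- close to the "twelve" quoted above.
counts = [apparent_positives(0.5) for _ in range(N_SIMS)]
mean_pos = sum(counts) / N_SIMS
```

Collecting the 10,000 counts into a histogram gives the kind of minimum/mean/maximum prevalence graph described in the text.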
This simple example helps to generate understanding amongst farmers that they do not need to undertake extensive testing programs to obtain accurate results about disease levels in fish.
Further evidence can be gathered by supplementing the @RISK distribution graphs with prior knowledge, facts that are already known and accepted. For example, international regulations make it illegal to transport sick animals. Therefore, if a particular disease shows obvious symptoms, it is reasonable to assume (using expert judgment) that its prevalence is no higher than 10%, or the seller would have noticed that the fish were sick and could not sell them. Once again only 30 fish are tested, but this time @RISK is used with a PERT distribution encoding expert opinion: a minimum of 1%, most likely value of 5% and maximum of 10%. Running the @RISK simulation 10,000 times again, this prior knowledge can change the results substantially.
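Incorporating the expert-opinion prior is a small change to such a simulation: instead of assuming 50% prevalence, each iteration draws a prevalence from PERT(1%, 5%, 10%). Again, this is a hedged sketch rather than the actual model.

```python
import random

SENS, SPEC, N_FISH = 0.80, 0.98, 30   # test characteristics from the example

def pert(mn, mode, mx, lam=4.0):
    """PERT sample via its beta representation."""
    a = 1.0 + lam * (mode - mn) / (mx - mn)
    b = 1.0 + lam * (mx - mode) / (mx - mn)
    return mn + random.betavariate(a, b) * (mx - mn)

positives = []
for _ in range(10_000):
    prev = pert(0.01, 0.05, 0.10)     # expert-opinion prior on prevalence
    pos = sum(
        random.random() < (SENS if random.random() < prev else 1.0 - SPEC)
        for _ in range(N_FISH)
    )
    positives.append(pos)

# With the informed prior, only around 2 of 30 fish test positive on average,
# far below the ~12 expected under the naive 50% assumption.
mean_pos = sum(positives) / len(positives)
```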
With this knowledge, the farmer can now decide on the next course of action. They may decide they are happy with the potential risk and buy the fish. Equally they may want more certainty and therefore test more fish or use additional tests. Finally they may feel that the risk is too great and research other sources.
“@RISK enables farmers to reduce the risk of disease spreading amongst their animals whilst minimizing additional costs,” Walster explains. “For aquatic vets, the key is the graphs, which allow us to demonstrate a complex probability problem quickly and simply in a way that is easy to understand and trust. These inform decision-making, thereby helping to boost the world’s aquatic stock whilst safeguarding farmers’ livelihoods.”
“This technique also potentially offers an economical method of assisting in the control of many diseases. Farmers undertake their own tests, with each of these providing incremental inputs so that the macro picture can be developed and acted upon,” concludes Walster.
B. Riley Advisory Services provides transaction, commercial analytics, litigation, and valuation opinion services to oil and gas companies. Its expertise is respected in an industry where the unique volatility of oil and gas prices can make these services difficult to deliver. B. Riley uses probabilistic rather than deterministic methods to give its clients accurate results.
The B. Riley team aided one oil and gas client [anonymously named “OilCo” for this case study] that faced “Chapter 22”, the term for a company that files for bankruptcy a second time within two to three years. OilCo hoped to avoid Chapter 22 by selling the company to a potential buyer. However, restructuring can prove difficult for a company whose top-line revenue is driven by volatile oil and gas prices. B. Riley knew that the assumptions driving the cash flow models used to restructure a distressed oil company should be simulated, and Palisade Corporation software provides the tools to make that happen.
“We relied on @RISK to capture both the variability and volatility in the possible range of outcomes for OilCo,” said Dan Daitchman, Director at B. Riley Advisory Services. “A single scenario simply would not have accurately captured the economics of OilCo and the industry.”
@RISK’s data-driven probabilistic simulations can readily handle post-bankruptcy structures that endure severe price movements. These capabilities then help the companies, lawyers, and bankers avoid over-levering the company a second time.
The analysts incorporated these variables into their calculations for determining OilCo’s sale price. They estimated the downward price adjustment resulting from the net operations proceeds to be $21.3 million. Other net adjustments reduced the purchase price by $11.75 million, resulting in a purchase price of $205.7 million.
Using the reorganization plan draft, the team estimated that the net proceeds from settling the remaining assets and liabilities through bankruptcy would be $13.5 million, and that the combined net proceeds from the sale of OilCo and that settlement would be $165.2 million. This recovery could fall to $145.9 million if the title defects were asserted at the maximum amount and the bankruptcy were prolonged.
The team next prepared a “Plan B” scenario which compared the bank recoveries under a “hold” (e.g., credit bid) scenario versus the sale scenario discussed above. Plan B assumed the sale of the nonstrategic assets at year-end for $35 million with the proceeds used to pay down the bank debt.
The B. Riley team then performed a review of general and administrative expenses and reduced amounts for a leaner company by outsourcing certain functions. Based upon their review and models, the team estimated that OilCo would have enough cash flow to operate through the end of the next fiscal year.
Next, the B. Riley team estimated the net proceeds from a sale of the properties in the third or fourth quarter of the next fiscal year. They discounted future oil reserve cash flows by 20% and risked Proved Developed Producing (PDP), Proved Developed non-Producing (PDNP) and Proved Undeveloped (PUD) values by 15%, 50% and 100%, respectively. This is a conservative valuation by historical standards, and few transactions were completed in this challenging time frame. Based on these analyses, the team estimated the bank recovery under a Plan B scenario to be approximately $133.2 million.
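The risking arithmetic works as in this sketch, where the reserve values are invented placeholders and only the risking factors (15%, 50%, 100%) come from the analysis described.

```python
# Reserve values (USD millions, already discounted at 20%) are invented
# placeholders; the risking factors are the ones quoted in the text.
pv20 = {"PDP": 150.0, "PDNP": 40.0, "PUD": 60.0}
risking = {"PDP": 0.15, "PDNP": 0.50, "PUD": 1.00}

# PDP keeps 85% of value, PDNP keeps 50%, PUD is risked away entirely:
risked_value = sum(v * (1.0 - risking[cat]) for cat, v in pv20.items())
# 150*0.85 + 40*0.50 + 60*0.00 = 147.5
```

Risking PUD by 100% means undeveloped reserves contribute nothing to the conservative valuation, which is why the approach is described as conservative by historical standards.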
Overall, the B. Riley team estimated the bank recovery resulting from the reorganization plan draft to be about $165.2 million, with a downside case of $145.9 million. In the Plan B scenario, the bank recovery would be $143.2 million. Since the recovery estimates were so close, the team advised that each bank decide which approach to take based on their specific appetites for risk.
Ultimately, OilCo selected the first option, avoiding the Plan B scenario. Thanks to the B. Riley team’s consulting, the company was able to successfully negotiate a deal with a potential buyer and has avoided “Chapter 22” to this day.
The B. Riley team considers @RISK an indispensable tool for their work in the oil and gas industry, where multiple variables can interact and shift in unpredictable ways. “There were a lot of moving parts in this transaction, and each stakeholder had a unique viewpoint,” said Daitchman. “@RISK helped us navigate all of the various outcomes and provide clear solutions that all stakeholders could use to negotiate. @RISK was invaluable in this case; we couldn’t have reached a successful outcome without it.”
Meeting schedule milestones in a timely manner is paramount in any industry, but it is particularly important for those working with the Department of Defense, where deliverables can be a matter of national security and where program delays can become very expensive due to large, highly compensated staffs. A major aerospace company was contracted by the Navy to build defensive missiles designed to ward off missile attacks from hostile nations such as North Korea, and was required to provide accurate estimates of its timeline and deliverables, including conceptual and detail designs, manufacturing, test, integration, and delivery.
To do this, the company previously used an outdated schedule risk analysis method, and was informed by the Navy that it had to adopt a more quantifiably rigorous approach. To solve this dilemma, the firm brought on Jim Aksel from ProjectPMO of Anaheim, California, as a consultant to help adopt a more accurate method for evaluating its timeline.
“They didn’t have a Monte Carlo simulation tool,” says Aksel. “They were adding duration margins to many tasks without knowing if that amount of time was correct or to the correct tasks. It was simply a guesstimation.”
When Aksel first came on to the project, he and his team took a hard look at the company’s existing schedule for the missile development. “After examining the schedule, cutting up and taking out the ‘junk,’ we realized they didn’t have a credible critical path.” A critical path is the sequence of activities which result in the longest path through the project; this sequence is the key factor in determining how long the project will take. A delay to any task on the critical path extends the entire project duration. Aksel was able to separate the tasks in the timeline, remove unnecessary logic and date constraints, and determine a critical path, thus enabling the next step in the schedule risk analysis process—Monte Carlo simulation.
Monte Carlo simulation performs risk analysis by building models of possible results, substituting a range of values—a probability distribution—for any factor that has inherent uncertainty. In a schedule risk analysis, the uncertain variables are the remaining durations of unfinished tasks. The simulation then calculates results over and over, each time using a different set of random values from the probability functions. Aksel chose Palisade’s @RISK software to conduct these calculations for the aerospace firm. Before the calculations could be run, Aksel needed inputs for the schedule model: specifically, a range of estimated durations for each task. To get them, Aksel interviewed the engineers and other key players involved in the project for their best estimates of each task’s duration.
“We needed to give them some information about what exactly we needed,” says Aksel. To help them understand, Aksel used the analogy of a morning commute. “I want to know, on a typical day with typical traffic, what’s the earliest you’d get to work—not how long the drive would take on a Sunday morning at 4 AM,” he explains. Staying with the analogy, he also wanted the latest arrival time on a typical day, not the day a truck overturns and blocks the freeway. For the missile program, Aksel essentially wanted a range of typical durations, without the rare extremes. @RISK can process such distributions as part of its setup. Each task then carries three estimates of remaining duration: optimistic, most likely, and pessimistic. The values need not be symmetrical.
The aerospace engineers provided their educated estimates for each task on the critical path, backed by quantifiable data (such as prior history of similar tasks or incremental bottom-up estimates). Aksel plugged these inputs into @RISK’s schedule model via Microsoft Project. Tasks not lying directly on a critical path were “banded” using percentages such as Most Likely Duration ±20% (again, the window does not have to be symmetrical). After running the simulation, the team got a distribution of durations for tasks identified by @RISK, including estimated date windows for contractual program milestones such as System Requirements Review, Preliminary and Critical Design Reviews, and First Flight. From this information, the team was able to estimate, with numeric certainty, the probability of contractual milestones occurring no later than specific dates. These dates can then be used as a basis for performance incentive payments. This changed the timeline for deliverables.
“Previously, the engineering team had just added 22 days of margin to a task to ensure enough buffer time to complete things,” reports Aksel. “But after running the Monte Carlo simulation, we found that 95% of the time, you would only need 10-14 days of margin time for that milestone.”
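A schedule risk analysis of this kind reduces, in essence, to sampling each critical-path task from its three-point distribution and reading off a percentile of the total. The task list below is hypothetical, and a triangular distribution stands in for whichever shape @RISK would actually be configured with.

```python
import random

# Hypothetical serial critical-path tasks, in days:
# (optimistic, most likely, pessimistic)
TASKS = [(8, 10, 16), (4, 5, 9), (12, 15, 24), (6, 8, 13)]
BASELINE = sum(ml for _, ml, _ in TASKS)   # deterministic "most likely" plan

def one_run():
    # random.triangular(low, high, mode) samples a three-point estimate
    return sum(random.triangular(o, p, m) for o, m, p in TASKS)

totals = sorted(one_run() for _ in range(10_000))
p95 = totals[int(0.95 * len(totals))]
margin_needed = p95 - BASELINE             # buffer for 95% schedule confidence
```

Comparing `margin_needed` with a flat, guessed buffer is exactly the comparison Aksel describes: the simulation may show that far less margin is needed than was being added by habit.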
By cutting out the ‘junk’ tasks in the project schedule, and using @RISK to more accurately determine the durations between milestones, Aksel and his team were able to shorten some estimates by six to eight weeks!
Aksel says that the @RISK model outputs can sometimes surprise the experts. However, since the experts provided the data for the inputs, “you don’t squabble about the output,” says Aksel. Instead, “you get to process it.” He adds that one common question is, ‘Have we performed a sufficient number of iterations?’ Given the output of the model and the desired level of precision, this becomes a simple mathematical calculation. “If there are not enough iterations, you can run the model again, or determine the precision that exists in the current model and decide if it is sufficient,” he says.
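The “enough iterations” check Aksel describes is indeed simple arithmetic: from the spread of the outputs already simulated, estimate how many iterations are needed for the mean to reach a desired precision. A sketch, where the 1.96 z-score corresponds to roughly 95% confidence:

```python
import math
import statistics

def iterations_needed(samples, half_width, z=1.96):
    """Iterations required for the simulated mean to fall within
    +/- half_width of the true mean at ~95% confidence, estimated
    from the sample standard deviation."""
    s = statistics.stdev(samples)
    return math.ceil((z * s / half_width) ** 2)
```

If the current run falls short of this number, one can either run more iterations or simply report the precision the existing run actually supports.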
For this particular project, Aksel explains, the model matched the engineering team’s original estimates fairly closely, with only a few exceptions. “It helped solidify that we had true belief in our logic,” says Aksel. “That’s the best success we could ask for.” The program manager then has the unenviable task of making sure the team has the resources necessary to perform the tasks as scheduled without resource overloading—that is, overburdening certain team members or groups. Clearing and avoiding resource overloads is a necessity in performing a credible analysis.
Aksel goes on to describe his favorite feature of @RISK: formulas, data, and models are all accessible and viewable in one place. “It lets me see what’s going on where,” he says. “I’m able to see everything in one place, so there’s less mystery. I showed the team the inputs and made sure there was consensus from everyone on all the inputs. It is important to include all stakeholders in this process. When two stakeholders, with different interests, can intelligently discuss the probability distributions and the triple-point estimates in play, then you know people are going to take the outputs of @RISK seriously.”
In conclusion, Aksel falls back on an axiom from his college statistics textbook: “You need to use statistics like a streetlight – for illumination – not like a drunkard who uses the streetlight for support.”
Dr. Etti Baranoff, an Associate Professor of Insurance and Finance at Virginia Commonwealth University in Richmond, uses @RISK in her business school class ‘Managing Financial Risk’ (taught every semester). In this class, students learn to apply Value at Risk (VaR) analysis to understand measures of risk and to apply risk management tools. Dr. Baranoff’s students use @RISK to analyze which accounting inputs (from both the balance sheet and the income statement) can be most damaging to the net income and net worth of a selected company. The main analyses determine the inputs contributing most to the VaR of net income and net worth, along with stress analysis and sensitivity analysis. @RISK fits the best statistical distribution to the historical data for each input, and its Monte Carlo simulation functionality is then used.
While uncovering these quantitative results for each case study of the selected company, the students try to match these “risks” with the risks they map using the company’s 10-K reports. Finding the most damaging risks (proxied by accounting data) and placing them on the risk map provides an overall enterprise risk management view of the cases under study.
Since 2009, many cases have been developed by the students, who major in financial technology, finance, risk management and insurance, and actuarial science. Featured here are two cases from Fall 2014. At the end of the semester, the class creates a large table with inputs from each of the cases studied in the class. Table 2 is used to compare the results of the companies and evaluate the risks. The class acts as a risk committee providing analytical insights and potential solutions.
For Amazon, the group used @RISK to assign a distribution to each input variable collected. The data used was annual and quarterly; here we feature the quarterly results for the net worth analysis.
Each group is asked to create the statistical distributions of the historical data for each input, with and without correlations (the correlations used are shown in Table 1). The simulations are run with and without the correlations, and the runs are then compared. Dr. Baranoff explains: “Without correlation, these results are not appropriate, since size is the most influential factor. By correlating the inputs, the size effect is mitigated. I have them do this first to show the size influence and the importance of the correlation.”
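@RISK induces correlation between inputs internally (via rank-order methods); the effect can be sketched with a simple Gaussian-style example in which two balance-sheet inputs share an assumed correlation of 0.7. All figures are invented; the point is that correlated draws, not the marginal distributions alone, drive the tail of net worth.

```python
import math
import random

RHO = 0.7   # hypothetical correlation between two balance-sheet inputs

def correlated_pair(mu1, sd1, mu2, sd2, rho=RHO):
    """Two correlated normal draws via the 2x2 Cholesky factor."""
    z1, z2 = random.gauss(0, 1), random.gauss(0, 1)
    return (mu1 + sd1 * z1,
            mu2 + sd2 * (rho * z1 + math.sqrt(1 - rho * rho) * z2))

# Toy net worth = correlated assets minus liabilities (USD millions, invented)
sims = []
for _ in range(10_000):
    ppe, cash = correlated_pair(60.0, 15.0, 40.0, 10.0)
    liabilities = random.gauss(70.0, 8.0)
    sims.append(ppe + cash - liabilities)

sims.sort()
var_5pct = sims[int(0.05 * len(sims))]                  # 5th-percentile net worth
prob_negative = sum(s < 0 for s in sims) / len(sims)    # chance of negative net worth
```

Because the positive correlation widens the combined asset distribution, both the 5% VaR and the probability of negative net worth come out worse than an uncorrelated run would suggest.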
For Amazon, the results for net worth using quarterly data, with correlation among the inputs, are shown as the VaR in Figure 1.
Figure 1: Value at Risk (VaR) of Amazon net-worth with correlation among the inputs
The quarterly data showed a probability of negative net worth at the 5% value-at-risk level, with ‘Property, plant, and equipment’ and ‘Cash and short-term investments’ as the strongest influencers of net worth. “So as far as the net worth goes,” says Dr. Baranoff, “Amazon is a strong company.” She adds, “Interestingly, the statistical distributions fitted for these inputs are an Exponential distribution for net ‘Property, plant, and equipment’ and an ExtValue (extreme value) distribution for ‘Cash and short-term investments.’” These are shown in the following two graphs.
Figure 2: Amazon: Examples of statistical distributions for inputs
Table 1: Correlation among the applicable inputs for Amazon’s net-worth
Figure 3: Sensitivity Analysis for Amazon net-worth with correlation among the inputs
Verifying the VaR results, the sensitivity analysis shows how much each input contributes to net worth. Again, as expected, ‘Property, plant, and equipment’ has the steepest slope: as its base value changes, it has the biggest impact on net worth.
Figure 4: Stress Test for Amazon net-worth with correlation among the inputs
For the stress test, it appears again that ‘Property, plant, and equipment’ can stress Amazon’s net worth at the 5% VaR level.
While it is not shown here, the project also includes an examination of the inputs impacting the net income.
When the project begins, the students create a qualitative Risk Map for the company; Figure 5 is the Risk Map for Amazon. This is done independently of the @RISK analysis. The students study the company in depth using the 10-K, including all the risk elements the company faces, and create a risk map by ranking the risks by frequency and severity of the potential losses from each risk exposure. After completing the @RISK analysis, they compare the results for net worth and net income with the qualitative Risk Map’s inputs.
Figure 5: Risk Map for Amazon – Qualitative Analysis based on 10K Report
The @RISK analysis revealed that ‘Property, plant and equipment’ has the greatest potential to destroy Amazon’s net worth. In terms of the qualitative risk map, the connection would be to the risk of ‘Supply Chain Interruption’: any problem with the plants’ equipment will lead to supply chain risk. Another connecting input is ‘Goodwill,’ a proxy for ‘Reputational Risk’ in the Risk Map. While the students’ qualitative analysis rated it high severity and high frequency, the VaR analysis in Figure 1 shows it having only a medium impact on Amazon’s net worth. Similarly, ‘Goodwill’ has an impact on the stress analysis in Figure 4, but not as high as the Risk Map implies.
In this short article, Dr. Baranoff has not discussed all the analyses done with @RISK or all their relationships to the risks (inputs) in the Risk Map. She says the students were able to draw conclusions about how Amazon should plan for the future: “In order to have high sales revenue, Amazon will need to maintain an excellent reputation and keep prices competitive to avoid reputational risk and a decline in market share,” she says. “Also, to avoid weather disruption risk and supply chain interruption risk, Amazon will need to diversify the locations of its properties and keep the warehouses spread across the country.”
Dr. Etti Baranoff
Dr. Baranoff’s students also analyzed the risk factors of FedEx, with the same objectives as for Amazon. The students gathered financial data from S&P Capital IQ, as well as from the FedEx investor relations website. With the information in hand, the students used @RISK for distribution fitting, stress analysis, and sensitivity analysis.
When analyzing FedEx’s risk factors, the students used FedEx’s 10K and then related the risk factors to the quantitative analysis using @RISK. A number of key areas came up, including Market risk, Reputational risk, Information Technology risk, Commodity risk, Projection risk, Competition risk, Acquisition risk, and Regulatory risk. The students created a risk map of all the factors, identifying the severity and frequency of each as shown in Figure 6.
Figure 6: Risk Map for FedEx – Qualitative Analysis based on 10K Report
The distribution fitting for most of the inputs on the FedEx income statement produced a uniform distribution. This made sense to the students given data from the last ten years, as FedEx has been a mature company strategically positioning itself to confront changing market conditions. Net income was negative at the 45% VaR for the uncorrelated data, against the 40% VaR for the correlated data. ‘Revenue’ and ‘Cost of goods sold’ are the two major contributors to net income.
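Distribution fitting of the kind @RISK performs can be sketched with SciPy: fit several candidate families to a series and rank them by a goodness-of-fit measure. The data below are a synthetic stand-in (not FedEx's figures), and the Kolmogorov-Smirnov statistic is used here as a simple ranking criterion in place of @RISK's own fit ranking.

```python
import numpy as np
from scipy import stats

rng = np.random.default_rng(5)

# Synthetic quarterly revenue series ($B) -- illustrative only.
data = rng.uniform(15, 18, size=40)

# Fit candidate families and rank by the Kolmogorov-Smirnov statistic.
candidates = {
    "uniform": stats.uniform,
    "norm":    stats.norm,
    "lognorm": stats.lognorm,
}
ks_stats = {}
for name, family in candidates.items():
    # Fix lognorm's location at 0 so the MLE fit is well behaved.
    params = family.fit(data, floc=0) if name == "lognorm" else family.fit(data)
    ks_stats[name] = stats.kstest(data, name, args=params).statistic

best = min(ks_stats, key=ks_stats.get)
print({k: round(v, 3) for k, v in ks_stats.items()})
print("best fit by KS:", best)
```

With only 40 points (footnote 2 notes the students' minimum of nine), several families can fit a flat series about equally well, which is consistent with uniform emerging for most income-statement inputs.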
Figure 7: Value at Risk (VaR) for FedEx net-income with correlation among the inputs
The sensitivity analysis also confirms that ‘Revenue’ and ‘Cost of goods sold’ have the most influence. They are very large amounts compared to the other inputs on the income statement.
Figure 8: Sensitivity Analysis for FedEx net-income with correlation among the inputs
For the net worth, once again the most common distribution was the uniform, though there were a couple of normal distributions. Pensions had a lognormal distribution, one of the distributions most commonly used by actuaries. Without correlation, ‘Property, Plant and Equipment’ (PPE) was the most influential input, but when the simulation was run on the correlated balance sheet the effects evened out to the point where almost all the inputs had an equal effect on net worth.
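@RISK induces rank-order correlation among inputs; the same effect can be sketched with a Gaussian copula. The marginals and correlation matrix below are hypothetical, chosen only to mirror the families just mentioned (uniform for most inputs, lognormal for pensions), not fitted to FedEx's balance sheet.

```python
import numpy as np
from scipy import stats

rng = np.random.default_rng(7)
N = 100_000

# Hypothetical marginal distributions ($B) -- illustrative only.
marginals = [
    stats.uniform(loc=30, scale=20),    # PP&E
    stats.uniform(loc=10, scale=10),    # receivables
    stats.lognorm(s=0.4, scale=15.0),   # pensions
]

# Target correlation matrix for the underlying Gaussian copula.
R = np.array([[1.0, 0.6, 0.3],
              [0.6, 1.0, 0.2],
              [0.3, 0.2, 1.0]])

# Correlated normals -> uniforms -> each marginal's inverse CDF.
z = rng.standard_normal((N, 3)) @ np.linalg.cholesky(R).T
u = stats.norm.cdf(z)
samples = np.column_stack([m.ppf(u[:, i]) for i, m in enumerate(marginals)])

print(np.corrcoef(samples, rowvar=False).round(2))
```

The realised Pearson correlations come out slightly below the copula targets once the non-normal marginals are applied, which is the usual behaviour of rank-based correlation methods like the one @RISK uses.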
Figure 9: Value at Risk (VaR) for FedEx net-worth with correlation and without correlation among the inputs - Figures on left with correlation. Figures on right without correlation.
FedEx Corporation has eight operating companies (FedEx Express, FedEx Ground, FedEx Freight, FedEx Office, FedEx Custom Critical, FedEx Trade Networks, FedEx Supply Chain, and FedEx Services). Since FedEx started with FedEx Express and acquired the rest of the operating companies, the data for FedEx Custom Critical, FedEx Office, FedEx Trade Networks, FedEx Supply Chain, and FedEx Services are reported within FedEx Express. The segments used by the students are therefore FedEx Express, FedEx Ground and FedEx Freight. FedEx Express was the highest earner among the segments, yet it had the lowest profit margin, followed by Freight and then Ground. Whereas Freight and Ground operate only vehicles, Express also operates aircraft, so its higher expenses are logical: aircraft operations are much more expensive than vehicle operations. For the segments without correlation, operating profit was negative all the way to the 25% VaR, but the segments with correlation had positive operating profit at the 5% VaR. The ranking of the inputs by effect also differed between the correlated and uncorrelated cases (see Figure 10).
Figure 10: Value at Risk (VaR) for FedEx Operating Profits with correlation
For the stress-test analysis of the inputs impacting the output at the 5% VaR level, there was not much difference between the correlated and uncorrelated segments, although the deviations from the baseline were more pronounced for the correlated segments, as can be seen in Figure 11.
Figure 11: Stress Test for FedEx Operating Profits with correlation among the segments' inputs
The students concluded, “FedEx has strategically diversified itself to compete effectively in the global market place...Property Plant and Equipment was a big influence on net worth, but the company is constantly evaluating and adjusting this factor, so that there are no shortages or excesses.”
For Dr. Baranoff, @RISK’s ease of use is a major reason it is the tool of choice for her classroom. She lists its ability to provide credible statistical distributions as another major plus. “@RISK also allows me to show students the differences between selecting different statistical distributions, and the importance of correlation among some of the inputs,” she says. “Additionally, it allows us to combine the results of VaR analysis, stress analysis, and sensitivity analysis to discover which inputs can be destructive to a company’s net income and net worth. And, at the end of the day, it gives us good viewpoints for comparing the results among the cases. The stories and the results lead to debate, and provide lots of fun in the classroom as the whole class becomes a risk committee.”
To conduct the comparison and give the risk committee the tools to debate the risks and ways to mitigate them, the class creates Table 2.7 Table 2 is the foundation for the risk committee’s work. Each group is asked to provide a comparative evaluation of the cases under study (usually about five cases each semester) as the second part of its case-study report. This project concludes the semester.
Table 2: Comparing all case studies for the Fall 2014 semester - Managing Financial Risk Course
1 Students first gathered financial statements from financial information provider S&P Capital IQ.
2 Each group attempts to gather as many historical data points as possible to generate the statistical distributions. If quarterly data is available, it is used. The minimum is nine data points.
3 While ‘Accounts payable’ is large in size, it is no longer the most important variable under correlation. As shown in Figure 1, it dropped to the bottom of the tornado.
4 The stress test captured in the graphic takes the individual distributions fitted from historical data and stresses each input’s distribution over a specified range: in this example, the bottom 5% of the variable’s distribution. The goal is to measure each variable’s individual impact on an output when stressed. The box plots show each stressed variable’s impact on Amazon’s net worth (assets minus liabilities). The test shows which variables’ distribution tails have the greatest ability to negatively or positively affect the company’s net worth at the 5% value-at-risk tail. Each variable is stressed in isolation, leaving the other variables’ distributions untouched when computing net worth. When an asset is tested at the bottom 5% of its range, net worth decreases, because sampling focuses on the bottom 5% of the distribution and puts fewer assets on the balance sheet. Likewise, when the bottom 5% of a liability is tested, the sampling focuses on small liability values, so the decrease in liabilities raises net worth compared to the baseline run.
5 Each group creates its own version of the risk map, as no single template is required.
6 See footnote 4.
7 We acknowledge some imperfections in Table 2, but it serves its purpose as a stimulating starting point for dialogue.
8 The students deserve recognition for their excellent work on the case studies presented in the matrix below. They are: Agack, Adega L., Alhashim, Hashim M., Baxter, Brandon G., Coplan, Thomas P., Couts, Claybourne A., Gabbrielli, Jason A., Ismailova, Railya K., Liu, Jie Chieh, Moumouni, As-Sabour A., Sarquah, Samuel, Togger, Joshua R.
ENGCOMP is a Saskatchewan-based structural, mechanical and cost engineering consulting firm. Catering to Canada's commercial and heavy industrial market, it provides engineering services to the potash, uranium, oil and gas, pulp and paper, chemical processing, and food processing industries in Saskatchewan and Alberta.
With structural engineering as its core business, ENGCOMP also specialises in risk analysis, cost estimation, planning and computer task automation.
ENGCOMP was contracted to assist the Canadian Department of National Defence (DND) to define the budget for the fourth phase of construction of its ongoing Fleet Maintenance Facility Cape Breton (FMF CB) project located at the Canadian Forces Base Esquimalt, Victoria, BC. Using Palisade’s risk analysis software, @RISK, ENGCOMP conducted Monte Carlo simulations to quantify the uncertainty in defining the budget and schedule for this project.
The FMF CB project has been ongoing for more than 10 years. DND was looking to consolidate and upgrade the FMF, which includes smaller facilities, spread all over Base Esquimalt’s Dockyard area, into one large facility. To do this, DND needed to evaluate its remaining budget and existing schedule to complete the fourth phase of construction.
Whilst risk assessments had been conducted on the project in the past, a true risk analysis using Monte Carlo simulation had yet to be completed. Monte Carlo simulation, a quantitative statistical modelling tool, is important to this project as it can help reduce budget uncertainty and greatly increase the likelihood of achieving project success.
Jason Mewis, President, ENGCOMP, says, “Risk analysis is crucial to the cost and schedule management of any project, and must include a scientific approach to contingency and risk reserve estimation. As a concept, Monte Carlo simulation has been around for a long time, but is not widely used, and where it is used it is primarily applied to just capital cost estimation. A Monte Carlo-based simulation tool such as @RISK can help reduce uncertainty, greatly increasing the chances of project success.”
ENGCOMP developed a system that breaks the Monte Carlo simulation down into two parts: a simulation for Contingency Analysis and one for Project Risk Analysis.
Contingency is a very important aspect of budgeting and needs to be accounted for properly to ensure project success. It is the amount of money that needs to be added to a project budget to account for all the expected construction costs that haven’t been itemised at the time of budgeting.
As the first component of its Monte Carlo simulation-based risk analysis, ENGCOMP needed to determine the amount of contingency to apply to the project budget so that, with a reasonable level of confidence, the final approved budget for the FMF CB project would not be exceeded. Using @RISK, ENGCOMP aimed to quantify the potential variability of factors such as labour rates, material and equipment costs, and productivity, to ascertain the contingency that should be allocated to the overall project budget.
ENGCOMP assessed the contribution of the various work packages in the cost estimate to determine the total contingency required for the project. A work package represents a collection of work actions necessary to create a specific result. It is typically defined by statements of activity description, activity resources of skill and expertise, estimates of activity duration, activity schedule and activity risks.
ENGCOMP used @RISK to calculate the total project cost, both with and without estimated variability on the work packages. The difference between the two totals yielded the contingency for the project.
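That contingency calculation can be sketched numerically: sum the work packages deterministically, simulate the total with variability on each package, and read the contingency off the difference at a chosen confidence level. The work packages, multipliers, and the 80% confidence level below are invented for illustration; they are not the FMF CB figures.

```python
import numpy as np

rng = np.random.default_rng(1)
N = 20_000

# Hypothetical work packages: (name, base cost $M, low/high multipliers).
packages = [
    ("demolition & decontamination", 12.0, 0.80, 1.90),
    ("structural steel",             25.0, 0.90, 1.30),
    ("mechanical systems",           18.0, 0.85, 1.25),
]

deterministic_total = sum(base for _, base, _, _ in packages)

# Simulated total with triangular variability on each package.
simulated = np.zeros(N)
for _, base, low, high in packages:
    simulated += rng.triangular(low * base, base, high * base, N)

# Contingency at 80% confidence: the P80 total minus the base estimate.
p80 = np.quantile(simulated, 0.80)
contingency = p80 - deterministic_total
print(f"base ${deterministic_total:.1f}M, P80 ${p80:.1f}M, "
      f"contingency ${contingency:.1f}M")
```

Because cost ranges in practice skew high (overruns are more likely than savings), the simulated quantile sits above the deterministic sum, and that gap is the contingency.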
The results of the @RISK Monte Carlo simulation for the contingency analysis showed that the bulk of the budget uncertainty was due to market volatility and unknown site conditions. Existing FMF functions are located in facilities that were used for industrial functions that historically were relatively unregulated. Site investigations indicated potential contamination below these old structures. However, the requirement to maintain ongoing operations precluded the option of removing the buildings to conduct the comprehensive testing necessary to establish definitively the nature and extent of contamination and the related costs for removal and disposal. Therefore, the DND needed to assign significant contingency budget towards the demolition and decontamination activity.
The aim of the Project Risk Analysis was to take the Contingency Analysis to the next level, quantifying the effects of all reasonable risks and uncertainties on the project.
This component of the @RISK Monte Carlo simulation accounted for items that are not required to construct the project but that, should they occur, DND would be expected to pay for under the cost of the project. These factors included market conditions, environmental issues, internal operational issues and organisational changes that could collectively affect the successful completion of the FMF CB project. For instance, the @RISK simulation aimed to quantify the uncertainty posed by unforeseen developments such as weather preventing construction, labour strikes, delays in environmental approval, labour shortages, currency fluctuations, and safety incidents, to name a few. Such occurrences can affect both the project’s cost and schedule.
@RISK enabled ENGCOMP to estimate the impact of the variability and uncertainties pertaining to risks, costs and scheduling. This assessment enabled them to estimate the project risk budget as well as the Risk Reserve and Schedule Contingency.
A key finding of the Project Risk Analysis was that, taking into account all the risks and uncertainties on the project, there was an 85 percent certainty that the FMF CB project would be completed by January 2014.
Canada’s DND operates infrastructure projects under a highly regulated and controlled regime, as one might expect given the nature of the organisation. This means that securing project funding approval in a timely fashion represented a huge challenge for DND. For the FMF CB consolidation project, this challenge was compounded by numerous other organisational and technical problems. Establishing and presenting the budget in a manner that would confidently demonstrate its successful completion was therefore imperative.
Mewis explains, “We were able to help the DND define the budget as well as give them the tools to defend it. Based on our quantitative risk analysis, DND was able to clearly justify to the Federal Government’s Treasury Board why it should be allowed to get the capital appropriation for the project despite the level of uncertainty. This may not have been possible without the detailed and comprehensive analysis enabled by @RISK.”
The FMF CB project has been authorised and is in progress. In addition, due to the success of the risk analysis undertaken by ENGCOMP, DND is talking with the company about possibly preparing a policy on performing this level of detailed Monte Carlo simulation-based analysis for all future DND projects.
Distributions used
Predominantly, ENGCOMP used the Trigen function, which enabled them to account for the inherent error in subjective estimates of uncertainty. They used a structured calibration training process for all the analyses so that the Subject Matter Experts (SMEs) could be as “good” as possible at estimating uncertainty. And because groups cannot always be fully calibrated, the process highlighted the degree to which the team was calibrated, giving ENGCOMP the ability to adjust the estimated values to reflect the appropriate level of confidence.
There were cases where ENGCOMP used a Pert distribution when the estimated distribution was heavily skewed towards a particular parameter. This smoothed out the Triangular distribution and placed less emphasis on the tails in the analysis.
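A sketch of these two distribution choices, under stated assumptions: @RISK's Trigen treats the SME's "low" and "high" as percentiles (assumed here to be the 5th and 95th) rather than absolute bounds, so the true minimum and maximum of the triangular distribution must be solved for; the PERT distribution is a rescaled Beta that de-emphasises the tails. This reimplementation is illustrative, not @RISK's code, and the SME estimate used is made up.

```python
import numpy as np
from scipy import stats
from scipy.optimize import brentq

rng = np.random.default_rng(3)

def trigen_params(low, mode, high, p_low=0.05, p_high=0.95):
    """Solve for the true min/max of a triangular distribution whose
    `low` and `high` are its p_low / p_high percentiles (the idea behind
    @RISK's Trigen; assumes low <= mode <= high)."""
    def c_of_a(a):
        # Left-branch triangular CDF condition F(low) = p_low gives max c.
        return a + (low - a) ** 2 / (p_low * (mode - a))
    def resid(a):
        # Right-branch condition F(high) = p_high.
        c = c_of_a(a)
        return (c - high) ** 2 - (1 - p_high) * (c - a) * (c - mode)
    # Scan for the first sign change left of `low`, then refine with brentq.
    grid = np.linspace(low - 5 * (high - low), low - 1e-6, 2000)
    vals = [resid(a) for a in grid]
    for a0, a1, v0, v1 in zip(grid, grid[1:], vals, vals[1:]):
        if v0 > 0 >= v1:
            a = brentq(resid, a0, a1)
            return a, mode, c_of_a(a)
    raise ValueError("no valid triangular parameters found")

def pert_sample(a, m, c, size, rng):
    """Sample a standard PERT distribution via its Beta representation."""
    alpha = 1 + 4 * (m - a) / (c - a)
    beta = 1 + 4 * (c - m) / (c - a)
    return a + (c - a) * rng.beta(alpha, beta, size)

# Hypothetical SME estimate: P5 = 90, most likely 100, P95 = 130.
a, m, c = trigen_params(90, 100, 130)
tri = stats.triang((m - a) / (c - a), loc=a, scale=c - a)
print(f"implied triangular min {a:.1f}, max {c:.1f}")
print(f"check: P5 = {tri.ppf(0.05):.1f}, P95 = {tri.ppf(0.95):.1f}")

x = pert_sample(a, m, c, 100_000, rng)
print(f"PERT mean {x.mean():.1f}")
```

The widened min/max show why Trigen suits calibrated SME inputs: experts tend to state 90% ranges, not absolute extremes, so the true tails extend beyond the quoted values.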
This graph is used to clearly distinguish between the varying levels of cost uncertainty encountered on the project.
The blue line represents the base total cost estimate. The red S-curve is derived from the Contingency Analysis process and represents the variability in the cost estimate for the project. The difference between the blue line and the red S-curve is the ‘contingency’ cost that needs to be applied to the total project cost.
The green S-curve represents the results of the Project Risk Analysis (PRA), which adds the impact of schedule uncertainty and outside risk factors to the total project cost with contingency. At any given level of confidence, the difference between the two curves is the Risk Reserve budget.
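The relationship between the two S-curves can be sketched numerically: simulate cost with estimate variability only (the Contingency Analysis), layer discrete outside risk events on top (the PRA), and take the gap between the two distributions' quantiles at the chosen confidence level as the Risk Reserve. All figures and risk events below are invented for illustration; they are not the FMF CB numbers.

```python
import numpy as np

rng = np.random.default_rng(11)
N = 50_000
base_estimate = 55.0  # $M -- hypothetical base total cost

# Contingency Analysis: cost-estimate variability only (red S-curve).
cost_with_contingency = rng.triangular(0.92, 1.0, 1.25, N) * base_estimate

# Project Risk Analysis: add discrete outside risks (green S-curve).
# Each risk is (probability of occurring, cost impact $M) -- hypothetical.
risks = [(0.30, 4.0),   # e.g. weather delays
         (0.15, 6.0),   # e.g. labour shortage
         (0.10, 8.0)]   # e.g. environmental-approval delay
risk_cost = np.zeros(N)
for p, impact in risks:
    risk_cost += (rng.random(N) < p) * impact

cost_with_risk = cost_with_contingency + risk_cost

# Risk Reserve at a given confidence = gap between the two S-curves there.
conf = 0.85
p_cont = np.quantile(cost_with_contingency, conf)
p_risk = np.quantile(cost_with_risk, conf)
print(f"P85 with contingency only: ${p_cont:.1f}M")
print(f"P85 with outside risks:    ${p_risk:.1f}M")
print(f"risk reserve at 85% confidence: ${p_risk - p_cont:.1f}M")
```

Because the outside risks only ever add cost, the green curve sits to the right of the red one at every confidence level, and the horizontal gap is the reserve budget.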
This graph represents the population of data that was generated during the analysis and gives the general shape of the data distribution. It depicts the total project cost including the contingency budget.
This graph represents the sensitivity analysis done within each simulation. It displays the cost drivers that have the most impact on the bottom line and how they correlate to the total cost. The longer the bar, the larger the impact that driver has on the bottom line if it varies. This graph shows that the decontamination activity induces the most uncertainty in the estimate of the total project cost.