How often are we advised that we should "identify and manage risk against objectives"? Sounds easy, but how many of us are doing this effectively, especially at the strategic level?
Understanding your objectives
At the tactical level, objectives are often defined as deliverables. For example, release a product on time, increase new sales by x%, cut costs by £Xm. But as you move further up the organization, objectives are often composite or intangible: deliver a world-class transport system; improve safety; increase customer satisfaction. While it is relatively straightforward to identify risks to deliverables, it is often more difficult at the higher level. Therefore, we need a way to make these higher-level objectives more tangible.
Using Key Performance Indicators
A major step is to identify Key Performance Indicators (KPIs) for each objective, which can be measured at regular intervals to track progress and achievement.
Generally these KPIs are already in place within the organization and are linked to objectives. For example, improving safety will be the responsibility of the Health and Safety Executive, who will have KPI targets for increasing public, employee and contractor safety. They will be collecting and analysing hazard information and introducing new measures to reduce incidents. The customer service department will have goals to improve the quality of response, reduce waiting times, and provide better methods of communication. They will have commissioned surveys and gathered statistics on customer feedback to measure effectiveness of the various initiatives.
However, too often, analysis of these performance measures is too late to influence outcomes. For example, efforts made to improve customer response and communications might be negated by the failure of a new IT system. Therefore, we need to find a more proactive approach to ensure these KPIs (and the resulting objectives) are achieved, by proactively identifying risks against them and managing those risks.
Managing risks against Key Performance Indicators
You might start by doing a SWOT (Strengths, Weaknesses, Opportunities, Threats) analysis. However, you then need to follow this through by recording the risks (threats and opportunities) in a risk register, where they can be tracked and ownership clearly identified. Even more important, you need to initiate response actions to ensure you do something about them, i.e. mitigate the threats and exploit the opportunities.

Another technique for ensuring KPIs are met is to identify and manage systemic risk, drawn from across the organization. Smaller repetitive risks can easily combine to create a major impact, but often go unnoticed because the information is not recognized or readily accessible at a higher level. For example, staff turnover or inadequate training may be an underlying problem; in which case, customer services may have to work together with the Human Resources team to find a solution. The root cause of risk may lie within (lack of staff training) and/or outside the organization (failure of a major contractor). In either case, proactive action must be taken to address the risks.
However, it's never possible to manage all identified risks, so you will need to prioritize and focus on the most important ones. You may need to do some form of cost-benefit analysis to make sure you get a return on the investment you spend on managing the risks.

The end result
During the early stage of setting objectives, the discipline of establishing KPIs, identifying risks and agreeing response actions are a major part of the iterative process of ensuring the objectives are realistic and achievable. Once progress is underway, ongoing management of existing and emergent risk is essential to stay on track.
With the changing business environment brought on by events such as the global financial crisis, gone are the days of focusing only on operational and tactical risk management. Enterprise Risk Management (ERM), a framework for a business to assess its overall exposure to risk (both threats and opportunities), and hence its ability to make timely and well informed decisions, is now the norm.
Ratings agencies, such as Standard & Poor's, are reinforcing this shift towards ERM by rating the effectiveness of a company's ERM strategy as part of their overall credit assessment. This means that, aside from being best practice, not having an effective ERM strategy in place will have a detrimental effect on a company's credit rating.
Not only do large companies need to respond to this new focus, but the public sector also needs to demonstrate efficiency going forward, by ensuring ERM is embedded not only vertically but also horizontally across their organizations. This article provides help, in the form of five basic steps to implementing a simple and effective ERM solution.
This is the first of a series of articles on ERM. Future articles will expand on each of the steps in this article.
ERM requires the whole organization to identify, communicate and proactively manage risk, regardless of position or perspective. Everyone needs to follow a common approach, which includes a consistent policy and process, a single repository for their risks and a common reporting format. However, it is also important to retain existing working practices based on localized risk management perspectives as these reflect the focus of operational risk management.
The corporate risk register will look different from the operational risk register, with a more strategic emphasis on risks to business strategy, reputation and so on, rather than more tactical product, contract and project focused risks. The health and safety manager will identify different kinds of risks from the finance manager, while asset risk management and business continuity are disciplines in their own right. ERM brings together risk registers from different disciplines, allowing visibility, communication and central reporting, while maintaining distributed responsibility.
In addition to the usual vertical risk registers, such as corporate, business units, departments, programs and projects, the enterprise also needs horizontal, or functional risk registers. These registers allow function and business managers, who are responsible for identifying risks to their own objectives, to identify risks arising from other areas of the organization.
The enterprise risk structure should match the organizationโs structure: the hierarchy represents vertical (executive) as well as horizontal (functional and business) aspects of the organization.
This challenges the conventional assumption that risks can be rolled up automatically, by placing horizontal structures side by side with vertical executive structures. Risks should be aggregated using a combination of vertical structure and horizontal intelligence. This is a key factor in establishing ERM.
Once an appropriate enterprise risk structure is established, assigning responsibility and ownership should be straightforward. Selected nodes in the structure will have specified objectives; each will have an associated manager (executive, functional or business), who will be responsible for achieving those objectives and managing the associated risks. Each node containing a set of risks, along with its owner and leader, is a Risk Management Cluster.*
Vertical managers take executive responsibility not only for their cluster risk register, but also overall leadership responsibility for the Risk Management Clusters below. Responsibility takes two forms: ownership at the higher level and leadership at the lower level. For example, a program manager will manage his program risks, but also have responsibility for overseeing risk within each of the program's projects.
Budgetary authority (setting and using Management Reserve), approval of risk response actions, communication of risk appetite, management reporting and risk performance measures are defined as part of the Owner and Leader roles as illustrated in Figure 3. This structure is also used to escalate and delegate risks.
Horizontal managers take responsibility for their own functional or business Risk Management Clusters, but also for gathering risks from other areas of the Enterprise Risk Structure related to their discipline. For example, the HR functional manager will be responsible for identifying common skills shortfall risks to bring them under central management. Similarly, the business continuity manager will identify all local risks relating to use of a test facility and manage them under one site management plan. To assist in this, we use an enterprise risk map (see Step 3).

*Risk Management Clusters® are unique to the Predict! risk management software.
Risk budgeting and common sense dictate that risks should reside at their local point of impact, because this is where attention is naturally focused. However, the risk cause, mitigation or exploitation strategy may come from elsewhere in the organization and often common causes and actions can be identified. In this case, we take a systemic approach, where risks are managed more efficiently when brought together at a higher level. To achieve this, we need to be able to map risks to different parts of the risk management structure.
Global categories
Functional and business managers should use these global categories to map risks to common themes, such as strategic or business objectives, functional areas and so on. These categories then provide ways to search and filter on these themes and to bring common risks together under a parent risk.
Risk relationships
For example, if skills shortage risks are associated with HR, the HR manager can easily call up a register of all the HR risks, regardless of project, contract, asset, etc. across the organization and manage them collectively.
Similarly, the impact of a supplier failing on any one contract may be manageable, but across many contracts it could be a major business risk. In that case, the supply chain function needs to bring together the risks against this supplier and manage the problem centrally.
Each Risk Management Cluster will include both global and local categories in a Predict! Group, so that each area of the organization needs only to review relevant information.
Scoring systems are also applied by Risk Management Cluster, with locally meaningful High, Medium and Low thresholds which map automatically when rolled up. For example, a high impact of £150k at project or contract level will appear as low at corporate level, whereas a £5m risk at project or contract level may appear as high at corporate level.
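As a simple illustration of how such threshold mapping might work, the sketch below (plain Python, not Predict! itself) rates the same impact value against assumed project-level and corporate-level bands; the threshold figures are invented purely for the example.

```python
# Minimal sketch of cluster-specific scoring thresholds. The band values are
# assumptions for illustration only, not figures taken from any real system.
def rate_impact(impact_gbp, bands):
    """Return 'High', 'Medium' or 'Low' given a (medium, high) threshold pair."""
    medium, high = bands
    if impact_gbp >= high:
        return "High"
    if impact_gbp >= medium:
        return "Medium"
    return "Low"

cluster_bands = {
    "Project":   (50_000, 150_000),        # assumed project-level thresholds
    "Corporate": (1_000_000, 5_000_000),   # assumed corporate-level thresholds
}

for impact in (150_000, 5_000_000):
    ratings = {name: rate_impact(impact, bands) for name, bands in cluster_bands.items()}
    print(f"£{impact:,}: {ratings}")
# A £150,000 impact rates High at project level but Low at corporate level;
# a £5,000,000 impact rates High at both.
```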
Typically, financial and reputation impacts will be common to all clusters, whereas local impacts, such as project schedule, will not be visible higher up.
The most important aspect of risk management is carrying out appropriate actions to manage the risks. However, you cannot manage every identified risk, so you need to prioritize and make decisions on where to focus management attention and resources. The decision making process is underpinned by establishing risk appetite against objectives and setting a baseline, both of which should be recorded against each Risk Management Cluster®.
Enterprise-wide reporting allows senior managers to review risk exposure and trends across the organization. This is best achieved through metrics reports, such as the risk histogram. For example, you might want to review the risk to key business objectives by cluster, or how exposed different contracts and projects are to various suppliers.
Furthermore, there is a need to use a common set of reports across the organization, to avoid time wasted interpreting unfamiliar formats. Such common reports ensure the risk is communicated and well understood by all elements of the organization, and hence provide timely information on the current risk position and trends, initially top-down, then drilling down to the root cause.
At all levels of an organization, changing the emphasis from "risk management" to "managing risks" is a challenge; however, across the enterprise it is particularly difficult. It requires people to look ahead and take action to avert (or exploit) risk to the benefit of the organization. It also requires the organization to encourage and reward this change in emphasis!
Unfortunately, problem management (fire-fighting) deals with today's problems at the expense of future ones. This is generally a far more expensive process, as the available remedies are limited. However, if potential problems are identified (as risks) before they arise, you have far more options available to effect a "Left Shift": from a costly and overly long process to one better matching the original objectives set.
Most organizations have pockets of good risk management, and many have a mechanism to report "top N" risks vertically, but very few have started to implement horizontal, functional or business risk management. Both a bottom-up and a top-down approach are required. An ERM initiative should allow good local practices to continue, provided they are in line with enterprise policy and process (establishing each pocket of good risk management as a Risk Management Cluster will provide continuity).
From a top-down perspective, functional and business-focused risk management needs to be kick-started. A risk steering group comprising functional heads and business managers is a good place to start. Bringing such a group together to understand inter-discipline risk helps break down stove-piped processes. This can trigger increasingly open cross-discipline discussion and a focus on aligning business and personal objectives, leading to rapid progress in understanding and managing risk.
Finally, to ensure that an organizational culture shift is effected, senior management must be engaged. This engagement is not only aimed at encouraging them to see the benefits of managing risk, but also at helping the organization as a whole see that proactive management of risk (the Left Shift principle) is valued by all.
A Risk Management Masterclass for the executive board and senior managers can provide them with the tools necessary to progress an organization towards effective ERM.
ERM delivers confidence, stability, improved performance and profitability. It provides:
Over time this will:
All of the risk management skills and techniques required to implement Enterprise Risk Management can easily be learned and applied. From senior managers to risk practitioners, Masterclasses, training, coaching and process definition can be used to support rollout of ERM.
Create a practical Enterprise Risk Structure, set clear responsibilities and hold people accountable. Define a simple risk map and provide localized working practices to match perspectives on risk. Be seen to make decisions based on good risk management information.
Want to see how the Predict! Suite can enhance your risk management strategy and help you make smarter, data-driven decisions? Request a demo today to explore its powerful capabilities.
Monte Carlo simulation uses repeated random sampling to calculate results about physical and mathematical systems. It uses uncertainty in its inputs to generate a range of possible outcomes, which are then reported as results with a degree of mathematical confidence. The method tends to be used when it is infeasible or impossible to compute an exact result with a deterministic algorithm.
The uncertainty used as inputs to Monte Carlo simulation is described using probability distributions, which define the range of values a variable might take, and the likelihood of those values occurring.
The most commonly used probability distribution for modeling project and business uncertainty is the Triangular distribution, so-called because it is defined by three points: a minimum, most likely and maximum value. For example, the time required to dig the foundations for a new house may be a minimum of 10 days (best case), most likely 12 days and a maximum of 15 days (worst case). This is called a three-point estimate and is often based on historic information and experience (a building firm that has built hundreds of houses over several years will have a pretty good idea).
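As a quick illustration (a sketch in Python rather than a dedicated risk tool), the three-point estimate above can be sampled directly using the standard library's triangular distribution:

```python
import random

# Sample the foundation-digging estimate: minimum 10, most likely 12,
# maximum 15 days. Note that random.triangular takes (low, high, mode).
samples = [random.triangular(10, 15, 12) for _ in range(10_000)]

print(f"mean duration ≈ {sum(samples) / len(samples):.1f} days")   # about 12.3
print(f"sampled range ≈ {min(samples):.1f} to {max(samples):.1f} days")
```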
Monte Carlo simulation is also used to model project and business risk events. In this case, two probability distributions are required. First, a Bernoulli distribution is used to model whether the risk event occurs, resulting in either a True or False result (e.g. a coin can be used to model a risk that has a 50% chance of occurring: heads it happens, tails it doesn't). Second, the uncertain impact of the risk is described, typically using a Triangular distribution.
For example, while digging the foundations for your house, you may suffer from extremely bad weather, causing a delay before you can continue with your task. The probability of extremely bad weather (the event) might be 5% and your estimate of the impact (the uncertainty) might be between 2 and 8 days, but most likely 4.
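A hedged sketch of that risk event, combining the Bernoulli "does it occur?" draw with a Triangular impact, all figures taken from the example above:

```python
import random

def weather_delay():
    """One sample of the bad-weather risk: 5% chance of a 2 to 8 day delay."""
    if random.random() < 0.05:              # Bernoulli trial: does the risk occur?
        return random.triangular(2, 8, 4)   # impact in days (min 2, likely 4, max 8)
    return 0.0                              # no delay if the event does not occur

delays = [weather_delay() for _ in range(10_000)]
print(f"expected delay ≈ {sum(delays) / len(delays):.2f} days")
# roughly 0.05 * (2 + 4 + 8) / 3 ≈ 0.23 days on average
```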
Even in a small project or business model, you are likely to have hundreds of uncertain tasks and dozens of possible risk events. You need a mechanism for understanding what this means in terms of the possible outcome (success or failure). Taking a simplistic approach, you could calculate the worst case scenario, by adding the maximum values for all variables and assuming all of the risks happen. But this will produce a very pessimistic outcome, with the result that you will probably never embark on building your house. Similarly, taking the most optimistic view will give you a false picture of your ability to deliver on time and budget, resulting in substantial overrun and losses.
Monte Carlo simulation is used to create a picture of the outcomes between the optimistic and pessimistic scenarios, with the results tending to a bell-shaped curve (a Normal distribution). The mathematical basis for this shape is the Central Limit Theorem, which states that the sum of a large number of independent random variables tends towards a Normal distribution. The only requirement is that you repeat the Monte Carlo "experiments" enough times to ensure a representative set of random samples has been used.
The Monte Carlo set of outcomes (results distribution) can be interrogated to understand the confidence of meeting specified targets. For example, if you run the Monte Carlo simulation 1,000 times, you will produce 1,000 potential completion dates for your house. To work out the confidence of meeting a 30th July deadline, count up the number of results that lie to the right of that date. If there are 200 such results, then you know that there is a 20% chance (200/1000) that you will overrun. Conversely, you can be 80% confident that you will finish on time or early. The benefit of Monte Carlo is that you can generate intermediate results in addition to the overall outcome; for example, you can calculate the confidence of meeting milestones as well as the project end date.
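Pulling the two earlier examples together, the sketch below runs 1,000 trials of the foundation task plus the weather risk and counts how many beat a target; the 14-day target is an assumption chosen purely to show how a confidence level is read off the results.

```python
import random

def one_trial():
    duration = random.triangular(10, 15, 12)     # dig the foundations
    if random.random() < 0.05:                   # bad-weather risk event
        duration += random.triangular(2, 8, 4)   # delay if it occurs
    return duration

results = [one_trial() for _ in range(1_000)]
target_days = 14                                 # assumed deadline for illustration
overruns = sum(d > target_days for d in results)

print(f"chance of overrunning {target_days} days: {overruns / len(results):.0%}")
print(f"confidence of finishing on time: {1 - overruns / len(results):.0%}")
```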
Further results can be extracted from the Monte Carlo outputs. One very useful measure is sensitivity, which is often portrayed in a Tornado diagram. Sensitivity measures how much influence an input variable has on your outcome. For example, any risk event or uncertainty experienced while digging your foundations may have a very large influence on your project end date, whereas problems when you are painting the walls may not affect the end date at all. The Tornado chart is particularly useful in highlighting which tasks in your project are the most likely cause of delay or overrun, helping you focus risk management effort on the areas of the project that need it most.
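Commercial tools typically use rank (Spearman) correlation to build the Tornado chart; as a rough stand-in, the sketch below correlates each sampled input with the total duration using plain Pearson correlation from the standard library (Python 3.10+). The task estimates are invented for illustration.

```python
import random
from statistics import correlation  # Python 3.10+

# Three illustrative tasks in series; the duration estimates are invented.
tasks = {"foundations": (10, 12, 15), "walls": (20, 24, 32), "painting": (3, 4, 5)}

samples = {name: [] for name in tasks}
totals = []
for _ in range(5_000):
    total = 0.0
    for name, (lo, mode, hi) in tasks.items():
        d = random.triangular(lo, hi, mode)
        samples[name].append(d)
        total += d
    totals.append(total)

# Correlate each input with the total duration: the tornado bars, in effect.
for name in sorted(tasks, key=lambda n: -abs(correlation(samples[n], totals))):
    print(f"{name:12s} sensitivity ≈ {correlation(samples[name], totals):+.2f}")
# The task with the widest uncertainty (walls) dominates the end-date spread.
```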
The examples given in this article are schedule-related: how many days will it take to do something? What date can you be confident of meeting? This is because of the complex nature of schedules, which include logic links (for example, task A in an MS Project plan needs to finish before task B can start) as well as parallel critical paths. Other methods of schedule analysis, e.g. PERT, cannot easily handle such complex logic. Monte Carlo is invaluable for schedule analysis.
You can also use Monte Carlo simulation to model costs. For a simple set of uncertain cost variables, algorithms are available to calculate results. However, these methods will only work assuming the variables are independent; when correlation is included, Monte Carlo is a much more generic analysis solution.
In cases of both schedule and cost risk analysis, adding risk events to your base estimating model is a further reason for using Monte Carlo simulation. The inclusion of risk events, with their binary True/False probabilistic branching, is very difficult to achieve without the use of Monte Carlo simulation.
Rising costs and intense competition continue to shape the construction industry, making the bidding process more challenging than ever. The pressure to reduce project costs is high: bid too low, and profit margins become unsustainable; bid too high, and you risk losing the contract altogether.
In such a high-stakes landscape, understanding and analyzing risk and uncertainty in the bidding process is crucial. In this blog, we explore strategies to win more bids, enhance delivery confidence, and maximize profitability.
Bids are projects in their own right, often with very tight and immovable deadlines, and they need to be managed as such. The challenge in developing the bid is that the delivery project often has a high degree of uncertainty, meaning the outcomes in terms of timelines and costs are hard to predict. In addition, clients expect that the bid timeline will be "aggressive" and the price very "competitive". But unless the impact of potential risks and uncertainties is understood, it's difficult to evaluate how risky a particular bid price might be in terms of making a profit or loss. Nor will you be able to understand the confidence you can have of achieving project milestones. In other words, if you don't understand the risks and uncertainties involved, you are walking into the bid and any negotiations blind.
The first step is for the bid team to have a coherent view of the impact of the projectโs risks and uncertainties on the deliverables. Only then can you assess the probability of achieving particular timeline targets or budget, and make informed decisions.
We cannot stress this enough. Uncertainties often have a bigger impact on the project deliverables than risks. It is tempting to think that if the bid team applies more time and effort, the estimates would improve. However, the inherent uncertainties, especially given the lack of detailed information at the bid stage, mean that this is completely unrealistic.
In complex projects, it is inadequate to take a simple approach using point or average values for the key variables (e.g. time required to dig foundations), as these single-point estimates will give a false impression of accuracy. So, ranges or three-point estimates should be used. It is not uncommon for a Monte Carlo based schedule analysis using activity durations with three-point estimates to show that there is less than a 30% chance of meeting the date predicted using single, average durations.
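The sketch below illustrates why this happens, using ten serial tasks with invented, right-skewed three-point estimates: the deterministic plan built from the most likely durations is met far less often than intuition suggests.

```python
import random

tasks = [(8, 10, 14)] * 10                 # (min, most likely, max) days, illustrative
plan = sum(mode for _, mode, _ in tasks)   # single-point plan: 100 days

trials = 10_000
met = sum(
    sum(random.triangular(lo, hi, mode) for lo, mode, hi in tasks) <= plan
    for _ in range(trials)
)
print(f"single-point plan: {plan} days")
print(f"chance of meeting it: {met / trials:.0%}")   # typically well below 50%
```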
When under pressure in preparing a bid, running a Monte Carlo analysis against the schedule with uncertainty alone can indicate issues with the schedule. It can also indicate whether there is a reasonable probability of delivering on time or to budget, even before applying any risks. This could indicate the need to take a rapid and early decision on whether to bid, and save the organization the significant cost of bidding.
Usually, the quickest route to developing a risk-based bid is to access information on the risks and uncertainties from similar previous projects.
Then have informed discussions about:
Modern risk management tools can greatly assist in this process by providing an easily searchable database, with views across the portfolio that integrate key lessons to learn from previous projects and other current projects in the portfolio.
Applying a Monte Carlo approach which integrates risks and uncertainties provides a clear view of their impact on timelines and costs. This defines the extremes of the range, as well as the probability of a particular outcome. When setting budget targets, this is an invaluable tool for answering the critical question: "At this price, what is the probability of making a profit or loss?"
The ability to quickly adjust the model enables you to see if you can achieve an optimal balance of cost and time. This approach allows the assessment of alternative strategies before presenting the bid and can form the basis for negotiating "tomb-stone" risks out of a contract by getting the client to take responsibility for them and (significantly) lowering the bid price as a result.
Risk analysis allows you to prioritise the most significant risks and build mitigation actions into the bid. Ultimately, this approach allows the team to identify those bids to walk away from because the probability of making a loss is too high. Of course, you can also choose to bid aggressively on some projects that are strategically important. But by adopting the Monte Carlo approach consistently for all projects the risk exposure across the portfolio can be assessed.
Another benefit of this approach is that it provides a sound basis for entering into performance-based contracts.
For example, a build is scheduled to take 20 months but, taking uncertainty into account, the graph below shows that it could take between 18 and 28 months. Ultimately, the decision comes down to whether to commit to 21 or 22 months (with 70% and 90% confidence respectively). The final choice will depend on what level of risk you are prepared to take (your organization's risk appetite), your capacity to manage the potential overrun if you bid aggressively, the chance you will lose the contract to a competitor if you opt for a safer deadline, and so on.
Without Monte Carlo analysis, it is impossible to understand if the bonus side of the contract would be achievable, or if the penalty side leaves you over-exposed.
There are several benefits to adopting a risk-based approach to bid preparation and defence. For example:
Understanding a project's uncertainties and proactively managing the potential risks gives a more accurate picture of what is feasible within a specified time-frame. It adds honesty and confidence to the bid and delivery process for both client and contractor.
Adopting risk management enables the bid team to build robust proposals avoiding the dangers of over-promising and under-delivering. When bidding, risk analysis ensures that you make informed decisions on the price and time-frames, helps you select the bids to respond to, and increases the quality and profitability of the business you win.
Risk management doesn't stop after winning bids but is a continuous process which sees the risk information passed on to the project team to be used until completion, using the same assumptions, risks and uncertainties. A single risk management software tool can support this, ensuring a seamless process between bidding and project delivery without crucial data being overlooked or lost.
Whether you're a seasoned risk professional or just getting to grips with risk, ISO 31000 is a great resource, now widely adopted around the world. It is blissfully concise and clear, offering a flexible way to implement common-sense risk management.
And here's why…
In a fast-changing world, the guide points to having an integrated view of risk, providing a platform for informed decision making.
As a leading Risk Software provider, we understand how important it is that our Risk Management and Analysis software (Predict!) embraces the ISO 31000 Standard's Principles, Framework and Process steps. Predict! delivers this within a seamlessly integrated working environment that focuses on speed, simplicity and a great user experience that encourages engagement.
Predict! facilitates the ISO 31000 Standard's approach by:
The Predict! suite fully supports organizations in applying all elements of the ISO 31000 standard, leading to great outcomes for your business.
Most organizations aspire to practise effective Enterprise Risk Management (ERM), but very few are achieving it. Over the last decade, much effort has gone into implementing systems to comply with Sarbanes-Oxley, COSO, Turnbull, Basel II, COBIT and other regulatory standards. As a result, many organizations are now stuck in a world of compliance-oriented risk management and are failing to take advantage of the benefits gained from the more strategic approach that ERM offers.
Putting ERM in place delivers significant benefits:
There are five steps to Enterprise Risk Management, as illustrated in figure 1.
The first thing the Chief Risk Officer needs to do when distributing responsibility for risk management is decide on the appropriately sized "chunks" for managing a set of risks. These "chunks" typically have specific objectives, for example:
Dividing an organization into these "chunks" allows people to work locally, focusing on their specific objectives, while providing a structure for enterprise reporting and escalation. This means that the right people receive timely information about risk, enabling them to make the informed decisions which underpin any successful business.
The "chunk" of the business, the person responsible and the associated risk management activities are the items that make up a Risk Management Cluster®. The individual manager who has overall responsibility for ensuring risk is managed against agreed objectives is called the Cluster Owner.
Each cluster also has a second named person, the Cluster Leader, who is responsible for approving any change to objectives, budgets and risk value thresholds. It is easier to set up clusters in some parts of the organization than in others. For example, programs and projects are straightforward, as they generally have formal processes for managing risk already in place. However, it is less clear how to go about risk management within a functional or support area of the business. Therefore, it helps to consider what different types of cluster may exist in your organization. These types generally arise whenever there is a different approach or perspective on risk management.
Experience of ERM implementation so far indicates that there are typically four ways in which clusters are used: strategic, horizontal, vertical and third-party, as shown in table 1.
The way each of these four different types of cluster works is explained in table 1 below. Using these different cluster types supports the rollout of ERM, by allowing each cluster to implement risk management in a form suited to its business perspective, level of maturity of the team, and so on. Any area of the business that already has well-established risk procedures can continue to operate those procedures locally within its cluster. Therefore, the ERM initiative need not impact adversely on existing good practice: the most significant additional work required is to implement roll-up and aggregation across clusters, to provide an enterprise view of risk, as explained further below.
Cluster owners are responsible for creating within the cluster all the risk management activities for their local area of the business. They must ensure risk management is relevant, effective and efficient. To achieve this, they need to ensure they establish the following:
Setting up clusters in this way facilitates timely decision making; with budgets pre-approved and the escalation process known in advance, there is a minimum of red tape to get through when fast action is required. For example, after a major earthquake, supply chain teams with appropriate risk management plans in place were able to quickly assess the overall impact on the supply of specific components and move to lock up any remaining alternative supplier capacity, assuring continuity of supply (and creating headaches for industry competitors). Provided the organization has clear risk management policy guidance (defined in the organization's risk framework documentation), it is generally straightforward to set up clusters with this information.
Having broken the organization down into discrete Risk Management Clusters, and having set these up in a way which will work within each area of the business, you are now ready to use a structured approach to combining the collective risk information in those clusters. This provides a strategic view of risk information at higher levels of the organization.
The most effective approach defines relationships that allow roll up and aggregation of risk through vertical escalation, horizontal communication, threshold reporting, and themed aggregation.
(a) Risk roll up through escalation: the Owner-Leader cluster relationship
Vertical clusters use the Owner-Leader relationship chain to escalate information. This involves the owner of one cluster being defined as the leader of the clusters below, automatically providing a chain of accountability back up through the organization, as shown in figure 3.
When a risk is escalated within Project X, it first gets assigned to Sam, because he is the owner of Project X. He may discuss it with his Cluster Leader Fred, and agree escalation to program level, where Fred automatically takes responsibility for it. If Fred is able to put in place a satisfactory response to this risk, he can delegate it back down into Project X, along with the necessary budget and resources to handle it. But if Fred cannot manage it at Program level, he will escalate it to his Program leader Jo, who automatically picks it up at divisional level. This process continues until it reaches a level where a management response can be put in place, before delegating it back down the same route it came up.
In this way, each person's responsibility takes two forms: ownership at the higher level and leadership at the level below. For example, a program manager will manage his program risks, but also have responsibility for overseeing risk within each of the program's projects.
(b) Managing risk through common leadership of clusters at the same level
The cluster structure in figure 3 is not only useful for escalating risks, but is also designed to assist when you identify a risk in one project that is caused by another project. For example, let's say Contract Q is developing a technology that will be used by Project Y; late or limited delivery of that technology by Contract Q will impact on Project Y. This risk must be managed through communication and cooperation at program level, facilitated by the fact that the leader for Programs A and B, Jo, is the same person.
Although the example here uses programs and projects, the same vertical cluster system works for multiple departments, business units, divisions and so on โ anything that has a vertical reporting structure would use escalation and then horizontal communication to support management of risk.
(c) Risk roll up through reporting: using thresholds
Each cluster has a set of thresholds, established through setting high/medium/low scoring criteria for risks. As you go up through vertical clusters, the thresholds for reporting increase. Therefore, a risk that appears "high" in Project X will only appear "high" at Program A above if it is significantly large. This filters out information as you move further up the organizational structure, highlighting only the risks that require management attention.
(d) Risk aggregation: using common themes
Typically, horizontal and strategic clusters use themes (or categories) to gather risk information; for example, the health and safety functional manager will look for common health and safety risks across the business and manage most of them under a small set of centrally formulated risks with associated actions. Similarly, the organization's strategic director will gather risks associated with business objectives. They will then try to understand common causes and appropriate responses and/or consider adjusting business objectives at strategic level.
It is the responsibility of functional and strategic managers to define the categories they are interested in and to ensure that clusters throughout the business use those categories when they are identifying risks.
For example, if the central procurement function manager wants to analyse risk by supplier, they will need each cluster to identify risks associated with a set of suppliers. Then a search across all clusters by supplier will provide an aggregate view of risk by that supplier. Similarly, the HR manager might use global categories, to identify common skills shortfall risks to bring them under central management. And the business continuity manager may identify risks relating to use of a specific test facility and then manage them under one site management plan. The strategic business manager must define the set of objectives to which risks need to be linked.
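In data terms this is simply tagging and filtering: risks owned by different clusters carry shared global categories, so a functional manager can pull one register across the whole structure. A minimal sketch, with made-up risks and a hypothetical supplier name:

```python
# Risks owned by different clusters, each tagged with global categories.
risks = [
    {"cluster": "Project X",  "title": "Late steel delivery",    "supplier": "Acme Steel", "impact": 120_000},
    {"cluster": "Contract Q", "title": "Steel price escalation", "supplier": "Acme Steel", "impact": 300_000},
    {"cluster": "Project Y",  "title": "Design team shortfall",  "supplier": None,         "impact": 80_000},
]

def risks_by_supplier(register, supplier):
    """All risks tagged with the given supplier, regardless of owning cluster."""
    return [r for r in register if r["supplier"] == supplier]

acme = risks_by_supplier(risks, "Acme Steel")
total = sum(r["impact"] for r in acme)
print(f"Acme Steel: {len(acme)} risks across clusters, combined impact £{total:,}")
```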
A view across clusters provides an overview of how well risk is being managed throughout the organization. A review of cluster status, for example using a Red/Amber/Green cluster flag to show each cluster's confidence of achieving targets within allocated risk management reserves, is a useful measure to provide program, departmental and organization-wide risk reporting.

Reporting at this level depends on clusters being set up as discrete elements; for example, the risk budget for Project X must be recorded against the Project X cluster. The program budget at the next level up must only contain budget for program-level risk, not for any of the risks being managed within the projects below. It is just as important to ensure you don't double count across clusters as it is to make sure you don't miss anything out.

Cluster-level reports provide senior managers with the ability to see which areas of the business need most attention, allowing them to direct management attention and appropriate resources in a timely manner. These reports may also trigger new strategic-level risks. For example, consider the risk of failing to win a major contract due to a new competitor entering the market. This might trigger a potential hostile takeover bid for the company; these risks would normally be recorded and managed at executive board level.
Having put in place appropriate risk policy documentation, the next step to ERM is to assign responsibility through the implementation of Risk Management Clusters. It is important for responsibility to be pre-agreed, to ensure a speedy response when escalating risks.
In order to distribute responsibility across the organization, Risk Management Clusters define entities that have business objectives associated with them. Only by identifying risk directly against these objectives will you be able to focus enterprise risk management activities towards the things that are important to your organization.
Different types of clusters are used to represent different business perspectives: the most common ones are strategic, horizontal, vertical and third-party. Although each cluster is responsible for managing its own risks effectively, you will only have successfully implemented ERM when you have effective mechanisms for extracting the significant risk information from clusters and raising it to management level where strategic decisions can be made.
Therefore, clusters are brought together to allow roll up and aggregation of risk through vertical escalation, horizontal communication, threshold reporting, and themed aggregation. There are a number of benefits to be gained by assigning responsibility for risk across the organization, using clusters:
To implement Enterprise Risk Management, a cluster owner must not only understand the risks that affect their own area, but also work with other cluster owners across programs, divisions and so on to identify and mitigate risk through cross-departmental mitigation strategies.
Successful management of risk depends on people accepting responsibility and working together across the organization to raise the risk to the right level of the organization (to board level if required) and responding in a timely manner. This is the basis for sound decision making and successful strategic management of the business. The most appropriate way to do this is by assigning responsibility using Risk Management Clusters.
When managing a program or portfolio of multiple projects, risks have to be managed at a level appropriate for each individual project. With the right organizational structure and software in place, you can also view, monitor and manage risks across all projects at program level.
At any one time, a large organization may have a significant number of ongoing projects, of varying types, stages and sizes, with different stakeholders, customers, suppliers and deliverables. Managing risks on these projects by treating them as a program can bring significant benefits in a number of areas:
This article gives practical examples of how program risk management can be implemented in each of these areas.
Systemic risks are risks that occur repeatedly across an organization. It is much easier to identify such risks when using a central database, rather than trying to search across multiple, inconsistent project risk registers in spreadsheets. Looking at all these risks across a program allows individual project managers to focus on running their project, with the responsibility and contingency budget for systemic risks resting with the program manager or function. Managing risks centrally this way is another way of driving down overall costs.
Identifying the "Top 10" risks across a program is a common requirement, to enable management to focus on the most important risks. This is often achieved by asking each Project Manager to provide their top 2 risks. However, as shown in figure 1, this could result in the program manager missing the fact that the 4th risk in a project has more impact than the top 2 in another project. Not only does this mean that the wrong risks are being looked at, it can also hide a large amount of potential impact.
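A small sketch of that pitfall, with invented impact values: asking each project for its top two risks hides a project whose third and fourth risks outrank everything reported by the others.

```python
# Illustrative risk impacts (in £) for three projects in a program.
projects = {
    "A": [900_000, 850_000, 800_000, 750_000],
    "B": [400_000, 350_000, 100_000],
    "C": [300_000, 250_000, 50_000],
}

# 'Top 2 per project' view, as often requested of project managers.
reported = sorted(
    (impact for risks in projects.values() for impact in sorted(risks, reverse=True)[:2]),
    reverse=True,
)
# True program-wide ranking across every risk.
actual = sorted((i for risks in projects.values() for i in risks), reverse=True)

print("top 6 via 'top 2 per project':", reported[:6])
print("true program top 6:           ", actual[:6])
# Project A's 3rd and 4th risks outrank everything in B and C, but never
# surface when each project reports only its top two.
```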
This is another area where a program manager can make a real difference. Imagine a scenario in a construction company where four similar builds have very different risk registers. Factors such as planning risks, site access, and weather conditions can have an effect, but what about individual risk managers? Perhaps one of them has a different approach to risk identification, or uses a different definition of uncertainty, or has different experiences affecting their view of risk. All of these things affect the contents and total value of the risk register. Identifying and acting on inconsistencies such as this can potentially drive down the overall contingency budget.
Focusing resources on program risks and not individual project risks can be instrumental in driving a business forward and securing its reputation. For example, a construction business with a number of major projects underway will typically manage resources across those projects at a program level. However, it is not so common at program level to consider the risk of schedule delay on one project having an adverse impact on another project, because resources are tied up longer than planned. Such problems can also get in the way of new projects starting, having an overall effect on a company's cash flow and reputation. In this case the program risk manager could opt to put mitigation budget towards avoiding the delay or proactively finding alternative resources. This is particularly important when managing projects with late delivery penalty clauses.
Risk analysis at a program level can play a role in driving down the level of contingency required, while maintaining confidence levels for delivery on time and to budget.
For example, project risk analysis shows that to give each individual project, say, an 80% confidence of being within its risk budget may require the sum of the individual project contingency budgets to be £10m. However, analysing at a program level and maintaining an overall 80% confidence level could require a contingency budget of only £9m, potentially freeing up money for alternative initiatives. This of course means that each individual project will get a lower contingency budget, and that its confidence level of being within budget is now less than 80%. This leads to an interesting dilemma for the program risk manager: how to balance the likelihood that project managers, given the lower confidence level, feel less able to deliver (which could lead to reduced performance) against the benefit of freeing up contingency for the company as a whole? Risk management at this level is an art, not a science.
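A rough Monte Carlo sketch of that effect, with invented cost distributions: the program-level P80 comes out lower than the sum of the individual project P80s because independent overruns partially offset one another.

```python
import random

projects = [(1.0, 2.0, 4.0)] * 5      # (min, likely, max) risk cost per project, £m

def p80(samples):
    """80th percentile of a list of samples."""
    return sorted(samples)[int(0.8 * len(samples))]

trials = 20_000
per_project = [[random.triangular(lo, hi, mode) for _ in range(trials)]
               for lo, mode, hi in projects]
program_totals = [sum(run) for run in zip(*per_project)]

print(f"sum of individual P80 budgets: £{sum(p80(s) for s in per_project):.1f}m")
print(f"program-level P80 budget:      £{p80(program_totals):.1f}m")
# The program figure is lower, freeing contingency, at the cost of each
# project holding less than an 80% confidence budget of its own.
```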
The benefits are even greater if similar projects are managed together, allowing for a database of lessons learned, stored in a risk tool such as Risk Decisions' Predict!, to be applied to each project, resulting in a further reduction in contingency.
It is clear the benefits of program risk management can be significant. To be effective, however, risk still has to be managed appropriately at project level. This requires each project to have a risk scoring system relevant to its size and business significance, and risk categories (for example, lists of suppliers and contractors) appropriate to the project. But these necessary differences cause problems when trying to get a consistent view across projects. When it comes to risk scoring at program level, you'll want each project's risks to be recalibrated, so that they can be reviewed against program-level criteria, while leaving the project level to view risks using the original scores (for example, to avoid smaller projects having all their risks weighted as being of minimum impact). For categorization, the program manager needs to decide which categories of risk need to be reviewed across projects; for example, where one supplier is the cause of a large number of risks across different projects and program- or function-level action is required. This can only be achieved if all projects use a consistent set of supplier names and codes.
Many organizations manage their project risks in individual MS Excel spreadsheets, and while that works for small, isolated projects, it's really hard to consolidate and aggregate risks held in different spreadsheets across a program or business. Using a risk management software tool such as Risk Decisions' Predict! allows risks to be managed at a level appropriate to the size and complexity of individual projects, whilst giving the program manager a cross-project view at a level appropriate to them. This requires no additional input, or manual aggregation of disparate and inconsistent spreadsheets.
Another advantage of a risk management tool is that it records an audit trail and allows for access by multiple users concurrently. It is difficult to control who edits a spreadsheet, especially with multiple users. It is estimated that some 90% of spreadsheets have errors or inconsistencies in them, something that can be avoided by using a risk tool.
However, while the business benefits of such an approach are clear, this also requires a culture of openness, where sharing information on risks is encouraged, not frowned upon. Good communication between the project managers, the program risk manager and company management will deliver the most effective results.
Program risk management has a number of benefits, including reduced contingency budgets, the ability to focus resources on top risks for the program and not just for individual projects, identifying inconsistencies, identifying systemic risks, and, last but not least, cost savings by mitigating risks at program level. Using a risk tool such as Predict! allows you to manage all projects in a central place, incorporate lessons learned and eliminate many of the challenges MS Excel presents, by allowing for concurrent use and avoiding spreadsheet errors with built-in reports and analysis tools.
Data-driven decision-making is at the heart of modern business success. Whether you're forecasting sales, evaluating financial risks, conducting market research, or assessing project feasibility, having the right analytical model can make all the difference. But with so many analytical models available, how do you know which one fits your needs?
The "Big Book of Models" is your go-to guide for understanding the most popular analytical models available through Lumivero solutions. In this eBook, you'll explore a range of powerful tools, from ARIMA and linear regression to Monte Carlo simulation and sensitivity analysis, each designed to help you navigate complex business challenges. With the right analytical models, you can leverage historical data to transform raw information into actionable insights.
Discover how these models work, their typical applications, and real-world examples of how they provide valuable insights. Whether you're refining your financial strategy, predicting customer demand, improving consumer insights, or optimizing project outcomes, this resource will help you make confident, data-driven decisions.
Explore a range of essential tools, including:
See real-world examples of how these analytical models drive smarter business decisions across industries. Learn how organizations use these techniques to improve forecasting accuracy, mitigate risks, enhance consumer insights, and make more informed strategies.
Gain a deeper understanding of each model's function and best use cases. The eBook breaks down complex methodologies into easy-to-understand concepts, making it accessible to business leaders, analysts, and decision-makers alike.
Choosing the right analytical model can be a game-changer for your business. With the "Big Book of Models," you'll have the knowledge and tools needed to make more confident, data-driven decisions.
Download your free copy today and start optimizing your decision-making process.
Our Breakthroughs 2025 webinar series is available on-demand, and it's packed with practical insights for researchers ready to enhance their research process now, not someday.
While the series spans both qualitative and quantitative research, the qualitative track stood out for its real-world research practices that you can apply today to get deeper, more efficient insights with ethical AI use. Whether you're running interviews, organizing codes, or wrangling massive datasets, there's something here for you.
Across the qualitative sessions, three big ideas kept surfacing:
The tools, techniques, and strategies from Breakthroughs 2025 aren't just interesting: they're immediately useful. Whether you're new to qualitative research or managing a large-scale mixed methods project, there are tactical ways to do better work right now. Buy NVivo today to get started.
Here's how these insights translate into real actions:
Use NVivo's autocoding features.
Dr. Marret Bischewski's session showed how NVivo can automatically code structured interviews and identify speakers in transcripts and media files. This saves time and sets you up to dive deeper into interpretation more efficiently.
Try this: Train NVivo to autocode by coding a portion of your data, then let the autocoding feature use your coding patterns to do the rest, quickly identifying key themes by analyzing your material.
Follow a "human-in-the-loop" model.
Dr. Susanne Friese emphasized the importance of using AI responsibly in qualitative research. Transparency, clear documentation, and researcher oversight are non-negotiable.
Try this: When you use AI in your research, build moments into your workflow where you pause, reflect, and decide whether the AI's interpretation holds up.
Let AI Assist handle the basics.
Dr. Ben Meehan demonstrated how NVivo 15's AI Assistant helps with descriptive work, generating initial codes and summaries, so you can focus on making meaning.
Try this: When using AI-generated codes or summaries, always note where and how AI contributed. Make it a part of your audit trail.
Don't miss the full depth of ideas, expert tips, and practical methods shared in the Breakthroughs 2025 qualitative track. Watching on-demand gives you access to every session, demo, and takeaway, so you can learn directly from the experts.
Our Breakthroughs 2025 webinar series is now available on-demand, and the quantitative track delivers immediate, tactical value for professionals who need to make better decisions, fast.
Whether you're managing Agile projects, modeling business risks, or translating raw data into insights, these sessions were packed with powerful quantitative data analysis techniques you can use right now.
Across the quantitative sessions, three patterns stood out:
These sessions weren't just interesting; they were immediately useful. Whether you're working in project management, finance, sensory analysis, marketing, or engineering, the quantitative tools and statistical techniques shared in Breakthroughs 2025 are about solving real problems right now. Buy @RISK and XLSTAT today to get started.
Here's how these insights translate into practical action:
Integrate it into your backlog.
PMI-authorized instructor Mohamed Khalifa made it clear: risk management in Agile isn't an add-on; it's embedded.
Try this: In your next sprint planning, treat risks like regular backlog items to ensure they're actively managed. Use risk burn-down charts and user story mapping to track progress, not just on features, but on risk responses too.
Use Monte Carlo simulation in Excel.
Jose Orellana, Solutions Consultant at Lumivero, showed how @RISK turns static spreadsheets into dynamic decision models. From project costing to oil and gas scenarios, Monte Carlo simulations let you see the full range of possible outcomes, and plan for them.
Try this: Add probability distributions to your project cost estimates. Run simulations using @RISK to identify the most sensitive variables and reduce blind spots in your financial planning.
Refine it into real insights.
Dr. James Abdey from the London School of Economics emphasized that data isn't valuable until it's refined. XLSTAT can help you turn messy data into predictive insights, fast.
Try this: After running your analysis, use XLSTAT's visualization tools to reveal hidden patterns and make your findings easier to understand and act on.
Don't just read about it: see the methods in action. The Breakthroughs 2025 quantitative sessions are packed with expert demos, hands-on modeling, and clear use cases across industries.
Did you know nearly 70% of enterprise data goes unused? That's a staggering amount of untapped potential: insights and opportunities that could drive innovation, improve decision-making, and shape tomorrow's breakthroughs.
So, how do you unlock this hidden value?
Discover new strategies and expert tips with our new on-demand webinar series, Breakthroughs 2025. Our recent 3-day webinar series brought together leading experts and professionals to explore cutting-edge tools, technologies, and methodologies in both qualitative and quantitative research. Now available on demand, this series offers invaluable insights to help you harness the full potential of your data, refine your decision-making processes, and drive impactful results in the year ahead.
Watch the recordings now or continue reading to learn the highlights from this event.
In this insightful session, expert Dr. Susanne Friese, Founder at Qeludra, explored the evolving intersection of AI technology and qualitative research ethics. Drawing on her extensive experience, she emphasized the need for researcher responsibility and transparency when integrating AI tools into qualitative analysis. The discussion covered key concerns such as the risks of over-reliance on automation, the necessity of robust data protection, and the importance of researcher reflexivity in maintaining ethical standards.
Dr. Friese reinforced the idea that AI should enhance, not replace, human expertise, advocating for a "human-in-the-loop" approach where researchers remain actively engaged in guiding and interpreting AI-generated insights. Looking ahead, she anticipates a significant shift in qualitative research methods, moving beyond traditional coding toward more interactive, AI-assisted conversational analysis: a dynamic process where researchers and AI work together to interpret data while upholding methodological integrity.
Topics covered:
In this beginner-friendly session, Dr. Marret Bischewski, Senior Software Specialist at Alfasoft, provided an in-depth introduction to coding in NVivo for qualitative research. She guided participants through key concepts, distinguishing between deductive and inductive approaches, and demonstrated various manual, automated, and structural coding techniques to effectively analyze qualitative data.
The workshop covered coding across multiple data types including text, PDFs, images, and media files, while also demonstrating how to organize codes hierarchically for better structure. Participants also learned how to review and refine their coding, generate codebooks, and utilize NVivo's AI-powered tools for deeper insights.
One of the most impactful aspects of the session was the emphasis on NVivo's structured yet flexible approach to qualitative coding, making it easier for researchers to analyze and manage their data efficiently. A particularly exciting demonstration showcased NVivo's autocoding capabilities, which allow for the automatic coding of structured interviews and media files, helping researchers streamline their workflow and extract insights more quickly.
Topics covered:
In this hands-on session, Dr. Ben Meehan, CEO at QDATRAINING Ltd, provided a step-by-step walkthrough of how to apply Reflexive Thematic Analysis (RTA) using NVivo 15, drawing on the foundational work of Braun and Clarke. He emphasized the importance of audit trails, methodological rigor, and practical tools to enhance coding, theming, and qualitative analysis.
The workshop demonstrated how NVivo supports each stage of the RTA process, from data familiarization to final analysis and reporting. Key features such as AI Assist, memos, annotations, and framework matrices were showcased, illustrating how they save time, improve transparency, and enable researchers to focus on analysis rather than administrative tasks.
One of the central ideas from the session was the value of maintaining a well-documented audit trail, which, when combined with NVivo's built-in tools, enhances transparency, rigor, and the credibility of qualitative research findings. A standout feature demonstrated was NVivo's AI Assist, which provides automated summaries and suggested descriptive codes, significantly reducing manual workload while ensuring ethical and methodologically sound data handling.
Topics covered:
"AI Assist in NVivo doesn't replace your thinking; it accelerates the descriptive work so you can focus on interpretation and meaning." – Dr. Ben Meehan
Focusing on best practices for risk management in Agile project environments, this session, featuring PMI-authorized instructor and consultant Mohamed Khalifa, explored how risk identification and mitigation strategies fit into Agile workflows. Khalifa provided an in-depth look at Agile principles and methodologies, including Scrum and Kanban, and demonstrated how risk management strategies should be seamlessly integrated into Agile processes.
A key focus of the discussion was the proactive and iterative nature of Agile risk management, emphasizing that it should be embedded within daily project planning rather than treated as a separate activity. Using real-world examples from banking projects, Khalifa illustrated how Agile teams can effectively manage uncertainty while maintaining adaptability. He also addressed audience questions on software tools and backlog prioritization.
A major takeaway from the session was the shift from traditional risk registers to dynamic backlog adjustments, ensuring that risk responses are prioritized alongside feature development. One of the most valuable insights was the importance of continuous risk management, where teams actively assess and address risks as part of their daily workflows, rather than through periodic reviews.
Topics covered:
In this practical session, Jose Orellana, Solutions Consultant at Lumivero, demonstrated how @RISK can be applied across various industries to enhance risk analysis and decision-making. The workshop showcased the power of Monte Carlo simulation, illustrating its role in project costing, physics-based modeling, and oil and gas scenarios. Orellana showed how integrating probability distributions into Excel models with @RISK enables businesses to better anticipate uncertainty and make data-driven decisions.
This workshop provided hands-on experience with key techniques including sensitivity analysis, tornado charts, and probabilistic goal-seeking, all designed to optimize outcomes. The session also introduced advanced features like VBA automation and Six Sigma process analysis, highlighting how @RISK extends beyond basic spreadsheet modeling to support complex decision frameworks.
A key takeaway from the session was how @RISK transforms static Excel models into dynamic, probability-driven tools, helping businesses quantify risk and refine strategies with greater confidence. Orellana showed how Monte Carlo simulation provides a structured approach to modeling uncertainty, simplifying complex decisions and enabling more informed business and engineering choices.
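For readers who want to see the shape of a tornado chart without the session demo, here is a hedged Python sketch of the underlying idea: swing each input between an assumed low and high while holding the others at the base case, then rank the inputs by how much they move the result. The toy profit model and ranges below are invented; @RISK automates this kind of analysis against your actual Excel model.

```python
import numpy as np
import matplotlib.pyplot as plt

def profit(price, volume, opex):
    """Toy model standing in for a real spreadsheet calculation."""
    return price * volume - opex

# Assumed (low, base, high) values for each input -- placeholders only
inputs = {
    "price":  (40, 50, 65),
    "volume": (8_000, 10_000, 11_000),
    "opex":   (250_000, 300_000, 380_000),
}
base = {name: vals[1] for name, vals in inputs.items()}

# Swing one input at a time between its low and high value
swings = {}
for name, (lo, _, hi) in inputs.items():
    outcomes = sorted([profit(**{**base, name: lo}), profit(**{**base, name: hi})])
    swings[name] = outcomes

# Plot the widest swings at the top -- the classic tornado shape
order = sorted(swings, key=lambda k: swings[k][1] - swings[k][0])
for y, name in enumerate(order):
    low_end, high_end = swings[name]
    plt.barh(y, width=high_end - low_end, left=low_end)
plt.yticks(range(len(order)), order)
plt.axvline(profit(**base), linestyle="--")
plt.xlabel("Profit")
plt.title("Tornado chart: which input moves the result most?")
plt.show()
```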
Topics covered:
Drawing from his book, "Business Analytics: Applied Modelling and Prediction," Dr. James Abdey from the London School of Economics led this insightful session on practical approaches to business analytics and data-driven decision-making using Excel. This webinar explored key statistical methods, data visualization techniques, and tools for transforming raw data into actionable insights.
The discussion emphasized the importance of statistical literacy, the challenges of teaching analytics, and real-world applications using Excel and XLSTAT. Dr. Abdey stressed the need to clearly define business objectives and highlighted the value of collecting and analyzing survey data for informed decision-making.
One of the most compelling insights was Dr. Abdey's analogy comparing data analysis to oil refinement: just as crude oil must be processed to become useful, raw data needs to be refined through analytics to extract meaningful business insights. A core message from the session was that technical proficiency alone isn't enough; effective business analytics also requires strong communication skills to interpret and convey insights that drive strategic decisions.
Topics covered:
Breakthroughs 2025 is your gateway to mastering the tools and strategies that drive data-powered breakthroughs. From refining your qualitative storytelling with thematic analysis to leveraging quantitative insights for elevated decision-making, these on-demand sessions are packed with actionable takeaways designed to elevate your research and results.
*According to Seagate Technology
Generative artificial intelligence (AI) tools have been taking industries (and business functions) by storm, and risk management is no exception. At Lumivero, we believe that effective risk management blends innovation with informed decision-making, and AI is becoming an integral part of that process.
To explore this developing dynamic, David Danielson, Senior Product Strategist at Lumivero, brought together a panel of risk management experts from a range of industries to discuss how AI tools are impacting the work they do now, and what changes could be in store for the future.
Panelists included:
In this lively discussion, the panel examined how AI is reshaping risk management, highlighting its benefits, limitations, and potential roadblocks to adoption. They explored challenges like bias and accuracy, as well as real-world examples of companies integrating AI to strengthen their operations and decision-making.
Watch the webinar or continue reading to hear what they had to say.
David opened the discussion by asking the panel for their impressions of AI's role in risk management today.
"I think of AI as a thought partner to the modeler, the risk manager, or the project manager," said Manuel Carmona. He emphasized that AI can enhance predictive accuracy by detecting patterns in historical data that traditional probabilistic models or human experts might overlook. Purpose-built AI models, he explained, can quickly analyze risk registers, project plans, and financial models to identify emerging threats. However, he cautioned that AI should be seen as a "sidekick" rather than a leader in risk management. The panelists agreed: while AI enhances automation, it is not yet sophisticated enough to operate without human oversight.
"I think there's great opportunity here for AI in the risk space," Quinton van Eeden added, "but let's not get stuck on a black box. Let's do the uniquely sentient human endeavor of applying our minds to the problem."
Glen Justis pointed out that the fundamentals of the risk management function haven't changed simply because a powerful new tool is available. "It's [still] all about identifying, evaluating and managing threats to business performance," he said.
For decades, financial risk management has relied heavily on spreadsheets, with Microsoft Excel serving as the default tool for modeling and analysis. While useful, Excel alone often lacks the sophistication needed to capture the full complexity of financial risk. The panel agreed that AI and machine learning offer a path forward, providing finance teams with more advanced, dynamic risk analysis capabilities.
Lachlan Hughson, drawing on his experience in finance within the energy sector, argued that it's time for finance professionals to move beyond outdated tools and embrace more powerful risk modeling techniques, methods that have been standard in science and engineering for years. AI-driven analytics, he noted, have the potential to add significant value by enhancing how organizations understand uncertainty and variability in financial risk.
"It's time to get the finance function that currently uses a 30-year-old tool, Excel, to upgrade its contribution in a way that really can add significant dollar value." Lachlan noted that he was encouraged to see so many attendees indicate that they were already using @RISK and other Monte Carlo simulation-based risk analysis software. "That gives us a much broader way to understand risk, to understand variability, to understand uncertainty."
Manuel Carmona gave an example of how he had used AI to help a company develop a more accurate financial risk model across multiple business units. After developing an initial probabilistic risk register based on consultation with each business unit leader, Manuel and his colleagues turned to AI.
"We fed [the risk estimates] into the AI system just as a second filter, providing lots of context," Manuel explained. They then compared the AI's outputs to those provided by the business unit leaders and used variances between the two to further refine their model. The result was a financial risk model far more sophisticated and dynamic than simple spreadsheets and basic linear regressions.
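A minimal sketch of that "second filter" comparison, with entirely invented register entries and a made-up review threshold, might look like this: compute the expected value of each risk from both the expert and the AI estimates, then flag large variances for follow-up rather than accepting either number outright.

```python
# Hypothetical risk register: expert estimate vs. AI "second filter" suggestion
# (probability as a fraction, impact in $k). All values are invented.
register = {
    "Key supplier insolvency":   {"expert": (0.10, 500), "ai": (0.18, 450)},
    "FX exposure on EU revenue": {"expert": (0.30, 200), "ai": (0.28, 210)},
    "Data-centre outage":        {"expert": (0.05, 800), "ai": (0.12, 900)},
}

REVIEW_THRESHOLD = 0.25  # flag risks where the two expected values differ by more than 25%

for risk, est in register.items():
    expert_ev = est["expert"][0] * est["expert"][1]   # expected value = probability x impact
    ai_ev = est["ai"][0] * est["ai"][1]
    variance = abs(ai_ev - expert_ev) / expert_ev
    flag = "REVIEW with business unit leader" if variance > REVIEW_THRESHOLD else "ok"
    print(f"{risk:26s} expert EV={expert_ev:6.1f}  AI EV={ai_ev:6.1f}  diff={variance:5.1%}  {flag}")
```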
Updating and rebuilding risk models as data changes is a time-consuming task, often requiring hundreds of hours from analysts and programmers. AI can streamline this process, reducing manual effort and making model adjustments more efficient.
Quinton van Eeden was enthusiastic about the potential of neural networks to create operational risk models with automated update capabilities, describing the progress he had been able to make with a South African mining company. The company was continually running into issues with oversupplies of buffer stock. Fortunately, they had invested in plenty of sensors and other tracking devices that generated large amounts of data for every stage of production.
"We were able to pull that [data] and build a little Monte Carlo model initially," van Eeden explained. "But then they decided to populate and train a neural network with the data based on the dependent variable. The mining company now has a model that constantly adjusts itself (again, subject to review by human experts) and is beginning to provide realistic projections they can use to make adjustments along the production line."
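The mechanics of a "constantly adjusting" model can be sketched roughly like this, with synthetic sensor data and a small scikit-learn network standing in for the mining company's actual setup: periodically refit on the latest batch, then hold the projection for human review before it drives ordering decisions.

```python
import numpy as np
from sklearn.neural_network import MLPRegressor

rng = np.random.default_rng(1)

def latest_sensor_batch(n=500):
    """Stand-in for the plant's sensor feed: stage-level features plus the
    dependent variable (buffer stock actually consumed)."""
    X = rng.normal(size=(n, 4))  # e.g. throughput, downtime, ore grade, crusher load
    y = 100 + X @ np.array([8.0, -5.0, 3.0, 1.5]) + rng.normal(0, 2, n)
    return X, y

model = MLPRegressor(hidden_layer_sizes=(16,), max_iter=2000, random_state=0)

# Periodic retraining loop: refit on the newest data, then queue the projection for expert review
for cycle in range(3):
    X, y = latest_sensor_batch()
    model.fit(X, y)
    projected = model.predict(X[-1:])[0]
    print(f"cycle {cycle}: projected buffer-stock demand ~ {projected:.0f} units (pending review)")
```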
David Danielson agreed, explaining that he has seen many companies find similar efficiencies. "AI does provide us with an opportunity to take real-time data and feed it back into the [risk] models and enhance the outcomes faster."
Additionally, automating risk model adjustments would free up time for deeper analytical work, allowing professionals to focus on interpretation rather than manual recalibration. As Glen Justis explained, "With the proper use of AI, you can have the human spend more time in analysis and interpretation of information rather than going back and recalibrating the models manually."
Several of the panelists indicated that many clients were experiencing hesitancy around incorporating AI into risk management processes, especially when making critical operational or financial decisions. Quinton van Eeden suggested starting by showing decision makers small wins in some low-criticality areas, such as document summarization.
Glen Justis explained that there is quite a lot of groundwork involved in getting AI to produce valuable outputs or time-saving opportunities. This has both positive and negative aspects. On the plus side, the fact that organizations need to spend time "basically teeing up the AI engine to give you reliable information," as Glen put it, can reassure decision-makers that the AI is not producing results out of thin air, and that it can genuinely add value. On the negative side, mid- and senior-level risk decision-makers are incredibly busy. "There's so many pressures to just get the basic business governance work done that people don't have time to really scratch the surface of [new technology]," said Justis.
The panelists agreed that while newer risk management and finance professionals were learning how to incorporate AI during their education and training, senior-level professionals tended to be most resistant. Quinton van Eeden reasoned that companies would move ahead with AI adoption as competitive pressure to do so arises. "Frankly, the only way a lot of boards can really meet their fiduciary duty is by bringing a more dynamic approach [to risk management]."
A major concern shared by both the panelists and attendees was the accuracy of AI-generated data and outputs. Large language models like ChatGPT and Google Gemini are known to produce errors, or "hallucinations," raising questions about their reliability in risk management.
IBM describes hallucinations as "misinterpretations [that] occur due to various factors, including overfitting, training data bias/inaccuracy, and high model complexity." To users without expertise, hallucinations can seem plausible, leading to decisions based on bad information.
"Everyone is wary or scared of this black box effect," said Manuel Carmona. He explained that he mitigates the risk of inaccurate data by rigorously reviewing AI outputs. "I filter through all the information that comes from the AI," he said. "I compare it to the information that I get from other experts and consultants about the risks." He then takes the AI output and cross-checks it against other LLMs. Finally, he uses Monte Carlo simulation in @RISK as a further check of how realistic the outputs are.
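A bare-bones version of that final plausibility check, using plain NumPy instead of @RISK and entirely invented figures, could look like this: simulate the expert-elicited distribution for a risk and ask whether the AI's point estimate falls inside a credible range.

```python
import numpy as np

rng = np.random.default_rng(7)

# Expert-elicited distribution for one risk's cost impact (illustrative parameters)
simulated = rng.triangular(50_000, 120_000, 400_000, 20_000)

ai_estimate = 310_000  # hypothetical point estimate suggested by an AI tool

p5, p95 = np.percentile(simulated, [5, 95])
if p5 <= ai_estimate <= p95:
    print(f"AI estimate {ai_estimate:,} is inside the simulated P5-P95 range "
          f"({p5:,.0f} to {p95:,.0f}); keep it for expert review.")
else:
    print(f"AI estimate {ai_estimate:,} falls outside the simulated range; "
          f"treat it as a possible hallucination and re-check the sources.")
```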
Quinton van Eeden again stressed the importance of well-informed human oversight when using AI tools. "There's no algorithm in the likes of ChatGPT to check [whether] whatever it produces is true," he reminded attendees. "ChatGPT is only usable if you know the subject very, very well, because it does make embarrassing mistakes that only a true expert or a connoisseur can detect."
The future of AI in risk management isn't just promising; it's full of possibilities. The panelists saw generative AI not as something new, but rather as a natural evolution in automation, and noted that AI would lead to other types of innovations we can't imagine yet.
"When electricity was [harnessed]," Manuel explained, "that led to the invention of the light bulb. Once people start figuring out how to use [AI] in real-world applications, we are going to see a massive change in all sorts of professional applications and life in general."
The panelists' discussion made one thing clear: AI is set to enhance risk management efforts, not replace its fundamentals. While new tools will improve efficiency and expand analytical capabilities, the core principles of risk management remain unchanged. Wrapping up the session, David Danielson reinforced this point, stating, "The best practices [in risk management] will stay the same. But I think the methods and the efficiency of getting to those will improve over the next couple of years." As AI continues to evolve, its greatest impact will come from empowering professionals to make smarter, faster, and more informed decisions.
Ready to transform how you manage risk? Request a demo of @RISK today.
Glen Justis
Senior Partner at Experience on Demand, LLC
With over 30 years of experience in consulting and industry, Glen Justis has built an exceptional reputation for assisting clients at the intersection of strategy, economics, and risk management. He employs a goals-driven approach to address strategic and tactical issues, ensuring the implementation of optimal solutions tailored to client needs.
Manuel Carmona
Risk and Decision Analysis Specialist at EdyTraining Ltd
Manuel Carmona, MBA-RMP, is a specialist in risk and decision analysis with a focus on project risk management. He has extensive experience in managing risks in projects using Schedule Risk Analysis and is recognized for his contributions to leading risk management standards.
Quinton van Eeden
Quantitative Project Risk Analyst & Planner, TPG GRC
Quinton van Eeden is a risk/decision analyst with more than 30 years' experience in enterprise/project risk within mining and other industries. He is a lawyer by training and holds advanced-level professional PMI certifications in project management and project risk management, as well as a master's degree in information and knowledge management.
He specializes in the application of quantitative modeling and analysis techniques to elucidate the effect of uncertainty on business decisions, project estimates, operations, and strategic investment decisions so as to achieve Decision Quality.
Lachlan Hughson
Founder, 4-D Resources Advisory LLC
Lachlan Hughson, the founder of 4-D Resources Advisory LLC, has over 30 years of experience in corporate finance, M&A, and capital markets across the oil/gas, renewables, and mining/metals industries, and as an investment banker and finance director, undertaking $30+ billion of M&A and $15+ billion of capital raising as an agent and principal. His education includes an MSc from Imperial College London and an MBA from the Kellogg School of Management. More information can be found at 4-dresourcesadvisory.com.