Making the right decision can be crucial to gaining a competitive advantage. With sophisticated optimization, you can locate the best solution in your model and also determine the precise values for all the choices that need to be made for that decision. Get ahead of the competition by modeling the best set of options to achieve your goal.

Good business decisions give your organization a competitive advantage. Optimal decisions make that advantage massive. The complex interplay between inputs, assumptions, and constraints within the model structure makes achieving optimality an impossible task without the right tool.

Sophisticated optimization techniques tell you exactly what you should do to get the best result possible in any situation for your organization.

The Monte Carlo method is a computerized mathematical technique that allows people to quantitatively account for risk in forecasting and decision-making. At its core, the Monte Carlo method is a way to use random samples of parameters to explore the behavior of a complex system. A Monte Carlo simulation is used to handle an extensive range of problems in a variety of different fields to understand the impact of risk and uncertainty.
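As a minimal sketch of the idea, consider a hypothetical project-cost model; every figure and distribution below is invented for illustration, not taken from any real model:

```python
import random

def project_cost():
    """One trial: sample each uncertain input and compute the outcome."""
    labor = random.gauss(100_000, 15_000)           # assumed normal labor cost
    materials = random.uniform(40_000, 80_000)      # assumed uniform materials cost
    delay = 25_000 if random.random() < 0.2 else 0  # assumed 20% chance of a delay
    return labor + materials + delay

random.seed(42)
outcomes = [project_cost() for _ in range(10_000)]
mean_cost = sum(outcomes) / len(outcomes)
prob_over_budget = sum(c > 200_000 for c in outcomes) / len(outcomes)
```

Rather than a single cost estimate, the result is a distribution of 10,000 possible costs, from which risk measures such as the chance of exceeding a budget fall out directly.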

Optimization refers to a process or analysis that determines the set of decisions that maximizes or minimizes a key model output. These techniques apply to virtually any domain and are commonly used in construction & engineering, logistics & transportation, finance & banking, insurance & reinsurance, energy & utilities, manufacturing & consumer goods, and other industries.

Sophisticated optimization gives the decision maker the precise values for all choices that need to be made – essentially what each controllable input should be. These optimal decisions will preserve logical real-life constraints and business rules, ensuring only feasible solutions are presented. Progress can be tracked visually, during the analysis, to assist in communicating the process and final, optimal outcome.

- Inventory management
- Budgeting
- Resource and production scheduling
- Product and marketing mix
- Supply chain planning
- Project, loan, and investment portfolio maximization
- Market entry timing, and more

Optimization operates on your model by trialing a potential solution, checking for feasibility, “learning” from the result, and then determining the best “direction” to search for superior solutions. This process repeats until the search converges on the best solution found, ideally the global optimum.
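That trial-check-learn loop can be sketched as a simple stochastic hill-climb over one decision variable. The profit model and price bounds below are invented for illustration:

```python
import random

def profit(price):
    # Hypothetical model output: unit margin times demand, peaking near 72.5.
    demand = max(0.0, 1000 - 8 * price)
    return (price - 20) * demand

def feasible(price):
    # Constraint: price must stay inside the range available to the decision maker.
    return 30 <= price <= 120

def optimize(trials=5000, step=5.0):
    random.seed(1)
    x = 30.0
    best_x, best_y = x, profit(x)
    for _ in range(trials):
        candidate = x + random.uniform(-step, step)  # trial a nearby solution
        if not feasible(candidate):                  # check feasibility
            continue
        if profit(candidate) > profit(x):            # "learn": keep improvements
            x = candidate
            if profit(x) > best_y:
                best_x, best_y = x, profit(x)
    return best_x, best_y

best_price, best_profit = optimize()
```

In this toy model the true optimum is a price of 72.5 (profit 22,050), which the loop approaches without ever evaluating an infeasible price.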

To ensure feasibility, decision variables are constrained to the ranges available to the decision maker. Such constraints could be real-life limitations (such as availability of airline seats or similar inventory) or business rules (such as avoiding overtime costs). Additional, complex constraints on any aspect of the model, or interplay between the decision variables and model assumptions, are tested with each trial to ensure only viable solutions are permitted.

The current best value of the output during an optimization is tracked in real time and can be displayed graphically, highlighting the rapid initial improvement in solutions, occasional spikes as breakthroughs are made, and the final convergence to the optimal solution.


With PrecisionTree, you never leave your spreadsheet, so you can work in a familiar environment and get up to speed quickly.

See results in risk profile graphs, 2-way sensitivity, tornado graphs, spider graphs, policy suggestion reports, and strategy-region graphs.

Set up your decision tree in Microsoft Excel exactly as you need it with logic nodes, reference nodes, linked trees, custom utility functions, and influence diagrams.

Linear and Nonlinear Optimization

In general, optimization problems fall into one of two categories: linear and nonlinear.

There are many different optimization, or “solving,” methods, some better suited to certain types of problems than others. Linear problems are typically handled by linear programming (for example, the simplex method). Nonlinear and combinatorial problems call for metaheuristic techniques such as tabu search, scatter search, and genetic algorithms. Genetic algorithms mimic the evolutionary processes of biology by introducing random new solutions (called “mutations”) while simultaneously refining what appear to be promising solutions (“organisms”). By introducing random mutations, genetic algorithms are able to escape local optima and “evolve” a better overall, or global, solution than simple hill-climbing methods typically find.
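A toy genetic algorithm might look like the sketch below. The multimodal objective, population size, and mutation rate are all invented for illustration; real products use far more sophisticated operators:

```python
import math
import random

def fitness(x):
    # Multimodal toy objective: several local peaks, one global peak at x = 0.
    return 3 * math.cos(x / 2) - abs(x) / 4

def genetic_search(pop_size=50, generations=80):
    pop = [random.uniform(-15, 15) for _ in range(pop_size)]  # random "organisms"
    for _ in range(generations):
        pop.sort(key=fitness, reverse=True)
        parents = pop[: pop_size // 2]            # keep the most promising half
        children = []
        while len(children) < pop_size - len(parents):
            a, b = random.sample(parents, 2)
            child = (a + b) / 2                   # crossover: blend two parents
            if random.random() < 0.3:             # random "mutation"
                child += random.gauss(0, 2)
            children.append(child)
        pop = parents + children                  # elitism: the best are never lost
    return max(pop, key=fitness)

random.seed(7)
best = genetic_search()
```

A pure hill-climber started on one of the side peaks would stall there; the random mutations give the population repeated chances to jump into the global basin around zero.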

Linear problems are characterized by a linear mathematical relationship between the input decision variables and all constraints, as well as with the output. For example, if an input goes up by x, then the output goes down by 2x, subject to a similarly simple constraint. Scheduling the shortest route (assuming straight lines and constant speed) to stop at a given number of destinations is a common example of a linear optimization problem.
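For a handful of stops the shortest-route problem can even be solved by brute-force enumeration; the depot and stop coordinates below are invented for illustration:

```python
import itertools
import math

depot = (0, 0)
stops = [(2, 3), (5, 1), (1, 6), (4, 4)]   # hypothetical delivery points

def route_length(order):
    # Total straight-line distance: depot -> each stop in order -> depot.
    path = [depot, *order, depot]
    return sum(math.dist(p, q) for p, q in zip(path, path[1:]))

best_order = min(itertools.permutations(stops), key=route_length)
best_length = route_length(best_order)
```

With n stops there are n! orderings, so enumeration stops being practical very quickly; that combinatorial explosion is exactly why the solving methods above exist.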

Nonlinear problems are a bit more advanced and feature at least one nonlinear relationship between decision variables and a constraint or the output. For example, if an input goes up by x, then the output goes up by some value to the power of x. Maximizing return on an investment portfolio subject to constraints on risk is an example of a common nonlinear optimization problem.

Feature | Benefit | Professional Edition | Industrial Edition |
---|---|---|---|
Optimization under uncertainty | Combines Monte Carlo simulation with sophisticated optimization techniques to find optimal solutions to uncertain problems. Used for budgeting, allocation, scheduling, and more. | | |
Efficient Frontier Analysis | Especially useful in financial analysis, Efficient Frontiers determine the optimal return that can be expected from a portfolio at a given level of risk. | | |
Ranges for adjustable cells and constraints | Streamlined model setup and editing | | |
Genetic algorithms | Find the best global solution while avoiding getting caught in local, “hill-climbing” solutions | | |
Six solving methods, including GAs and OptQuest | Always have the best method for different types of problems | | |
RISKOptimizer Watcher and Convergence Monitoring | Monitor progress toward best solutions in real time | | |
Overlay of Optimized vs Original Distribution | Compare original output to optimized result to visually see improvements | | |
Original, Best, Last model updating | Instantly see the effects of three solutions on your entire model | | |

Types of Optimization Models

Beyond simple linear versus nonlinear classification of optimization problems, there are a number of other dimensions to this type of analysis.

Unconstrained Versus Constrained Optimization Models

Most practical optimization problems involve constraints of some kind – real-life limitations such as budget ceilings, schedules, or resource availability. These are called constrained optimization models. Sometimes, however, unconstrained optimization techniques arise, especially as a way to revisit a constrained model. Often, a constrained optimization analysis may not produce results that are good enough, and so the “ideal” constraints must be removed or relaxed, and the model reconsidered. When this happens, constraints can be replaced by penalty functions which allow the formerly “illegal” values to be considered, but apply some kind of “penalty,” such as an additional cost, when they occur. In this way, more realistic situations and options can be modeled.
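A sketch of the penalty idea, using an invented weekly staffing model: the hard cap on hours is replaced by an overtime penalty, so formerly infeasible schedules can be weighed on cost instead of being rejected outright:

```python
def cost(hours):
    # Hypothetical weekly cost: wages plus a cost for any production shortfall.
    output = 120 * hours
    shortfall = max(0, 10_000 - output)
    return 25 * hours + 3 * shortfall

def penalized_cost(hours, cap=80, overtime_rate=60):
    # Penalty function: hours beyond the cap are "illegal" but merely expensive.
    return cost(hours) + overtime_rate * max(0, hours - cap)

hard_best = min(range(0, 81), key=cost)             # constrained: hours <= 80
soft_best = min(range(0, 201), key=penalized_cost)  # constraint relaxed via penalty
```

In this toy model a little overtime (four hours) more than pays for itself, an option the hard-constrained model could never discover.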

Continuous Versus Discrete Optimization Models

In some optimization models, the variables in question lend themselves to a defined, limited set of possible values – often integers. You can only assign whole numbers of people to a production schedule, for example. These are discrete optimization models. Other models contain variables that can take on any value. For instance, you could invest any amount of dollars and cents in a given asset class of a portfolio. These are continuous optimization models. Continuous optimization problems tend to be easier to solve than discrete ones because the continuum of values enables algorithms to infer the direction of better solutions. However, improvements in algorithms and computing technology have made even complex discrete optimization problems more solvable than ever.
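The difference is easy to see with a made-up staffing cost curve:

```python
def cost(crew):
    # Hypothetical cost: both under- and over-staffing are expensive.
    return (crew - 7.4) ** 2 + 50

continuous_best = 7.4                        # any real value is allowed
discrete_best = min(range(0, 21), key=cost)  # only whole people can be scheduled
```

The discrete optimum (7 people) is close to, but necessarily no better than, the continuous one; with many interacting integer variables, simply rounding a continuous answer can be far from the true discrete optimum.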

No-Objective, Single-Objective, and Multi-Objective Optimization Models

Most optimization problems have a single goal (or objective function) to solve – minimize a cost, or maximize a return, for example. However, there are cases when optimization models have no objective function. In feasibility problems, the goal is to find values for the variables that satisfy the constraints of a model with no particular objective to optimize. By contrast, multi-objective optimization problems arise as well, in fields such as engineering, economics, and logistics. In these cases, optimal decisions need to be made while considering trade-offs between two or more conflicting objectives. For example, developing a new industrial component might involve minimizing weight while maximizing strength, or choosing a financial portfolio might involve maximizing the expected return while minimizing risk. These problems are modeled in optimization software as single objective models by either creating a weighted combination of the different objectives or by replacing some of the objectives with constraints.
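Both reduction strategies can be sketched with invented component designs (weight in kg, strength in kN; the names, numbers, and the kg-to-kN scale factor are all assumptions):

```python
designs = {
    "aluminum":  (2.0, 40.0),
    "steel":     (5.0, 90.0),
    "composite": (1.5, 55.0),
    "titanium":  (3.0, 85.0),
}

def weighted_score(name, w_strength=0.5, w_weight=0.5):
    # Weighted combination of objectives: maximize strength, minimize weight.
    # The factor 10 is an assumed scale to make kg comparable with kN.
    kg, kn = designs[name]
    return w_strength * kn - w_weight * 10 * kg

weighted_best = max(designs, key=weighted_score)

# Alternative: move one objective into a constraint (strength >= 50 kN)
# and optimize the other alone (minimize weight).
strong_enough = [n for n in designs if designs[n][1] >= 50]
constrained_best = min(strong_enough, key=lambda n: designs[n][0])
```

Note that the two reductions pick different designs; the chosen weights or constraint thresholds encode the modeler's view of the trade-off, which is why they deserve sensitivity analysis of their own.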

Stochastic Versus Deterministic Optimization Models

In deterministic optimization, it is assumed that all the data for the given model are known with certainty. However, for many actual problems, the data cannot be known accurately because they represent unknown information about the future (for example, product demand or price for a future time period). In stochastic optimization, or optimization under uncertainty, such uncertainty is incorporated into the model. Probability distributions describing the unknown data can be estimated, and then a Monte Carlo simulation is run for each trial solution the optimization algorithm selects. In this way, a statistic of the simulated solution is optimized – for instance, you may want to minimize the standard deviation of the results to reduce risk. The goal is to find some policy that is feasible for all (or almost all) the possible outcomes and optimizes the expected performance of the model.
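A sketch of optimization under uncertainty, using an invented order-quantity problem: each trial decision is scored by running a Monte Carlo simulation of uncertain demand, and a statistic of the simulated profits is optimized:

```python
import random
import statistics

def simulate_profit(order_qty, n_iter=2000):
    # One Monte Carlo simulation per trial solution. A fixed seed gives every
    # candidate the same demand scenarios ("common random numbers").
    rng = random.Random(123)
    profits = []
    for _ in range(n_iter):
        demand = max(0.0, rng.gauss(100, 25))      # assumed demand distribution
        sold = min(order_qty, demand)
        profits.append(10 * sold - 6 * order_qty)  # assumed $10 price, $6 unit cost
    return profits

candidates = range(60, 141, 5)
best_expected = max(candidates, key=lambda q: statistics.mean(simulate_profit(q)))
lowest_risk = min(candidates, key=lambda q: statistics.stdev(simulate_profit(q)))
```

The chosen statistic matters: in this toy model, maximizing expected profit suggests ordering a bit below mean demand, while minimizing the standard deviation of profit pushes the order quantity to its lower bound.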


During a Monte Carlo simulation, values are sampled at random from the input probability distributions. Each set of samples is called an iteration, and the resulting outcome from that sample is recorded. Monte Carlo simulation does this hundreds or thousands of times, and the result is a probability distribution of possible outcomes. In this way, Monte Carlo simulation provides a much more comprehensive view of what may happen. It tells you not only what could happen, but how likely it is to happen.

Monte Carlo simulation provides a number of advantages over deterministic, or “single-point estimate,” analysis, chief among them that its probabilistic results show not just what could happen, but how likely each outcome is.

An enhancement to Monte Carlo simulation is the use of Latin Hypercube sampling, which samples more accurately from the full range of values within distribution functions and produces results more quickly.
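The stratification idea behind Latin Hypercube sampling can be sketched in one dimension: divide the probability range into equal strata, sample once within each, then shuffle. (In practice each value is then mapped through a distribution's inverse CDF; this sketch stops at the uniform stage.)

```python
import random

def latin_hypercube(n, rng=None):
    # One sample from each of n equal-probability strata of [0, 1),
    # shuffled so the samples are not returned in sorted order.
    rng = rng or random.Random(0)
    samples = [(i + rng.random()) / n for i in range(n)]
    rng.shuffle(samples)
    return samples

lhs = latin_hypercube(100)
```

Plain random sampling can, by chance, leave whole regions of a distribution unsampled; stratification guarantees coverage of every probability band, which is why results stabilize in fewer iterations.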