The University of Pretoria, South Africa, in collaboration with researchers from Utrecht University and the University of New Mexico, has used Lumivero's (previously Palisade) DecisionTools Suite software to create a low-cost, easily implementable model for estimating foodborne contamination and human infection from the avian influenza H5N1 virus in Egypt and Nigeria. The output of this surveillance model, when combined with data generated by several international health organizations, will enable other African countries to better predict, monitor and intensify efforts to stop the spread of this highly contagious disease from animals to humans. This work was covered in the article, Development of Disease-specific, Context-specific Surveillance Models: Avian Influenza (H5N1)-related Risks and Behaviours in African Countries, published in April 2015.
The avian influenza virus – or avian flu – is a fast-spreading infection that affects poultry and potentially people worldwide. While the virus has already adapted to other mammals, including ferrets, guinea pigs and cats, the risk to humans is still not completely understood. This makes monitoring and decreasing the rate of contact between infected poultry and humans critical – in particular, stopping exposure to the virus through the production and preparation processes of contaminated food. According to Dr. Folorunso Oludayo Fasina, a senior lecturer at the University of Pretoria’s Department of Production Animal Studies, it is critical to understand “how the virus gets into the food system, how it spreads and how it can be managed. To do this, we need risk assessment and exposure assessment, as well as a response model. Once we have this information, we can implement measures to stop the risks.”
The University’s Department of Production Animal Studies has significant expertise in disease modelling and risk prediction as part of its epidemiological work, which enabled Dr. Fasina and his colleagues to create a model for foodborne contamination specific to Africa, where the virus has already spread to 12 countries. The team studied both biological and cultural aspects, including food processing, trade and cooking-related practices, and collected data from more than 375 Egyptian and Nigerian sites including homes, local producers, live bird markets, village and commercial abattoirs and veterinary agencies. According to Dr. Fasina, “We took a ‘from the farm to the fork’ approach, and considered farms as well as livestock markets.”
“Risk mitigation and risk prediction remain some of the most useful tools with which to effectively control the continuous perpetuation of outbreaks in newer territories and eradicate the virus where it currently exists,” explained Dr. Fasina. However, building this new model wasn’t an easy task, taking nearly two years to complete. Most of the existing information was qualitative, which made it difficult to set quantitative parameters, and the quantitative data the team did find was inconsistent: it was often out of date, available only for other types of influenza, or had been censored by the government. After attending a training session for the DecisionTools Suite in 2013, Dr. Fasina decided to use the software to generate the quantitative values the team needed.
The team considered several factors in their model, from the concentration levels of the virus in infected meat and the likelihood of contamination between infected and non-infected meat, to differences between genders and age groups with regard to risk exposure. “We asked a lot of questions to generate this data,” explained Dr. Fasina. “This generated a significant amount of output, which required sensitivity analysis and some triangulation.” As a first step, the team used TopRank, part of the DecisionTools Suite, to analyze the sensitivity of each of the identified contributors to the overall risk. This helped the team understand which contributors were the most important.
Next, the team moved to the @RISK tool in the DecisionTools Suite to help predict the different ways the virus could spread. Using Monte Carlo simulation, @RISK can quantify the probabilities of different outcomes – or infection rates – occurring, as well as determine the optimal preventive measures to mitigate the risk of animal-to-person infection. The team used six statistical probability distributions within @RISK to represent different inputs – or risk factors – for their model. They combined the simulated outputs from @RISK with statistical analysis to complete the model, using social data and outbreak information, including human demographic structures in Africa, socio-cultural and behavioral economics and knowledge, and attitudes and perceptions of risk within the countries being investigated.
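To make the mechanics concrete, the sketch below shows the general shape of such a Monte Carlo exposure calculation in Python. The distribution choices, parameter values and dose-response form are illustrative assumptions, not the inputs of the published H5N1 model.

```python
import numpy as np

# Minimal Monte Carlo sketch of a foodborne exposure model.
# All distributions and parameter values are illustrative assumptions,
# not those used in the published H5N1 surveillance model.

rng = np.random.default_rng(42)
n = 10_000  # iterations

# Virus concentration in infected meat (log10 infectious units/g) - assumed lognormal
log_conc = rng.normal(loc=4.0, scale=1.0, size=n)

# Probability that a purchased portion is contaminated - assumed beta
p_contaminated = rng.beta(2, 50, size=n)

# Log10 reduction from cooking and handling practices - assumed uniform
log_reduction = rng.uniform(2.0, 6.0, size=n)

# Residual dose ingested, assuming a 25 g portion
dose = 10 ** (log_conc - log_reduction) * 25

# Simple exponential dose-response with an assumed rate parameter r
r = 1e-6
p_infection_given_exposure = 1 - np.exp(-r * dose)

# Annual number of exposures per person - assumed Poisson
exposures = rng.poisson(lam=30, size=n)

# Combine: per-exposure risk, then annual risk over all exposures
p_ill_per_exposure = p_contaminated * p_infection_given_exposure
annual_risk = 1 - (1 - p_ill_per_exposure) ** exposures

print(f"mean annual risk: {annual_risk.mean():.2e}")
print(f"95th percentile:  {np.percentile(annual_risk, 95):.2e}")
```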
The results revealed numerous opportunities for the avian influenza virus to be spread, and found that the estimated risk for humans was higher than previously reported. “It is very easy for us to miss the influence of viral infections on a community, due to lack of awareness and under-reporting, so people may be more at risk than we’re aware of,” explained Dr. Fasina. “@RISK is a valuable tool to investigate these problems and do risk predictions either prospectively or retrospectively. Utilizing the outputs from models like this can help health policy planners and public health officials to take anticipatory measures to prevent future disasters associated with infectious diseases like the avian flu.”
Originally posted: Feb. 8, 2017
Updated: June 7, 2024
Professor Katsuaki Sugiura of the University of Tokyo uses Monte Carlo simulation in @RISK to improve the bovine spongiform encephalopathy (BSE) surveillance program in Japan and strengthen food safety.
Professor Katsuaki Sugiura at the Laboratory of Global Animal Resource Science at the Graduate School of Agriculture and Life Sciences, the University of Tokyo, has used Lumivero's (previously Palisade) @RISK software since 1995 in his research activities. He has used the risk software to assess risk in the import of animals and livestock products and in food safety risk assessment. Many researchers in the specialized field of veterinary epidemiology also use @RISK, making it easy to utilize in joint research activities.
His current research is on bovine spongiform encephalopathy (BSE) – a progressive and fatal nervous disease found mainly in adult dairy cattle. The cause of BSE is oral exposure to what’s known as an abnormal prion protein. BSE is particularly worrisome because it is transmitted through meat-and-bone meal (MBM), which is rendered from unwanted animal slaughter products and fallen stock by cooking off the water as steam. BSE is characterized by a long incubation period (2–8 years, with a 5-year average).
The first case of BSE in Japan was confirmed in September 2001, and a number of measures were taken to protect animal health as well as public health. One of these measures was the testing for BSE of all cattle slaughtered for human consumption from October 2001. From April 2004, all fallen stock (cattle that died on farms or during transport) older than 24 months were also tested. As a result, through the end of 2012, 36 cows were diagnosed with BSE from a total of 14 million head of cattle slaughtered for human consumption and 910,000 fallen-stock carcasses tested.
There are several diagnostic tests for BSE, and the currently available tests all detect the abnormal prion protein. Normal prion protein exists alongside abnormal prion protein in the brainstems of BSE-infected cattle, so detection relies on proteinase-K, which digests normal prions but leaves the abnormal prions intact. But this diagnostic has limits. Abnormal prion protein accumulates in the brain only toward the end of the incubation period; in other words, an infected cow cannot be detected unless it is tested shortly before the onset of clinical disease. The test cannot detect infected cattle that are slaughtered or die from other causes before the end of the incubation period. Since the incubation period is long and varies between 2 and 8 years, the age of clinical onset is not fixed, and the age at which cattle may die or be slaughtered varies.
In Japan, all cattle slaughtered for human consumption are tested for BSE, as are all cattle over 24 months old that die. However, because of variability in the age at slaughter or death, the duration of the incubation period, and the limited detection capability of the diagnostic test, Professor Sugiura uses Monte Carlo simulation in @RISK to improve the surveillance program. He builds stochastic models that predict how changing the minimum testing age will affect the number of cattle tested and the number of BSE-infected cattle detected.
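As an illustration of what such a stochastic surveillance model looks like, the Python sketch below simulates infected cattle and compares minimum testing ages. The incubation, removal-age and detection-window distributions are hypothetical placeholders, not Professor Sugiura's calibrated inputs.

```python
import numpy as np

# Illustrative sketch: how does the minimum testing age affect the share
# of BSE-infected cattle that are tested and detected? All distributions
# and parameters are assumptions for demonstration only.

rng = np.random.default_rng(7)
n = 100_000  # simulated BSE-infected cattle

age_infected   = rng.uniform(0, 12, n)           # months; assumed early-life infection
incubation     = rng.triangular(24, 60, 96, n)   # months (2-8 years, mode ~5 years)
age_at_onset   = age_infected + incubation
age_at_removal = rng.triangular(20, 30, 100, n)  # months; age slaughtered or died

detect_window = 6.0   # assumed: test detects only within 6 months of clinical onset
sensitivity   = 0.99  # assumed test sensitivity within that window

for min_test_age in (0, 21, 31, 41):  # minimum age at testing, months
    tested     = age_at_removal >= min_test_age
    detectable = age_at_removal >= (age_at_onset - detect_window)
    detected   = tested & detectable & (rng.random(n) < sensitivity)
    print(f"min age {min_test_age:>2} mo: "
          f"{tested.mean():5.1%} of infected cattle tested, "
          f"{detected.mean():5.2%} detected")
```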
@RISK uses Monte Carlo simulation in Microsoft Excel to perform risk analysis, applying mathematical techniques to help users better understand risks through quantitative analysis and improve decision-making. The software calculates the probability and impact of various possible outcomes, so users can see the risk and likelihood associated with each scenario by tracking and objectively calculating many potential scenarios from the same underlying formulas.
The thinking behind the BSE testing model was as follows:
Four surveillance strategies were explored for cattle slaughtered for human consumption, with the minimum age at testing set at 0, 21, 31, or 41 months. Three surveillance strategies were explored for fallen stock, with the minimum age at testing set at 24, 31, or 41 months. Increasing the minimum age of testing from 0 to 21 months for both dairy cattle and Wagyu beef cattle had very little impact on the probability that a BSE-infected animal slaughtered for human consumption would be detected. Although increasing the minimum age at testing from 21 to 31 or 41 months would lead to fewer slaughtered animals being tested, the impact on the probability of detecting infected animals would be insignificant. The probability of infected Wagyu-Holstein crosses and Holstein steers being detected at slaughter or as fallen stock would be very low under all surveillance strategies.
Professor Sugiura said about @RISK, “The key point is that without having to learn any new programming language we are able to construct models right in Microsoft Excel and process them visually.” The insights provided by @RISK in Professor Sugiura’s work enable researchers to rule out testing age as an important factor so they can focus on other, more effective factors.
Originally published: Dec. 17, 2021
Updated: June 7, 2024
Many California produce farm operations use a rule-of-thumb to determine a hedge ratio for their seasonal productions. They often aim to contract 80% of their crop in advance to buyers at set prices, leaving the remaining 20% to be sold at spot prices in the open market. The rationale for this is based on many years of experience that indicates costs and a reasonable margin can be covered with 80% of production hedged by forward contracts. The hope is the remaining 20% of production will attract high prices in favorable spot markets, leading to substantial profits on sales. Of course, it is understood spot prices might not be favorable, in which case any losses could be absorbed by the forward sales.
Since the 2008 recession, agricultural lenders and government regulators have recognized that many farm operators need to manage the risks to their margins and free cash flows, rather than simply focusing on revenue risks. A more quantitative analysis is needed to determine risks in the agricultural industry.
Agribusiness experts from Cal Poly conducted a risk management analysis using @RISK, and found the 80% hedge ratio rule-of-thumb is not as effective as assumed. Growers do not profit from spot market sales over the long run. The analysis shows growers are better off in the long-term selling as much of their product as possible using forward contracts.
Agriculture in California is big business. In 2013, nearly 80,000 farms and ranches produced over 400 commodities – the most valuable being dairy, almonds, grapes, cattle, and strawberries – worth $46.4 billion. Almost half of this value came from exports. The state grows nearly half of the fruits, nuts, and vegetables consumed in the United States. Yet agriculture is traditionally one of the highest risk economic activities.
Steven Slezak, a Lecturer in the Agribusiness Department at Cal Poly, and Dr. Jay Noel, the former Agribusiness Department Chair, conducted a case study on an iceberg lettuce producer that uses the rule-of-thumb approach to manage production and financial risks. The idea was to evaluate the traditional rule-of-thumb method and compare it to a more conservative hedging strategy.
The grower uses what is known as a ‘hedge’ to lock in a sales price per unit for a large portion of its annual production. The hedge consists of a series of forward contracts between the grower and private buyers which set in advance a fixed price per unit. Generally, the grower tries to contract up to 80% of production each year, which stabilizes the grower’s revenue stream and covers production costs, with a small margin built in.
The remaining 20% is sold upon harvest in the ‘spot market’ – the open market where prices fluctuate every day, and iceberg lettuce can sell at any price. The grower holds some production back for spot market sales, which are seen as an opportunity to make large profits. “The thinking is, when spot market prices are high, the grower can more than make up for any losses that might occur in years when spot prices are low,” says Slezak. “We wanted to see if this is a reasonable assumption. We wanted to know if the 80% hedge actually covers costs over the long-term and if there are really profits in the spot market sales. We wanted to know if the return on the speculation was worth the risk. We found the answer is ‘No’.”
This is important because growers often rely on short-term borrowing to cover operational costs each year. If free cash flows dry up because of operational losses, growers become credit risks, some cannot service their debt, agricultural lending portfolios suffer losses, and costs rise for everybody in the industry. Is it a sound strategy to swing for the fences in the expectation of gaining profits every now and then, or is it better to give up some of the upside to stabilize profits over time and to reduce the probability of default resulting from deficient cash flows?
Slezak and Noel turned to @RISK to determine an appropriate hedge ratio for the grower.
For inputs, they collected data on cultural and harvest costs. Cultural costs are the fixed costs “necessary to grow product on an acre of land,” such as seeds, fertilizer, herbicides, water, fuel etc., and tend to be more predictable. The researchers relied on the grower’s historical records and information from county ag commissioners for this data.
Harvest costs are much more variable, and are driven by each season’s yield. These costs include expenses for cooling, palletizing, and selling the produce. To gather data on harvest costs for the @RISK model, Slezak and Noel took the lettuce grower’s average costs over a period of years along with those of other producers in the area, and arrived at an average harvest cost per carton of iceberg lettuce. These costs were combined with overhead, rent, and interest costs to calculate the total cost per acre. Cost variability is dampened due to the fact that fixed costs are a significant proportion of total costs, on a per acre basis.
The next input was revenue, defined as yield per acre multiplied by the price of the commodity. Since cash prices vary, the grower’s maximum and minimum prices during the previous years were used to determine an average price per carton. Variance data were used to construct a distribution based on actual prices, not on a theoretical curve.
To model yield, the grower’s minimum and maximum yields over the same period were used to determine an average. Again, variance data were used to construct a distribution based on actual yields.
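A minimal sketch of this step, using triangular distributions as stand-ins for the empirically constructed price and yield distributions described above (all figures are assumed):

```python
import numpy as np

# Sketch: turning historical minimum / most likely / maximum price and yield
# figures into sampling distributions. The triangular form and the numbers
# below are illustrative stand-ins for the study's empirically fitted inputs.

rng = np.random.default_rng(1)
n = 10_000

price_min, price_mode, price_max = 6.0, 9.5, 18.0   # $/carton, assumed
yield_min, yield_mode, yield_max = 800, 950, 1100   # cartons/acre, assumed

price = rng.triangular(price_min, price_mode, price_max, n)
yld   = rng.triangular(yield_min, yield_mode, yield_max, n)

revenue_per_acre = price * yld
print(f"mean revenue/acre: ${revenue_per_acre.mean():,.0f}")
print(f"5th-95th pct:      ${np.percentile(revenue_per_acre, 5):,.0f} - "
      f"${np.percentile(revenue_per_acre, 95):,.0f}")
```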
StatTools, included in the DecisionTools Suite, was used to create these distribution parameters, and @RISK was used to create a revenue distribution and the inputs for the model. With the cost and revenue simulation complete, the study could turn to the hedge analysis.
Since the question in the study is about how best to manage margin risk – the probability that costs will exceed revenues – to the point where cash flows would be insufficient to service debt, it was necessary to compare various hedge ratios at different levels of debt to determine their long-term impact on margins. @RISK was used to simulate combinations of all cost and revenue inputs using different hedge ratios between 100% hedging and zero hedging. By comparing the results of these simulations in terms of their effect on margins, it was possible to determine the effectiveness of the 80% hedging rule of thumb and the value added by holding back 20% of production for spot market sales.
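The sketch below illustrates the logic of that comparison: a hedged share of production sells at a fixed forward price, the remainder sells at an uncertain spot price, and margins are compared across hedge ratios. All prices, costs and distribution shapes are assumptions for illustration, not the study's inputs.

```python
import numpy as np

# Sketch of the hedge-ratio comparison: margin = revenue - cost, where a
# hedged share of production sells at a fixed forward price and the rest
# sells at an uncertain spot price. All figures are illustrative assumptions.

rng = np.random.default_rng(2)
n = 10_000

yld           = rng.triangular(800, 950, 1100, n)   # cartons/acre
spot_price    = rng.triangular(4.0, 8.0, 22.0, n)   # $/carton, skewed spot market
forward_price = 9.0                                  # $/carton, contracted in advance
cost_per_acre = rng.normal(7500, 300, n)             # $/acre, mostly fixed

for hedge in (0.0, 0.8, 1.0):
    revenue = yld * (hedge * forward_price + (1 - hedge) * spot_price)
    margin  = revenue - cost_per_acre
    print(f"hedge {hedge:>4.0%}: mean margin ${margin.mean():6,.0f}/acre, "
          f"P(loss) {(margin < 0).mean():5.1%}, "
          f"5th pct ${np.percentile(margin, 5):7,.0f}")
```

In this setup, raising the hedge ratio trades away some upside in exchange for a much lower chance that margins turn negative, which is the comparison the study ran at full scale.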
Unsurprisingly, with no hedge involved and all iceberg lettuce being sold on the spot market, the simulation showed that costs often exceeded revenues. When the simulation hedged all production, avoiding spot sales completely, costs rarely exceeded revenues. Under the 80% hedge scenario, revenues exceeded costs in most instances, but the probability of losses significant enough to leave cash flows insufficient to service debt was uncomfortably high.
It was also discovered that the 20% of production held back for the purpose of capturing high profits in strong markets generally resulted in reduced margins. Only in about 1% of the simulations did the spot sales cover costs, and even then the resulting profits were less than $50 per acre. Losses due to this speculation could be as large as $850 per acre. A hedging strategy designed to yield home runs instead resulted in a loss-to-gain ratio of 17:1 on the unhedged portion of production.
Slezak and his colleagues are reaching out to the agribusiness industry in California and throughout the Pacific Northwest to educate growers on the importance of margin management in an increasingly volatile agricultural environment. “We’re trying to show the industry it’s better to manage both revenues and costs, rather than emphasizing maximizing revenue,” he says. “While growers have to give up some of the upside, it turns out the downside is much larger, and there is much more of a chance they’ll be able to stay in business.”
In other words, the cost-benefit analysis does not support the use of the 80% hedged rule-of-thumb. It’s not a bad rule, but it’s not an optimal hedge ratio.
Professor Slezak is a long-time user of @RISK products, having discovered them in graduate school. In 1996, “a finance professor brought the software in one day and said, ‘if you learn this stuff you’re going to make a lot of money,’ so I tried it out and found it to be a very useful tool,” he says. Professor Slezak has used @RISK to perform economic and financial analysis on a wide range of problems in industries as diverse as agribusiness, energy, investment management, banking, interest rate forecasting, education, and in health care.
Originally published: Oct. 13, 2022
Updated: June 7, 2024
The US Government spent $24 billion on farm programs in 2000. The benefits were distributed through a variety of income-enhancing and risk-reducing policies. For instance, some programs provided price protection, others a direct subsidy, and others subsidized the purchase of crop insurance.
How do these programs individually and collectively reduce agricultural risk? Are these programs having the desired effects? Is the money helping producers reduce risk and thereby providing a disincentive to purchase crop insurance? And are there better ways to address agricultural risk?
Researchers at Cornell University and Purdue University recently completed a study that assesses the impacts of US farm programs on farm returns and answers some of the above questions. The researchers used Lumivero's (previously Palisade) @RISK to create a simulation model that illustrates the effects of government policy tools on farm incomes.
To assess the impacts of government policy tools on farmland risk and return, the researchers developed an economic model for farm income and expenses based on a representative parcel of land and crop mix. @RISK allowed the researchers to model the uncertainties associated with crop yield and price. After running baseline simulations, the researchers added the individual farm programs into the model to determine their impacts. Finally, they combined all the programs and crop insurance into the model and compared the simulation outcomes to determine the impact of the various payment and subsidy mechanisms.
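A simplified sketch of this layering approach is shown below, using a generic price floor and a fixed direct payment as stand-ins for the actual policy instruments, and assumed distributions in place of the study's calibrated ones.

```python
import numpy as np

# Sketch of layering policy instruments onto a baseline farm-income model.
# A price floor and a fixed direct payment are generic examples; the
# distributions and dollar figures are assumptions, not the study's inputs.

rng = np.random.default_rng(3)
n = 10_000

yld   = rng.normal(160, 25, n).clip(min=0)                     # bu/acre
price = rng.lognormal(mean=np.log(3.5), sigma=0.25, size=n)    # $/bu
cost  = 480.0                                                  # $/acre, assumed constant

baseline = yld * price - cost

price_floor    = 3.2     # $/bu, assumed support level
direct_payment = 25.0    # $/acre, assumed
with_programs = yld * np.maximum(price, price_floor) - cost + direct_payment

for name, income in (("baseline", baseline), ("with programs", with_programs)):
    print(f"{name:>14}: mean ${income.mean():6.0f}/acre, "
          f"std ${income.std():5.0f}, P(loss) {(income < 0).mean():.1%}")
```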
The @RISK simulation model demonstrated that a combination of all government programs would raise average farm incomes by almost 45%. Additionally, the programs would reduce the economic risks associated with farming by half. Most importantly, the model allowed researchers to examine how the programs interact with one another to alter the return distribution.
Producers must make decisions regarding cash rental bids, crop mixes, and even farmland purchases based upon expected returns. These decisions are based on the expectations for market prices, yields and costs – all uncertain elements.
Assistant Professor Brent Gloy of Cornell University’s Applied Economics and Management Department was one of the researchers. “@RISK was vital to the simulation model. It allowed us to incorporate uncertainties and run random simulations on the various scenarios.” He adds, “@RISK’s ability to correlate distributions of random variables was essential to the model. Additionally, we used @RISK’s output statistics to compare the various model scenarios.”
The study quantifies how government programs impact each other and subsidized crop insurance. According to Dr. Gloy, “Our results indicate that the risk reduction provided by the standard programs significantly reduce the value that risk averse producers derive from crop insurance programs.” He adds, “@RISK was instrumental to the simulation model. It allowed us to incorporate uncertainties and correlations, and to systematically evaluate each of the farm policy tools.”
The study was recently published in the Review of Agricultural Economics. For more information about the study, contact Brent Gloy at 607-255-9822 or bg49@cornell.edu.
Originally published: Oct. 28, 2022
Updated: May 2, 2024
Healthcare industry consultant Barbara Tawney had a tough task ahead of her. She needed to forecast patient loads for the entire metropolitan hospital system of Richmond, Virginia. Every hospital has a finite number of beds and therefore, a maximum capacity. But unpredictable patient demand throughout the system had resulted in two occasions when all nine hospitals in the system had reached capacity and patients had to be diverted to healthcare facilities outside the area. To figure out how to anticipate and prepare for surges in patient load, Tawney turned to Palisade’s StatTools and NeuralTools data analysis products.
Tawney maintains an active consulting practice and is a Ph.D. candidate in the Systems and Information Engineering Department of the University of Virginia’s School of Engineering and Applied Science. She specializes in data analysis, particularly large data sets, and could have easily written her own software for the patient load project. Instead, she began her work with Palisade’s StatTools to tackle the mountain of data she was facing.
With the cooperation of Virginia Health Information (VHI), a non-profit organization that collects and warehouses all the healthcare data statewide, she was granted limited access to metropolitan Richmond patient data for the four years from 2000 to 2003. Time series data were derived from hospital billing information for about 600,000 patients being treated at area hospitals during 2000-2003. The patient level data (PLD) were detailed in 78 different fields, including dates of admission and discharge, diagnosis, and length of stay.
According to Tawney, “I was looking for a user-friendly way to do autocorrelation, and a colleague recommended StatTools to me.” She created time series for the data by “binning” the PLD according to the dates and times of activity for each case. The time series data were analyzed for daily, weekly and event trends. As Tawney observes, “StatTools does the time series autocorrelation in a user-friendly way that is quick and easy. You know you got it right the first time. I have box plots and other statistics that I did with it, and they were easy to refine for publication. StatTools did what I needed without the time and expense of a heavy-duty stats package.”
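The sketch below illustrates the same autocorrelation step on a synthetic daily admissions series (the actual Richmond billing data are not public); the weekly cycle and Poisson counts are assumptions for demonstration.

```python
import numpy as np

# Sketch: bin patient-level records into daily admission counts and examine
# the autocorrelation at daily and weekly lags. The synthetic series below,
# with an assumed weekly cycle, stands in for the real billing data.

rng = np.random.default_rng(4)
days = 4 * 365
weekly_cycle = 20 * np.sin(2 * np.pi * np.arange(days) / 7)
admissions = rng.poisson(lam=200 + weekly_cycle)

def autocorr(x, lag):
    """Sample autocorrelation of series x at a given lag."""
    x = x - x.mean()
    return np.dot(x[:-lag], x[lag:]) / np.dot(x, x)

for lag in (1, 7, 14, 365):
    print(f"lag {lag:>3} days: autocorrelation {autocorr(admissions, lag):+.2f}")
```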
After she analyzed the historical data with StatTools, it was time for Tawney to predict future patient loads using NeuralTools. She began by “training” NeuralTools on the existing data. Additional daily, weekly, and event trends, along with unusual days, stood out during the NeuralTools analyses. For instance, Tawney determined that patient load peaked at mid-week during most weeks of the year. Holiday periods also have a different, distinctive pattern. The number of patients entering the hospitals just before and during the Thanksgiving holiday was lower than normal but was followed on Monday by an influx of patients that stretched the facility’s resources. Similarly, patient load dropped markedly throughout the double holiday of Christmas and New Year’s. But each year there was a significant surge in demand on the Monday-Tuesday of the first full week of the New Year.
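As a rough analogue of that forecasting step, the sketch below trains a small neural network (scikit-learn's MLPRegressor rather than NeuralTools) on calendar features of a synthetic load series; the holiday window and load pattern are assumptions for illustration only.

```python
import numpy as np
from sklearn.neural_network import MLPRegressor

# Sketch: predict daily patient load from calendar features (day of week,
# holiday flag). Synthetic data stands in for the hospital-system series.

rng = np.random.default_rng(5)
days = np.arange(4 * 365)
dow = days % 7
holiday = np.isin(days % 365, [359, 360, 361, 0, 1, 2]).astype(float)  # assumed holiday window

# Synthetic load with a weekly cycle, a holiday dip, and noise
load = 200 + 15 * np.sin(2 * np.pi * dow / 7) - 40 * holiday + rng.normal(0, 10, days.size)

X = np.column_stack([np.eye(7)[dow], holiday])  # one-hot day of week + holiday flag
model = MLPRegressor(hidden_layer_sizes=(16,), max_iter=3000, random_state=0)
model.fit(X[:-60], load[:-60])                  # hold out the last 60 days

pred = model.predict(X[-60:])
mae = np.abs(pred - load[-60:]).mean()
print(f"mean absolute error on held-out days: {mae:.1f} patients")
```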
For hospital planners and administrators, Tawney’s findings provide the basis for predicting patient load throughout the Richmond metropolitan hospital system. These predictions range from a few days to several months. Being able to predict the patient demand allows for more efficient allocation of system resources, including scheduling of services. According to Tawney, the project also led to another important discovery: NeuralTools is so accessible that it can stay on the job long after she has left. “Most folks in the medical community are not engineers,” she says, “but they can use NeuralTools to facilitate their own forecasts of future admissions, current patient demands, and the need for timely discharges using existing patient billing data. To bring this kind of forecasting to non-engineering managers is just awesome!”
Australian wool is a big deal—or more precisely, it’s a big deal made up of lots of little deals. With an annual value of AUD 3 billion, Australia’s wool makes up 70 percent of the world’s raw wool used in clothing. It is still marketed in lots by the traditional mode, and each year more than 450,000 farm lots are sold at open cry auction. It’s a very risky marketplace for both the farmers and the buyers who contract in advance to deliver wool to processors.
Although the industry sets a market indicator of the price for different types of wool, there is a need for a numerical model to estimate probable price for each individual lot using its own measurements. Models based on regression analysis have worked satisfactorily for very common types of wool, but can’t be applied to the many other types that come on the market. Recently Kimbal Curtis, a wool industry specialist with the Western Australia Department of Agriculture, began to use NeuralTools to predict market prices for farm lots of less abundant wool types.
It was an ideal challenge for NeuralTools. Because of detailed market recording, the number of records was large but there were also missing data; prices were dynamic; and the relationship of price to wool characteristics was nonlinear and interactive, as well as being dynamic. Curtis began his model by training NeuralTools on a set of nearly 6,000 records from a six-month period. For this purpose he established independent category variables for such factors as place of sale, date, and qualitative aspects of the wool that affect price. He set as independent numerical variables the measurable physical characteristics of the wool. He then used the NeuralTools feature Best Net Search to determine that the best computational mode was Generalized Regression Neural Networks.
After testing his neural net on over 1,500 cases, Curtis evaluated the predictive capability of his emerging model by comparing the model results with real-world prices and by using the Variable Impact Analysis function in NeuralTools to determine which variables exerted the most influence on the model’s accuracy. He was able to pinpoint the diameter of the wool fiber as pivotal and simplify his model by discarding non-influential variables. Curtis then refined his model by using the Live Prediction feature – which updates neural network predictions in real-time as input data changes – to investigate the relationships among price, length of wool staple, and strength of wool staple for various diameters of fiber. A few more similar refinement steps and the result is a spreadsheet model that not only produces reliable predictions of wool prices but allows buyers and sellers to explore the price implications of the independent variables.
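A generalized regression neural network is, in essence, a kernel-weighted average over the training cases. The sketch below reproduces that idea on synthetic wool-lot data; the features, prices and bandwidth are assumptions for illustration, not Curtis's model.

```python
import numpy as np

# Sketch of a GRNN-style (Nadaraya-Watson kernel regression) price prediction
# from a couple of numeric wool measurements. Synthetic data and an assumed
# bandwidth are used purely for illustration.

rng = np.random.default_rng(6)
n_train = 500

fibre_diameter = rng.normal(20, 2.5, n_train)   # microns
staple_length  = rng.normal(85, 10, n_train)    # mm
price = 2000 - 70 * fibre_diameter + 3 * staple_length + rng.normal(0, 40, n_train)

X_train = np.column_stack([fibre_diameter, staple_length])
y_train = price

def grnn_predict(x, X, y, sigma=1.0):
    """Kernel-weighted average of training targets (GRNN-style prediction)."""
    mu, sd = X.mean(axis=0), X.std(axis=0)
    Xs, xs = (X - mu) / sd, (x - mu) / sd        # standardize features
    d2 = ((Xs - xs) ** 2).sum(axis=1)
    w = np.exp(-d2 / (2 * sigma ** 2))
    return (w @ y) / w.sum()

query = np.array([19.0, 90.0])                   # a hypothetical lot
print(f"predicted price (synthetic units): "
      f"{grnn_predict(query, X_train, y_train, sigma=0.5):.0f}")
```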
“NeuralTools ably dealt with the complexities of the problem,” Curtis reported, “freeing me to concentrate on the relationships it found and to compare these with our experience of the wool market.” And he had high praise for the software’s user-friendliness. “The thing I really value in an analytical package is the ability to use it to solve real problems without the process itself becoming a problem. Once I understood the analytical options and chose the appropriate set for my purpose, NeuralTools delivered.”
Infectious disease is an important cause of lost production and profits to beef cow-calf producers each year. Beef producers commonly import new animals into their herds, but often do not properly apply biosecurity tools to economically decrease risk of disease introduction. Dr. Michael Sanderson, a professor of Beef Production and Epidemiology at Kansas State University’s (KSU) College of Veterinary Medicine, wanted to address this issue by developing a risk management tool for veterinarians and beef cow-calf producers to assist in identifying biologically and economically valuable biosecurity practices, using @RISK.
The college was established in 1905, and has granted more than 5,000 Doctor of Veterinary Medicine degrees. Departments within the College of Veterinary Medicine include anatomy and physiology, clinical sciences, diagnostic medicine, and pathobiology. The college's nationally recognized instructional and research programs provide the highest standards of professional education. A rich, varied, and extensive livestock industry in the region, a city with many pets and a zoo, and referrals from surrounding states provide a wealth of clinical material for professional education in veterinary medicine.
Reproductive disease is an important cause of lost production and economic return to beef cow-calf producers, causing estimated losses of $400 to $500 million per year. Because of the complex nature of the production system, the biologically and economically optimal interventions to control disease risk are not always clear. Dr. Sanderson and his team (including Drs. Rebecca Smith and Rodney Jones) utilized @RISK to model the probability and economic costs of disease introduction and the cost and effectiveness of management strategies to decrease that risk.
“For this project, @RISK was essential to model variability and uncertainty in risk for disease introduction and impact following introduction, as well as variability and uncertainty in effectiveness of mitigation strategies,” said Dr. Sanderson. “Further, @RISK was crucial for sensitivity analysis of the most influential inputs to refine the model and to identify the most important management practices to control risk. It was also valuable to aggregate results into probability distributions for risk and economic cost over one-year and ten-year planning periods.”
The project modelled the risk of introduction of the infectious disease Bovine Viral Diarrhea (BVD) into the herd, the impact of disease on the herd (morbidity, mortality, abortion, culling, lost weight) and economic control costs. These risks were aggregated over ten years to identify the optimal management strategy to minimize cost from BVD accounting for both production costs and control costs.
Probability distributions included:
Target probabilities were used to estimate the probability of exceeding a given cost over one and ten years, to express that risk as a single number for each management option, and to generate descending cumulative probability distributions for exceeding any particular cost value.
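The sketch below shows how such a descending cumulative (exceedance) curve can be read off a set of simulated costs; the lognormal cost distribution is an assumed stand-in for the model's aggregated BVD cost output.

```python
import numpy as np

# Sketch: for each cost threshold, the probability that simulated annual
# cost exceeds it (a descending cumulative curve). The lognormal costs
# below are an assumed placeholder for the model's aggregated output.

rng = np.random.default_rng(8)
costs = rng.lognormal(mean=np.log(20_000), sigma=0.8, size=10_000)  # $/herd/year

for threshold in (10_000, 25_000, 50_000, 100_000):
    p_exceed = (costs > threshold).mean()
    print(f"P(cost > ${threshold:>7,}) = {p_exceed:5.1%}")
```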
As a result of the risk identification insight gained from the research, Dr. Sanderson and his team were able to improve disease management and controls by identifying:
“Our utilization of @RISK gave us the ability to account for complex aggregation of inputs and their variability and uncertainty to produce full-outcome probability distributions for more informed decision making,” said Dr. Sanderson. “Further, the ability to use research data from multiple parts of the beef production system and combine those results into a model that accounts for the complexity of the production systems allows recognition of emergent phenomena and decision making based on the full system, rather than only one part. The flexibility to customize outputs provided the most valuable information for decision making.”
Palisade’s risk analysis software @RISK is being used by aquatic veterinary surgeons to demonstrate the practice of biosecurity to aquatic farmers. The method helps to reduce the potential for disease in animals without incurring the significant costs of extensive testing. Only a small number of data inputs are required for @RISK, with thousands of simulations then presenting accurate results that inform decision-making.
It is estimated that the human population will be nine billion by 2030. The Food and Agriculture Organization (FAO) believes that aquaculture, which currently provides around half of the fish and shellfish eaten around the world, is the only agricultural industry with the potential to meet the protein requirements of this population. However, one of the biggest constraints to achieving this is the depletion of stock levels through disease. In 1997, the World Bank estimated that annual losses amounted to $3 billion, and current figures suggest that 40 percent of insured losses are due to disease.
Biosecurity measures, which aim to prevent, control and ideally eradicate disease are regarded as essential. However, encouraging the adoption of these practices is often difficult due to the farmers’ levels of education, training, responsibility and perceived economic benefits. In addition, global estimates of disease losses may appear remote and irrelevant to farmers and producers faced with making a rational choice from scarce data and, often scarcer, resources.
Dr Chris Walster is a qualified veterinary surgeon with a long-standing interest in aquatic veterinary medicine, and is the secretary to the World Aquatic Veterinary Medical Association (WAVMA). Having seen Palisade’s risk analysis tool, @RISK, demonstrated, he started using it to calculate the realistic risk of aquatic disease to farms, with a focus on cases where data inputs were limited.
@RISK’s capacity to present the calculations in graphs that are easy to understand also makes it straightforward for vets to show farmers disease risk probabilities. With this information readily available, the cost/benefit of disease prevention can be calculated, and farmers can make informed choices about whether to put controls in place.
For example, a farmer might plan to import 1000 fish to their farm. The cost to accurately determine the disease status of these fish may be uneconomic, but testing a small sample will not give sufficient evidence on which to base an informed purchase decision.
However, testing 30 of the fish and running further simulations using @RISK will give the probability of how many fish might be diseased if more were tested. In other words it provides the farmer with a more accurate picture of the risk connected to purchasing the stock.
If there is no information as to whether the fish carry a disease of interest, testing 30 of them would be expected to return the result that 15 are diseased and 15 are not (a disease prevalence of 0.5 must be assumed, giving a 50/50 probability). However, because tests are rarely 100% accurate, when interpreting a test result its validity, or how well the test performs, must also be accounted for. This requires knowing the test characteristics: sensitivity (the probability that a truly diseased fish tests positive) and specificity (the probability that a truly disease-free fish tests negative), along with the disease prevalence (or likelihood).
Introducing a sensitivity of 80%, for example, reduces the expected number of fish testing positive to twelve (15 x 0.8). Combining this with a specificity of 98%, the simulation is run 10,000 times to produce enough values, and these are used to produce a graph showing the likely minimum, maximum and mean prevalence of the disease.
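A minimal sketch of that calculation, using the same assumed prevalence, sensitivity and specificity as the example above:

```python
import numpy as np

# Sketch of the testing example: 30 fish sampled, assumed sensitivity 80%,
# specificity 98%, and an uninformed prior prevalence of 50%. The simulation
# shows the spread of positive test counts and the apparent prevalence.

rng = np.random.default_rng(9)
n_sims, n_fish = 10_000, 30
prevalence, sensitivity, specificity = 0.50, 0.80, 0.98

diseased = rng.binomial(n_fish, prevalence, n_sims)
# Diseased fish test positive with prob = sensitivity;
# healthy fish test positive with prob = 1 - specificity (false positives).
test_positives = (rng.binomial(diseased, sensitivity)
                  + rng.binomial(n_fish - diseased, 1 - specificity))

apparent_prevalence = test_positives / n_fish
print(f"mean positives out of 30: {test_positives.mean():.1f}")
print(f"apparent prevalence: mean {apparent_prevalence.mean():.2f}, "
      f"90% interval {np.percentile(apparent_prevalence, 5):.2f}-"
      f"{np.percentile(apparent_prevalence, 95):.2f}")
```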
This simple example helps to generate understanding amongst farmers that they do not need to undertake extensive testing programs to obtain accurate results about disease levels in fish.
Further evidence can be gathered by running more tests that supplement the @RISK distribution graphs with prior knowledge – facts that are already known and accepted. For example, international regulations make it illegal to transport sick animals. Therefore, if a particular disease shows obvious symptoms, it seems reasonable to assume (using human expertise) that the prevalence of the disease is no higher than 10%, or the seller would have noticed that the fish were sick and could not have sold them. Once again, only 30 fish are tested, but this time @RISK is used with a PERT distribution, with expert opinion supplying a minimum of 1%, a most likely value of 5% and a maximum of 10%. Running the @RISK simulation 10,000 times again produces a new set of values, and incorporating this prior knowledge can change the results significantly.
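The PERT variant can be sketched the same way; the standard conversion of minimum, most likely and maximum values into a scaled Beta distribution is shown below, with the same assumed test characteristics as before.

```python
import numpy as np

# Sketch of the PERT-distribution variant: expert opinion supplies a
# minimum (1%), most likely (5%) and maximum (10%) prevalence. A PERT
# distribution is a rescaled Beta whose shape parameters come from those
# three values; the test parameters are the same assumptions as before.

rng = np.random.default_rng(10)
n_sims, n_fish = 10_000, 30
lo, mode, hi = 0.01, 0.05, 0.10
sensitivity, specificity = 0.80, 0.98

# Standard PERT-to-Beta conversion
alpha = 1 + 4 * (mode - lo) / (hi - lo)
beta  = 1 + 4 * (hi - mode) / (hi - lo)
prevalence = lo + (hi - lo) * rng.beta(alpha, beta, n_sims)

diseased = rng.binomial(n_fish, prevalence)
test_positives = (rng.binomial(diseased, sensitivity)
                  + rng.binomial(n_fish - diseased, 1 - specificity))

print(f"mean assumed prevalence: {prevalence.mean():.3f}")
print(f"mean positives out of 30: {test_positives.mean():.2f}")
```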
With this knowledge, the farmer can now decide on the next course of action. They may decide they are happy with the potential risk and buy the fish. Equally they may want more certainty and therefore test more fish or use additional tests. Finally they may feel that the risk is too great and research other sources.
“@RISK enables farmers to reduce the risk of disease spreading amongst their animals whilst minimizing additional costs,” Walster explains: “For aquatic vets, the key is the graphs which allow us to demonstrate a complex probability problem quickly and simply in a way that is easy to understand and trust. These inform decision-making, thereby helping to boost the world’s aquatic stock whilst safeguarding farmers’ livelihoods.”
“This technique also potentially offers an economical method of assisting in the control of many diseases. Farmers undertake their own tests, with each of these providing incremental inputs so that the macro picture can be developed and acted upon,” concludes Walster.
The bacterium Listeria monocytogenes (L. monocytogenes) is a major cause of food poisoning, potentially leading to premature births, miscarriages, and even meningitis and other serious health problems. It is reported to occur mainly in unpasteurized natural cheese, meats, vegetables and fruits. In France, reported cases of L. monocytogenes in pasteurized liquid eggs have made it essential to better understand just what the risk is. Kewpie, one of Japan’s largest food ingredient manufacturers and a major egg product producer, teamed up with the University of Tokyo’s Graduate School of Agricultural and Life Sciences Research Center for Food Safety to develop a growth model of the bacteria in liquid eggs in order to better understand the risks.
From its inception, Kewpie has followed the principle that “good products are only made from good ingredients” and that food safety must always be pursued to the best of the company’s ability, explained Miho Okochi of the Microbial Laboratory in the Food Safety Division of Kewpie Corporation’s R&D Center. “In our laboratory we focus on technology and research of microorganisms necessary in product development, and microorganisms that may contaminate products,” he explained.
Since no prior research existed on contamination and growth in chicken eggs, the lab first studied the L. monocytogenes contamination rate in unpasteurized liquid eggs. The results showed a lower rate of contamination and lower bacteria counts compared to other livestock meat products. This led the researchers to believe that L. monocytogenes could be sufficiently killed in the normal pasteurization process for liquid eggs. However, the French reports showed L. monocytogenes present in liquid eggs even after the sterilization process, demonstrating there is definitely a chance of L. monocytogenes surviving and contaminating eggs, even after pasteurization.
“Because of this we decided to research the growth rate of L. monocytogenes in liquid eggs. We built a growth model based on L. monocytogenes counts in varying storage temperatures and times in liquid eggs so that we could understand under what conditions the risk of L. monocytogenes growth would increase,” explained Okochi.
“I had always wanted to create a growth model of food bacteria, but this was a bit much for one researcher to tackle. Growth modeling in the risk analysis of food microbiology is an important tool – and the research results are important for the entire industry,” he said. For help with the task, Okochi turned to Tokyo University’s Research Center for Food Safety, one of the few universities working in this field, to establish a joint research project.
The speed and extent of bacterial growth naturally vary, even when the same bacteria are applied to the same food sample. The growth parameters must be well understood if researchers are to make use of the data in actual food safety work. The challenge is that it is difficult to generate enough data to accurately determine average values, standard deviations and other statistically significant values, but by using Palisade’s @RISK software, researchers can use Monte Carlo simulation of the growth model coefficients to estimate the growth parameters. @RISK, an add-in to Microsoft Excel, uses Monte Carlo simulation to examine thousands of different possible scenarios in any quantitative model.
Okochi explained how they learned about @RISK: “We heard about the software from a university professor who was researching food safety. It is used in the risk assessment of food microbiology and is becoming more widely used in Japan.” @RISK has a number of benefits, he said: it has been adopted in many industries, it uses industry-standard Microsoft Excel, and the spreadsheet files can even be opened on computers without @RISK installed. There are a number of users in the food safety industry, and the availability of training in Japanese from Palisade is a great benefit. The @RISK software is also available in Japanese.
For this research, the experimental kinetic data were fitted to the Baranyi model, and growth parameters such as the maximum specific growth rate (µmax), maximum population density (Nmax), and lag time (λ) were estimated using Monte Carlo simulation with @RISK. As a result of estimating these parameters, the researchers found that L. monocytogenes can grow without spoilage below 12.2°C, so they then focused on storage temperatures below 12.2°C. Predictive simulations under both constant and fluctuating temperature conditions demonstrated high accuracy, and with this joint research model they were able to predict the growth of L. monocytogenes in pasteurized liquid eggs under refrigeration. The results of this research provide crucial information to the food safety industry.
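For readers unfamiliar with the Baranyi model, the sketch below shows its standard (Baranyi-Roberts) form with Monte Carlo sampling of the growth parameters; the parameter distributions are illustrative assumptions rather than the values estimated in this study.

```python
import numpy as np

# Sketch of the Baranyi-Roberts primary growth model with Monte Carlo
# sampling of its parameters (maximum specific growth rate, lag time,
# maximum population density). All parameter distributions are illustrative
# assumptions, not the values estimated in the Kewpie/University of Tokyo study.

rng = np.random.default_rng(11)
n = 10_000

def baranyi_ln_count(t, ln_n0, mu_max, lag, ln_nmax):
    """Baranyi-Roberts model; counts in natural log units, mu_max in 1/h."""
    A = t + (1.0 / mu_max) * np.log(np.exp(-mu_max * t)
                                    + np.exp(-mu_max * lag)
                                    - np.exp(-mu_max * (t + lag)))
    return ln_n0 + mu_max * A - np.log(1 + (np.exp(mu_max * A) - 1)
                                       / np.exp(ln_nmax - ln_n0))

# Sampled parameters at an assumed refrigeration temperature
mu_max = rng.normal(0.05, 0.01, n).clip(min=1e-4)   # 1/h
lag    = rng.normal(40.0, 10.0, n).clip(min=0.0)    # h
ln_n0, ln_nmax = np.log(1e1), np.log(1e8)           # initial and maximum counts (CFU/g)

t = 72.0  # hours of storage
log10_counts = baranyi_ln_count(t, ln_n0, mu_max, lag, ln_nmax) / np.log(10)
print(f"log10 CFU/g after {t:.0f} h: mean {log10_counts.mean():.2f}, "
      f"95th percentile {np.percentile(log10_counts, 95):.2f}")
```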
According to the researchers, the advantages of @RISK include the capability to run simulations even when the case is mathematically difficult. It is also easy to understand the simulation results, since researchers can visually verify the results using charts and graphs, making it extremely powerful and useful.
Professor Katsuaki Sugiura, professor and researcher at the Laboratory of Global Animal Resource Science at the Graduate School of Agriculture and Life Sciences, part of the University of Tokyo, praised the joint research as well as @RISK.
“Through Mr. Okochi’s research, we have been able to attain valuable results in a research field with little precedent, based on the development of a growth model of the L. monocytogenes bacteria in liquid eggs. I believe that @RISK will play a valuable role in the field of microbiology forecasting. As @RISK is used in fields besides food safety, since 2013 I have used it as one of our information processing tools with my students for risk analysis in other fields. The real benefit of @RISK is that it is an add-in for Microsoft Excel, so you do not need any complex programming language. I look forward to seeing more and more research results in various fields.”
With the advent of corporate farming, large agricultural companies have begun to apply the same kinds of risk analysis techniques to the powerful uncertainties of the natural world and market prices that their counterparts in the manufacturing and finance sectors use. And Prof. Ariadna Berger, professor of Farm Management at the University of Buenos Aires, has gone so far as to introduce a portfolio approach to assessing and balancing the risks and opportunities of large cash cropping operations in Argentina. She began using @RISK when she was a graduate student at Cornell University and reports that it is the perfect tool to show her clients how to manage the risks in their large-scale farming operations.
In her consulting for farming corporations, Prof. Berger creates portfolios analogous to the portfolios of stocks and bonds used by financial analysts. Just like investment portfolios, the idea is to spread risk via diversification, except that in agricultural portfolios she is creating a mix of climate regions, soils, crops, and cultivation practices. By planting different crops, both market and yield risks are reduced; by planting in different regions and with different cultivation practices, yield risk is reduced even more. Like so many of her counterparts in finance, Prof. Berger uses @RISK to simulate risks and rewards throughout her clients’ potential agricultural “holdings” and to help her clients compare those possible outcomes with potential results from other possible portfolios with different diversification schemes.
Farming operations are conditioned by extreme fluctuations of weather and market price. Fortunately, Argentina has a number of climate zones, and farmers can choose from a number of crops to plant: mainly wheat, soybeans, corn or sunflowers. This situation allows farmers to spread the risk strategically.
Professor Ariadna Berger
Farm Management, University of Buenos Aires
According to Prof. Berger, the big challenge in creating her simulations is that, due to continually changing agricultural practices, it has been difficult to get data based on similar practices and technology over a long enough period of time to generate distributions. The way she compensates for the lack of historical data is by using agronomic simulation models to generate yield data to enter as @RISK distributions. These models are based on approximately 30 years of data on soil, water, nutrients, plant variety, and planting methods, and generate simulated yields that can be used to create a distribution. However, agronomic simulation models predict yields based only on restrictions for water and nutrients, while in the field there are other factors reducing yields, such as hail, frost, pests or diseases. Prof. Berger accommodates this limitation by complementing the yield data generated by the agronomic simulation models with other distributions for weather and weather-related random variables, which are based on historical data. In this way, yields simulated in each @RISK iteration are a combination of the distribution generated by the agronomic simulation models and of other random variables that may affect yield.
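A simplified sketch of that combination step is shown below, with an assumed agronomic yield distribution and hypothetical hail and frost events standing in for the calibrated inputs.

```python
import numpy as np

# Sketch: combine an agronomically simulated yield distribution with
# separately modelled weather-related shocks (hail, frost) in each
# iteration. Yields, prices and event probabilities are assumptions
# for illustration, not Prof. Berger's inputs.

rng = np.random.default_rng(12)
n = 10_000

# Yield distribution as it might come from an agronomic simulation model (t/ha)
base_yield = rng.normal(3.2, 0.6, n).clip(min=0)

# Weather-related loss events, drawn from assumed historical frequencies
hail  = rng.random(n) < 0.05           # 5% of seasons
frost = rng.random(n) < 0.10           # 10% of seasons
hail_loss  = np.where(hail,  rng.uniform(0.3, 0.9, n), 0.0)  # fraction of yield lost
frost_loss = np.where(frost, rng.uniform(0.1, 0.4, n), 0.0)

final_yield = base_yield * (1 - hail_loss) * (1 - frost_loss)

price = rng.lognormal(np.log(250), 0.2, n)   # $/t, assumed
revenue = final_yield * price                # $/ha

break_even = 500.0                           # $/ha, assumed cost of production and rent
print(f"mean revenue: ${revenue.mean():.0f}/ha, "
      f"P(revenue < break-even): {(revenue < break_even).mean():.1%}")
```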
Finally, she rolls land and crop costs into this complex mix, and @RISK suggests answers to such questions as: How much land should be assigned to each crop? How much land should be leased? And how many climate zones should the portfolio include? This information is crucial to her clients because farming has always been a chancy business, and large-scale farming involves large-scale downturns and upswings with large-scale investments at stake. Investment in crop costs and land rent depends on the crops and regions, but on average it can be estimated at around US $125 per hectare. Some portfolios can be as large as 100,000 hectares or more.
Because of the unpredictable nature of agriculture, serious planning can only happen if yield and market risks are considered. “Decisions made exclusively on the grounds of expected returns can be misleading and generate an unbearable risk exposure,” says Prof. Berger. “By accounting for a nearly limitless variety of uncertainties in our work, @RISK adds tremendous value. It easily integrates in the Excel programs with which we evaluate portfolios and helps us to make far better decisions on how to structure our agricultural portfolios.”
Integrated Multi-Trophic Aquaculture (IMTA) refers to managed, multi-species systems which recycle the byproducts of one aquatic species as nutritional inputs for another. These systems aim to improve environmental management, and increase harvest value through product diversification and recycling of nutrients. However, for these benefits to be realized, all the interacting components must be understood and carefully optimized. Dr. Gregor Reid, a senior research scientist with the University of New Brunswick, has used @RISK to determine the ideal ratios of kelps to absorb fish waste matter from a salmon IMTA system, as a means to better help aquaculturists improve the efficiency of seafood farming operations. A paper detailing this work appeared in the journal Aquaculture in May of 2013.
Aquaculture has become a booming industry in many regions of the world. According to the World Bank, nearly two-thirds of the seafood we consume will be farm-raised in 2030. However, as with all forms of intensive food production, commercial aquaculture is not immune to the potential for environmental impacts. Modern intensive fish farms produce large amounts of nutrient waste, both as solid-organic (feces and waste feed), and dissolved (carbon dioxide, ammonia and phosphate). If a farm is poorly located with limited flushing, or nutrient loading exceeds environmental assimilative capacity, algal blooms (i.e. eutrophication) and low oxygen may result, impacting the localized environment. Since these nutrient forms are the same as natural inputs for shellfish that filter organic particulates, and for kelps that absorb soluble inorganic nutrients, this provides an ideal opportunity to mitigate fish waste, while augmenting nutritional inputs for other marketable culture species.
While the principle of growing multiple aquatic species in close contact has been done informally for thousands of years, “We’re now on a much bigger scale,” says Reid. “We’re now trying to apply the historical benefits of polyculture [the simultaneous cultivation of different species] to the typical large monocultures seen in aquaculture today, by connecting adjacent culture groups by nutrient transfer through water.”
The system Reid works on, located in the Quoddy region of the Bay of Fundy, Canada, involves farmed Atlantic salmon and two different species of kelp: winged kelp (Alaria esculenta) and sugar kelp (Saccharina latissima). Salmon are a valuable seafood species, while kelps are used in a broad array of consumer products. If grown in close proximity and in the right ratios, these two species can benefit each other, along with the environment. The extra nutrient availability can increase kelp growth, resulting in a more profitable harvest. In turn, with enough kelp, oxygen levels can be boosted and nutrient absorption can reduce the impact of fish waste on the surrounding aquatic environment.
However, finding the appropriate ratios of salmon to kelp to achieve these nutrient transfer benefits is tricky, which makes large-scale commercial IMTA operations difficult to execute. While researchers have previously attempted to determine appropriate ratios, “past recommendations have been in algae per square meter, which doesn’t really mean much to someone without a lot of experience in this field,” says Reid. His goal was to create a straightforward measure that is easily understandable and applicable to an IMTA operation. To do this, he created a ratio model that reports the weight ratio of harvested seaweeds required to sequester an equivalent soluble nutrient load per unit growth of fish. More specifically, for every x kg of salmon grown, y kg of kelp would need to be harvested to remove the same amount of nutrients produced during the x kg of salmon growth.
Reid used a semi-stochastic approach for data that has high variability, such as nutrient loading estimates, in order to quantify uncertainty in the model.
Reid gathered inputs from three diverse sources: the commercial industry, academic literature, and field research. Rather than use single static values for the model inputs, Reid used stochastic distributions for several inputs with ranges of uncertainty, such as the digestibility of salmon feed components (e.g. protein, fats, carbohydrates) and the nutrient content of IMTA kelps. Known inputs or easily determined values, such as feed composition (listed on all feed bags), could be left as static inputs. Model inputs with sufficient empirical sample data were fit with @RISK’s distribution fitting functions, testing 15 different theoretical distribution types and using maximum likelihood estimators to rank the best fits.
Reid had to be careful when considering input data. “Often, you’re trying to fit a theoretical data distribution, but there are certain properties in a theoretical data distribution that won’t exist in real life,” he explains. “There are distributions that assume the tails go on forever, but that’s never going to happen. So you have to apply data filters on those to get more realistic results.”
Reid used @RISK to run a simulation of 10,000 iterations using these inputs. The results gave a weight ratio model, with ratio estimations for each key nutrient, as well as oxygen supply potential for both species of kelp (see graphs).
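The sketch below illustrates the structure of such a weight-ratio calculation for a single nutrient (nitrogen); every input value is an illustrative assumption rather than a figure from the published paper.

```python
import numpy as np

# Sketch of the weight-ratio idea: for every kilogram of salmon growth,
# how many kilograms of harvested kelp are needed to remove the dissolved
# nitrogen released? All inputs (feed conversion, nitrogen contents, loss
# fractions) are illustrative assumptions, not the published values.

rng = np.random.default_rng(13)
n = 10_000

fcr            = rng.triangular(1.0, 1.2, 1.4, n)   # kg feed per kg salmon growth
feed_n_content = rng.normal(0.065, 0.005, n)        # kg N per kg feed
retained_n     = rng.uniform(0.35, 0.45, n)         # fraction of feed N retained in fish
dissolved_frac = rng.uniform(0.5, 0.7, n)           # fraction of waste N released dissolved

kelp_n_content = rng.normal(0.003, 0.0005, n).clip(min=1e-4)  # kg N per kg wet kelp

dissolved_n_per_kg_fish = fcr * feed_n_content * (1 - retained_n) * dissolved_frac
kelp_per_kg_fish = dissolved_n_per_kg_fish / kelp_n_content

print(f"kg kelp harvest per kg salmon growth: "
      f"median {np.median(kelp_per_kg_fish):.1f}, "
      f"90% interval {np.percentile(kelp_per_kg_fish, 5):.1f}-"
      f"{np.percentile(kelp_per_kg_fish, 95):.1f}")
```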
In order for a commercial-scale salmon farm to have all its fish waste fully absorbed by kelp, the number of kelp rafts would have to significantly outnumber the salmon pens, which is impractical for most North American operations, where the space available for coastal fish farming is highly limited and regulated. “It’s not going to be possible to do full dissolved nutrient recovery with seaweeds such as kelps in North America, unless the number of fish being farmed on site is reduced to make room for other species,” says Reid. “It would be tough for commercial salmon-culture operators to do that, because presently there is a larger return for fish per unit space than other cultured species.”
However, Reid notes that “one-hundred percent nutrient sequestration doesn’t need to be the only successful endpoint in such systems. Removal of any nutrient portion has merit and this is presently occurring with kelp rafts deployed at the edges of some salmon farm lease areas.” He also says that smaller-scale farms may be in a better position to alter the balance of species groups, by raising a premium priced fish species, thereby enabling a financial return from smaller fish culture numbers.
Reid says that @RISK was instrumental in his study: “@RISK has been a great way to communicate uncertainty in complex systems, and its ease of use with Excel spreadsheets makes it a highly intuitive and powerful tool.”