At the core of any simulation model is a set of probabilities. Are they the right probabilities? Are they actually representative of the uncertainties you are facing? And how do you know?
In this talk, Tim Nieman will discuss some of the pitfalls in assessing probabilities, especially the ever-present cognitive biases, and how to minimize (but not eliminate!) these issues. The webinar will include some fun polls to test and potentially illuminate your own biases.
The 2024 Lumivero Virtual Conference on Sept. 25-26 brought together a diverse global community of data professionals, researchers, academics, and students for sessions featuring data intelligence, research trends, data analysis, risk management, and student success. With more than 6,200 registrations from 161 countries, the event highlighted Lumivero's ever-growing impact across industries such as education, social sciences, and public health.
Conference Highlights:
Missed it? You can still catch the sessions on demand! In this article, we’ll highlight some of the key sessions and impactful themes from the event to help you get started.
The conference focused on key themes addressing the evolving needs of researchers, data analysts, and professionals. Sessions covered practical strategies, the role of artificial intelligence (AI), innovative approaches to research and data management, and more.
These themes not only addressed the pressing needs of today’s professionals, but also provided valuable tools and strategies to help attendees stay ahead in their respective fields.
The 2024 Lumivero Virtual Conference featured dynamic keynote sessions led by thought leaders at the forefront of research and data analysis. These sessions offered deep insights into the latest trends, challenges, and opportunities in the industry, making them must-watch experiences for all!
Missed it live? All sessions are available on demand! Expand your skills, stay ahead of trends, and explore new strategies to make data-driven decisions.
Discover the latest version of XLSTAT, specifically designed to elevate data visualization, market research, and consumer insights. Join this interactive webinar to explore how these powerful tools can transform your business or research—and see them in action through a live demonstration!
There are many different types of waste in manufacturing – waste that can cost the economy many billions of dollars per year. For example, a 2022 McKinsey report on food loss (food wasted during harvest and processing) estimated a global cost of $600 billion per year for growers and manufacturers. Unplanned downtime due to breakdowns of production equipment is another type of waste, and a 2023 analysis of the cost of downtime by Siemens (p. 2) estimates that this wasted time costs Fortune Global 500 companies 11% of their annual turnover.
Management experts have tried to solve the problem of waste in manufacturing for generations. Today, many organizations have adopted Lean Six Sigma, a popular managerial methodology that helps improve processes, reduce waste, and ensure the quality of products.
In this article, you'll gain clear definitions of Lean and Six Sigma, a deeper understanding of the principles of Lean Six Sigma, and details on the Lean Six Sigma certifications available to practitioners.
First, let’s define Lean Six Sigma. As mentioned above, Lean Six Sigma is a management methodology that aims to streamline operations, boost efficiency, and drive continuous improvement. While it has its roots in manufacturing, Lean Six Sigma has also been adopted by other industry sectors including finance and technology.
Lean Six Sigma originates from two separate methodologies, Lean and Six Sigma. Both these methodologies have their own rich histories.
Lean principles have their roots in the automotive manufacturing sector. According to an article by the Lean Enterprise Institute, Lean principles emerged from the Toyota Production System (TPS), which was developed in Japan after WWII.
Taiichi Ohno, a production expert and Executive Vice President at Toyota, is considered the father of TPS. According to his entry in the Encyclopedia Britannica, Ohno developed a production system he called “just-in-time” manufacturing. The Toyota Europe website describes the just-in-time approach as “making only what is needed, when it is needed, and in the quantity needed, at every stage of production.”
When the TPS began to be studied and implemented in the United States, it evolved into Lean manufacturing. “Lean” was coined by then-MIT researcher John Krafcik, and defined in the 1996 book Lean Thinking by the researchers James Womack and Daniel Jones. In the introduction to their book, Womack and Jones describe Lean as a methodology which “provides a way to specify value, line up value-creating actions in the best sequence, conduct these activities without interruption whenever someone requests them, and perform them more and more effectively.” (p. 6) Lean principles have since moved beyond industrial production to construction, technology, and other industries.
According to an article by Six Sigma education provider Six Sigma Online, Six Sigma is a data-driven method developed by engineers at Motorola in the 1980s to reduce defects in manufacturing processes. The term “Six Sigma” refers to a process that produces “no more than 3.4 defects per million opportunities, which equates to six standard deviations (sigma) between the process mean and the nearest specification limit.”
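For readers who want to see the arithmetic behind that figure, here is a quick sketch (not from the article) that converts a sigma level into defects per million opportunities for a normally distributed process, applying the conventional 1.5-sigma long-term shift that Six Sigma practitioners assume:

```python
# A quick check of the arithmetic behind the quoted 3.4 defects per million:
# for a normally distributed process, the one-sided tail beyond the nearest
# specification limit, after the conventional 1.5-sigma long-term shift, gives
# roughly 3.4 DPMO at the "six sigma" level.
from math import erfc, sqrt

def dpmo(sigma_level, long_term_shift=1.5):
    tail_probability = 0.5 * erfc((sigma_level - long_term_shift) / sqrt(2))
    return tail_probability * 1_000_000

print(f"{dpmo(6.0):.1f} DPMO at six sigma")     # ~3.4
print(f"{dpmo(3.0):,.0f} DPMO at three sigma")  # ~66,807
```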
Six Sigma spread to other businesses, achieving mainstream popularity when Jack Welch, then-CEO of General Electric, embraced it as a key part of GE's business strategy in the 1990s. In 2011, it was formally standardized by the International Organization for Standardization (ISO).
In the early 2000s, organizations realized that combining Lean’s focus on waste reduction with Six Sigma’s focus on process improvement through data-driven techniques could create a powerful, complementary approach to process optimization. Lean Six Sigma was born as a hybrid methodology focused on both eliminating waste (Lean) and reducing defects and variation (Six Sigma). Or, as François Momal put it in his webinar “Monte Carlo Simulation: A Powerful Tool for Lean Six Sigma,” “when we're talking Six Sigma, we mainly talk about quality, and when we're talking lean, we mainly talk about speed.”
The methodology of Lean Six Sigma revolves around key principles drawn from both foundations. These principles guide how businesses can identify problems, find solutions, and sustain improvements. In an extract from the book Lean Six Sigma for Leaders published on the Chartered Quality Institute’s website, authors Martin Brenig-Jones and Jo Dowdall list these principles:
Focus on the Customer
Lean Six Sigma begins with ensuring that the organization understands the customer’s needs and expectations, then aligns processes to meet those requirements. This means eliminating activities that do not directly contribute to customer satisfaction.
Identify and Understand the Process
Before improving any process, it's essential to understand how it works. Lean Six Sigma uses tools like process mapping to visualize workflows and identify bottlenecks or unnecessary steps. The aim is to achieve a smooth, consistent process that maximizes efficiency.
“Manage by Fact” to Reduce Variation and Defects
Six Sigma emphasizes reducing variation within processes, ensuring that outcomes are consistent and predictable. This principle is based on data analysis and statistical tools that help identify the root causes of defects or inefficiencies. By reducing variation, companies can deliver products or services that meet quality standards with minimal defects.
Eliminate Waste
Lean principles focus on identifying and eliminating different types of waste within a process. Waste can be anything that doesn’t add value to the final product, such as excess inventory, waiting time, unnecessary movement, or overproduction. The goal is to streamline processes, minimize resource usage, and increase value-added activities.
There are seven types of waste Lean aims to eliminate. These were originally identified during the development of the TPS and are commonly listed as overproduction, waiting, unnecessary conveyance, overprocessing, excess inventory, unnecessary motion, and defects requiring correction; Toyota describes them in a 2013 article about the TPS.
Empower Teams and Foster Collaboration
Lean Six Sigma emphasizes teamwork and empowering employees to contribute to process improvements. Employees are trained in Lean Six Sigma tools, creating a culture of continuous improvement.
Continuous Improvement (Kaizen)
Both Lean and Six Sigma emphasize kaizen, a Japanese term meaning “continuous improvement.” The Kaizen Institute explains that this principle also originated from the TPS. Kaizen involves regularly assessing processes to make incremental improvements.
Data-Driven Decision Making
One of the core elements of Six Sigma is its reliance on data to make decisions. Lean Six Sigma practitioners use data to understand the current state of processes, measure performance, and determine whether improvements have been successful.
Practitioners can pursue certifications in Lean Six Sigma to demonstrate their ability to apply the principles to projects and processes. These certifications are described as “belts” and follow a color system similar to that found in many East Asian martial arts. An article from the consultancy Process Management International lists the belt certifications in order of experience, beginning with the white belt for newcomers and culminating in the Master Black Belt.
Now that you’ve explored the fundamentals of Lean Six Sigma, you’re ready to discover how powerful risk analysis tools like @RISK can further enhance project outcomes.
Check out the next article, Using @RISK to Support Lean Six Sigma for Project Success, where we’ll showcase real-world examples from François Momal’s webinar series, demonstrating how organizations apply Monte Carlo simulation in @RISK to successfully implement Lean Six Sigma.
Ready to get started now? Request a demo of @RISK.
In our previous article, Introduction to Lean Six Sigma, we discussed the fundamentals of Lean Six Sigma, exploring how it combines the principles of lean manufacturing and Six Sigma to drive process improvement and operational excellence.
Now, we’re taking the next step by diving into how risk analysis software, specifically with Lumivero’s @RISK software, can enhance Lean Six Sigma initiatives. This post will focus on how Monte Carlo simulation can empower organizations to predict, manage, and mitigate risks, ensuring the success of Lean Six Sigma projects by drawing from insights shared in François Momal’s webinar series, “Monte Carlo Simulation: A Powerful Tool for Lean Six Sigma” and “Stochastic Optimization for Six Sigma.”
Together, we’ll explore real-world examples of how simulation can optimize production rates, reduce waste, and foster data-driven decision-making for sustainable improvements.
Monte Carlo simulation, as a reminder, is a statistical modeling method that runs a model of a process thousands of times with randomly sampled input values to estimate the range and likelihood of possible outcomes.
The first model Momal presented involved predicting the lead time for a manufacturing process. He described the question this model could answer as, “when I give a fixed value for my process performance to an internal or external customer, what is the associated risk I take?”
Using data on lead time for each step of a six-step production process, @RISK ran thousands of simulations to determine a probable range for the lead time. It produced three outputs:
Example 1: Probable lead time as seen by the customer showing two output graphics: output histogram for risk evaluation and sensitivity analysis showing what the main levers are.
The left-hand chart shows the probability distribution curve for the lead time, which allows the production manager to give their customer an estimate for lead time based on probability. The other two charts help identify which steps to prioritize for improvement. The upper right-hand chart shows which of the six steps contribute most to variation in time, while the lower right-hand chart describes how changes to the different steps could improve that time.
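To make the structure of this kind of lead-time model concrete, here is a minimal sketch in Python – not Momal's actual model or data – that sums six uncertain step durations over many random trials and reads a probability-based lead-time quote off the result. The triangular distributions are placeholder assumptions:

```python
# Illustrative sketch only: a six-step lead-time model with made-up triangular
# distributions standing in for the real per-step data used in the webinar.
import numpy as np

rng = np.random.default_rng(42)
N = 10_000  # number of Monte Carlo trials

# (min, most likely, max) duration in hours for each of the six steps -- assumptions.
steps = [(2, 3, 5), (1, 2, 4), (4, 6, 9), (1, 1.5, 3), (2, 2.5, 4), (3, 4, 7)]

lead_time = sum(rng.triangular(lo, mode, hi, size=N) for lo, mode, hi in steps)

# A probability-based quote to the customer: e.g., the lead time met in 90% of trials.
print(f"Mean lead time: {lead_time.mean():.1f} h")
print(f"90th percentile (quote with 10% risk): {np.percentile(lead_time, 90):.1f} h")
```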
@RISK also allows production managers to set probability-based improvement targets for each step of the process using the Goal Seek function.
Example 1: Goal Seek for step 1. Example of an industrial assembly process.
As mentioned above, the “Lean” aspect of Lean Six Sigma often refers to speed or efficiency of production. Lean production relies on being able to accurately measure and predict the hourly production rates of an assembly line.
Momal’s second example was a model which compared two methods of finding estimated production rates for a five-step manufacturing process: a box plot containing 30 time measurements for each step, and a Monte Carlo simulation based on the same data.
Example 2: Computation of the true hourly production rate (parts per hour).
Both the box plot and the Monte Carlo simulation accounted for the fact that step two of the production process was often slower than the others – a bottleneck. However, the box-plot approach used only the mean values of the time measurements, arriving at a production rate of approximately 147 units per hour. This calculation did not account for variability within the process.
Using @RISK to apply Monte Carlo simulation to the model accounts for this variance. The resulting histogram shows that the assembly line only achieves a production rate of 147 units per hour in 37.2% of simulations.
Example 2: True production rate risk assessment.
A plant manager trying to achieve 147 units per hour will be very frustrated, given that there is a 62.8% chance the assembly line will not be able to meet that target. A better estimate for the engineers to give the plant manager would be 121.5 units per hour – the production line drops below this rate in only 10% of simulations:
Example 2: True production rate risk assessment, accepting a 10% risk.
Furthermore, with the Monte Carlo simulation, engineers working to optimize the assembly line have a better idea of exactly how much of a bottleneck step two of the process causes, and what performance targets to aim for to reduce its impact on the rest of the process. “The whole point with a Monte Carlo simulation,” explained Momal, “is the robustness of the figure you are going to give.”
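One simple way to reproduce the logic of this example in code – with invented cycle-time distributions rather than the 30 measurements used in the webinar – is to simulate every step's cycle time on each trial, let the slowest step set the line's pace, and then read the production rate off the resulting distribution:

```python
# Illustrative sketch: hourly production rate limited by the slowest of five steps.
# The normal cycle-time distributions below are assumptions, not the webinar data.
import numpy as np

rng = np.random.default_rng(7)
N = 10_000

# Mean and standard deviation of each step's cycle time, in seconds (assumed values).
cycle_params = [(22, 2), (26, 5), (21, 2), (23, 3), (22, 2)]  # step 2 is the bottleneck
cycle_times = np.column_stack([rng.normal(m, s, N) for m, s in cycle_params])

# On each trial the line can only run as fast as its slowest step.
rate_per_hour = 3600.0 / cycle_times.max(axis=1)

target = 3600.0 / 26  # the rate implied by the bottleneck's mean cycle time alone
print(f"P(rate >= mean-based target): {(rate_per_hour >= target).mean():.1%}")
print(f"Rate achieved in 90% of trials: {np.percentile(rate_per_hour, 10):.1f} units/h")
```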
From Lean modeling, Momal moved on to Six Sigma. Monte Carlo simulation can be applied to tolerancing problems – understanding how far a component or process can deviate from its standard measurements and still result in a finished product that meets quality standards while generating a minimum of scrap.
Momal used the example of a piston and cylinder assembly. The piston has five components and the cylinder has two. Based on past manufacturing data, which component is most likely to fall outside standard measurements to the point where the entire assembly has to be scrapped? A Monte Carlo simulation and sensitivity analysis completed with @RISK can help answer this question.
Example 3: Tolerancing of an assembled product, showing a cost-based stack tolerance analysis chart.
In this tolerance analysis, the assembly gap (cell C6) must have a positive value for the product to fall within the acceptable quality range. Using a fixed quality specification, it’s possible to run a Monte Carlo simulation that gives the probability of production meeting the specified assembly gap given certain variables.
Example 3: Tolerancing of an assembled product, showing sensitivity analysis.
Then, using the sensitivity analysis, engineers can gauge which component contributes the most variation to the assembly gap. The tornado graph on the right clearly shows the cylinder wall is the culprit and should be the focus for improving the quality of this product.
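The sketch below shows the general shape of such a stack-tolerance simulation in Python. The geometry is simplified to a housing dimension minus a stack of component dimensions, and every nominal value and tolerance is invented for illustration; it is not the piston-and-cylinder model from the webinar:

```python
# Illustrative stack-tolerance sketch: the assembly gap (housing length minus the
# stacked component lengths) must stay positive for the product to be acceptable.
# All nominal dimensions and tolerances below are invented for the example.
import numpy as np

rng = np.random.default_rng(1)
N = 100_000

# Each dimension ~ Normal(nominal, tolerance / 3), in mm (assumed values).
housing = rng.normal(50.10, 0.05 / 3, N)
parts = {
    "part A": rng.normal(24.0, 0.02 / 3, N),
    "part B": rng.normal(12.0, 0.02 / 3, N),
    "part C": rng.normal(8.0, 0.02 / 3, N),
    "part D": rng.normal(3.0, 0.01 / 3, N),
    "part E": rng.normal(3.0, 0.10 / 3, N),   # deliberately the loosest tolerance
}

gap = housing - sum(parts.values())
print(f"P(gap > 0) = {(gap > 0).mean():.2%}")

# Crude tornado-style sensitivity: correlation of each input dimension with the gap.
for name, dim in {"housing": housing, **parts}.items():
    print(f"{name:8s} r = {np.corrcoef(dim, gap)[0, 1]:+.2f}")
```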
Stochastic optimization refers to a range of statistical tools that can be used to model situations which involve probable input data rather than fixed input data. Momal gave the example of the traveling salesman problem: suppose you must plan a route for a salesman through five cities. You need the route to be the minimum possible distance that passes through each city only once.
If you know the fixed values of the distances between the various cities, you don’t need to use stochastic optimization. If you’re not certain of the distances between cities, however, and you just have probable ranges for those distances (e.g., due to road traffic, etc.), you’ll need to use a stochastic optimization method since the input values for the variables you need to make your decision aren’t fixed.
Stochastic optimization: A double nested loop with decision variables (also named optimization variables).
Within a stochastic optimization, a full Monte Carlo simulation of the uncertain inputs is run for each candidate set of decision-variable values (the “inner loop”); the decision variables are then adjusted and the simulation repeated (the “outer loop”) until the best-performing combination is found.
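The skeleton below illustrates that double nested loop in plain Python. It is a schematic of the idea, not @RISK's implementation; the placeholder objective and uncertainty stand in for a real model:

```python
# Schematic of the double nested loop: the outer loop proposes decision-variable
# values, the inner loop runs a Monte Carlo simulation of the uncertain inputs
# for that proposal and returns the expected objective.
import random

def simulate_objective(decision, n_trials=2_000):
    """Inner loop: estimate the expected objective for one candidate decision."""
    total = 0.0
    for _ in range(n_trials):
        uncertain_input = random.gauss(10, 2)        # placeholder uncertainty
        total += -(decision - uncertain_input) ** 2  # placeholder objective
    return total / n_trials

best_decision, best_value = None, float("-inf")
for _ in range(200):                                 # outer loop: candidate decisions
    candidate = random.uniform(0, 20)
    value = simulate_objective(candidate)
    if value > best_value:
        best_decision, best_value = candidate, value

print(f"Best decision ≈ {best_decision:.2f} (expected objective {best_value:.2f})")
```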
For Lean Six Sigma organizations, stochastic optimization can support better project planning. Momal’s first model showed how to run a stochastic optimization to determine which project completion order would maximize the economic value add (EVA) of projects while minimizing time and labor costs so they remain within budget.
Example 4: Choice of Six Sigma projects to be launched in priority.
To use @RISK Optimizer, users must define their decision variables. In this model, Momal decided on simple binary decision variables. A “1” means the project is completed; a “0” means it isn’t. Users must also define any constraints. Solutions found by the simulation which don’t fit within both constraints are rejected.
Example 4: Choice of Six Sigma projects, showing optimization parameters and solutions that don’t meet the two constraints.
The optimization was run with the goal of maximizing the EVA. With @RISK, it’s possible to watch the optimization running trials in real time in the progress screen. Once users see the optimization reaching a long plateau, it’s generally a good time to stop the simulation.
Example 4: Choice of Six Sigma projects showing optimization run: total EVA stepwise maximization.
In this instance, the stochastic optimization ran 101 trials and found that only 61 were valid (that is, met both of the constraints). The best trial came out at a maximum EVA of approximately $9,100,000. The project selection spreadsheet showed the winning combination of projects:
Example 4: Choice of Six Sigma projects optimization results.
Of the eight candidate projects, @RISK identified projects 2, 4, and 7 as the combination that meets the budget and labor-time constraints while maximizing the EVA.
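For readers who want to see how such a portfolio optimization hangs together, here is a hedged sketch with binary decision variables, two probabilistic constraints, and an expected-EVA objective. All project figures, constraint limits, and the 10% violation threshold are invented; with only eight projects, the sketch simply enumerates every portfolio rather than using @RISK Optimizer's search:

```python
# Illustrative project-portfolio sketch with binary decision variables.
# The per-project EVA, cost, and labor figures are invented, not Momal's model.
import itertools
import numpy as np

rng = np.random.default_rng(3)
n_projects, n_trials = 8, 1_000

# Assumed uncertain inputs: mean EVA ($M), mean cost ($M), mean labor (person-months).
eva_mean = np.array([2.1, 3.4, 1.2, 2.8, 1.9, 1.5, 3.0, 2.2])
cost_mean = np.array([0.8, 1.1, 0.5, 0.9, 0.7, 0.6, 1.0, 0.8])
labor_mean = np.array([14, 20, 9, 16, 12, 10, 18, 15])
budget_cap, labor_cap = 3.0, 55  # constraints (assumed)

best = (None, -np.inf)
for selection in itertools.product([0, 1], repeat=n_projects):   # all 2^8 portfolios
    s = np.array(selection)
    eva = rng.normal(eva_mean, 0.2 * eva_mean, (n_trials, n_projects)) @ s
    cost = rng.normal(cost_mean, 0.1 * cost_mean, (n_trials, n_projects)) @ s
    labor = rng.normal(labor_mean, 0.1 * labor_mean, (n_trials, n_projects)) @ s
    # Reject portfolios that violate either constraint too often (here: >10% of trials).
    if (cost > budget_cap).mean() > 0.10 or (labor > labor_cap).mean() > 0.10:
        continue
    if eva.mean() > best[1]:
        best = (s, eva.mean())

print("Selected projects:", np.nonzero(best[0])[0] + 1, "expected EVA ≈", round(best[1], 2))
```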
Next, Momal showed how stochastic optimization can be applied to design problems – specifically within the Design for Six Sigma (DFSS) methodology. DFSS is an approach to product or process design within Lean Six Sigma. According to a 2024 Villanova University article, the goal of DFSS is to “streamline processes and produce the best products or services with the least amount of defects.”
DFSS follows a set of best practices for designing components to specific standards. These best practices have their own terminology which Six Sigma practitioners must learn, but Momal’s model can be understood without them.
The goal of this demonstration was to design a pump that minimizes manufacturing defects and cost per unit.
Example 5: Pump DFSS design.
The model used to set up the stochastic optimization included a set of quality tolerances for the flow rate of the pump – this is what is known in DFSS as the “critical to quality” (CTQ) value, the variable that is most important to the customer. Decision variables included motor and backflow component costs from different suppliers as well as the piston radius and stroke rate. The goal was to minimize the pump’s unit cost while meeting the flow rate tolerances and the required quality level.
Example 5: Pump DFSS design showing decision variables and tolerances.
As with the previous model, Momal demonstrated how to define the variables and constraints for this model in @RISK.
Example 5: Pump DFSS design, answering the question of “how can we guarantee a certain level of quality?”.
Then, when Momal ran the simulation, he again watched the live progress screen within @RISK to see when a plateau was reached in the results. He stopped the simulation after 1,000 trials.
Example 5: Pump DFSS design, showing stochastic optimization monitoring.
The simulation showed that trial #991 had the best result, combining the lowest cost while meeting the CTQ tolerances. Finally, @RISK updated the initial stochastic optimization screen to show the best options for supplier components.
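The outline below captures the shape of this kind of DFSS optimization – a supplier choice plus two geometric decision variables, a probabilistic CTQ constraint on flow rate, and a cost objective. The flow model, costs, tolerances, and quality target are all assumptions made for illustration, not the pump model from the webinar:

```python
# Illustrative DFSS-style sketch: choose a supplier and pump geometry to minimize
# unit cost while keeping the flow rate (the CTQ) inside its tolerance band in at
# least 99% of simulated units. All figures below are invented assumptions.
import numpy as np

rng = np.random.default_rng(11)
n_trials = 5_000
flow_lo, flow_hi = 9.5, 10.5           # CTQ tolerance on flow rate (assumed units)
suppliers = {"A": 21.0, "B": 18.5}     # motor + backflow component cost per unit

def simulated_flow(radius, stroke_rate):
    """Toy flow model: proportional to piston area times stroke rate, with noise."""
    manufacturing_noise = rng.normal(1.0, 0.01, n_trials)
    return np.pi * radius**2 * stroke_rate * manufacturing_noise

best_design, best_cost = None, np.inf
for supplier, component_cost in suppliers.items():
    for radius in np.linspace(0.9, 1.1, 11):        # candidate piston radii
        for stroke in np.linspace(2.6, 3.4, 9):     # candidate stroke rates
            flows = simulated_flow(radius, stroke)
            in_spec = ((flows >= flow_lo) & (flows <= flow_hi)).mean()
            if in_spec < 0.99:                      # quality constraint (assumed)
                continue
            unit_cost = component_cost + 4.0 * radius   # toy cost model
            if unit_cost < best_cost:
                best_design = (supplier, round(radius, 2), round(stroke, 2))
                best_cost = unit_cost

print("Best design:", best_design, "unit cost ≈", round(best_cost, 2))
```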
Experiments are necessary in manufacturing, but they are expensive. Six Sigma methodology includes best practices for design of experiments (DOE) that aim to minimize the cost of experiments while maximizing the amount of information that can be gleaned from them. Momal’s final model used XLSTAT to help design experiments to solve an issue with a mold injection process that was causing too many defects – the part being produced should have been 63 mm long.
The approach involved running a DOE calculation in XLSTAT followed by a stochastic optimization in @RISK. There were three known variables in the injection molding process: the temperature of the mold, the number of seconds the injection molding took (cycle time), and the holding pressure. He also identified two levels for each variable: an upper level and a lower level.
Example 6: Monte Carlo simulation – DOE coupling.
Six Sigma DOE best practices determine the number of prototype runs an experiment should include by raising the number of levels to the power of the number of variables, then multiplying by five. In this instance, 2³ = 8, and 8 × 5 = 40, so 40 real prototypes should be generated. These were modeled with XLSTAT DOE. The “response 1” value shows the length of the part created.
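Spelled out in code, the run-count rule looks like this (the factor names follow the three variables above; the rule itself is as described by Momal):

```python
# The run-count rule spelled out: levels ** factors combinations, each prototyped
# five times. itertools.product enumerates the 2^3 = 8 factor-level combinations.
from itertools import product

factors = {
    "mold_temperature": ("low", "high"),
    "cycle_time": ("low", "high"),
    "holding_pressure": ("low", "high"),
}

combinations = list(product(*factors.values()))
replicates = 5
print(len(combinations), "combinations x", replicates, "replicates =",
      len(combinations) * replicates, "prototype runs")  # 8 x 5 = 40
```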
Example 6: Coupling between Monte Carlo and DOE.
XLSTAT then generated a list of solutions – combinations of the three variables that would result in the desired part length. The row circled in red had the lowest cycle time. It also created a formula for finding these best-fit solutions.
Example 6: Coupling between Monte Carlo and DOE.
These were all possible solutions, but were they robust solutions? That is, would a small variation in the input variables result in a tolerably small change in the length of the part created by the injection molding process, or would variations lead to unacceptable parts?
For this second part of the process, Momal went back to @RISK Optimizer. He defined his variables and his constraint (in this case, a part length of 63). He used the transfer function generated by the XLSTAT DOE run.
Simulation using the results of a DOE.
Next, he specified that any trials which resulted in a variation of more than three standard deviations (three sigma) in the variables or the length of the part should be rejected.
@RISK optimization model set up.
Then he ran the stochastic optimization simulations and watched the outputs in real time.
RISKOptimizer Watcher of all Trials (ongoing optimization).
He stopped the trials once a plateau emerged. @RISK Optimizer automatically placed the values from the best trial into his initial workbook.
Best solution given by RISKOptimizer.
Sensitivity analysis, this time using a Pareto chart instead of a tornado graph, showed that the primary factor driving variance in trial results was the holding pressure:
Pareto chart examples including the contribution of the variables.
This gave him experimental data that could be used to inform the manufacturing process without the cost of having to run real-world experiments.
Data-driven manufacturing processes that lead to better efficiency, less waste, and fewer defects – that’s the power of the Lean Six Sigma approach. With @RISK and XLSTAT, you gain a robust suite of tools for helping you make decisions that align with Lean Six Sigma principles.
From better estimates of production line rates to designing experiments to solve manufacturing defect problems, the Monte Carlo simulation and stochastic optimization functions available within @RISK and XLSTAT can support your efforts toward continuous improvement.
Ready to find out what else is possible with @RISK? Request a demo today.
Getting started with Lean Six Sigma might feel challenging, but with ready-made @RISK example models available for download, you can quickly explore the power of Six Sigma – all in Microsoft Excel.
These models can help you test concepts, run simulations, and analyze potential improvements to your methods using @RISK software – offering hands-on experience without starting from scratch.
1. Six Sigma Functions:
A list of @RISK's Six Sigma functions – what they mean and how they work.
2. Six Sigma DMAIC Failure Rate Risk Model:
Predicts failure rates using RiskTheo functions and defines key quality metrics like LSL, USL, and targets for each component.
3. Six Sigma DOE with Weld:
Demonstrates DOE principles in welding, using @RISK’s functions to optimize process quality.
4. Six Sigma DOE with Catapult:
Illustrates Six Sigma optimization through a catapult-building exercise using Monte Carlo simulation.
5. Six Sigma DMAIC Failure Rate Model:
Calculates defect rates by monitoring product components against predefined tolerance limits.
6. Six Sigma DMAIC Yield Analysis:
Pinpoints production stages with the highest defect rates and calculates process capability metrics for improvement.
Download these models today to quickly explore Six Sigma principles in action with @RISK!
Learn how to apply the six stages of Reflexive Thematic Analysis (Braun and Clarke, 2006, 2020) using NVivo 15 with Lumivero AI Assistant as your data management tool. This webinar will show the practical application of one of the most popular data analysis methods used in qualitative data analysis globally.
See the sample project that comes with all copies of NVivo worked through each of the six stages, as set out in the guidelines from the two seminal authors in this domain. Learn about the many tools in NVivo that may be deployed during coding, retrieval, and reporting on your identified themes and the new NVivo 15 Lumivero AI Assistant.
Since its debut in 2022, OpenAI’s ChatGPT has sparked widespread adoption of generative artificial intelligence (AI) across various industries – from marketing and media to software development and healthcare. This transformative technology is now poised to elevate the field of qualitative data analysis and QDA software.
With the release of NVivo 15, we introduced the cutting-edge Lumivero AI Assistant to our powerful qualitative data analysis software (QDA software). Developed with input from our AI advisory board, the Lumivero AI Assistant offers researchers powerful tools for enhancing their qualitative analysis while maintaining researcher control, data security, and methodological transparency.
In a recent webinar, Dr. Silvana di Gregorio, Lumivero’s Product Research Director and Head of Qualitative Research, walked through how we developed the Lumivero AI Assistant for NVivo 15 and demonstrated how it works in practice for memoing and coding qualitative research data.
Watch the webinar or continue reading to learn more!
Dr. di Gregorio has been working with and on qualitative data analysis software since 1995. At the beginning of the webinar, she took time to remind attendees that qualitative research has always embraced new technology.
“[Qualitative research is] constantly evolving, and that evolution has been always intertwined with technology,” said Dr. di Gregorio.
However, Dr. di Gregorio also noted that qualitative research methodologists have often taken a cautious approach to incorporating new technologies into their practices. “There’s always been kind of a lag between influencers and technology,” she said.
Dr. di Gregorio cited the adoption of the tape recorder as one example of how new technology impacted research practices: prior to the wide availability of inexpensive tape-recording equipment, most data for qualitative analysis was drawn from notes, letters, diaries and other written material. Recording technology enabled the spoken word to be captured and opened a new world of conversational analysis that led to richer insights.
Over the last 30 years, QDA software has played a similar role by enabling researchers to analyze data source materials, including interviews, and develop code structures to describe the data that’s present in those materials. Most QDA software, including previous versions of NVivo, has also integrated early machine learning- or AI-based features such as speech-to-text transcription or sentiment analysis. In all these instances, new technology has been seen as a tool rather than a threat.
“We use tools to manage limitations of our brain power,” Dr. di Gregorio explained. “In relation to qualitative data and analysis, the problems we are trying to solve are how to manage and organize unstructured data or very rich, in-depth data . . . and how to find patterns in that data.”
Even though generative AI seems to have tremendous disruptive potential, Dr. di Gregorio described it as yet another addition to the researcher’s toolbox – not a replacement for qualitative researchers themselves.
However, just like a physical tool in a workshop, AI needs to be used responsibly.
AI tools need to be carefully integrated into research. A January 2024 article in BMC Medical Ethics about the ethical challenges of using AI in healthcare, for example, describes the need to look beyond “the allure of innovation” and ensure that the use of AI benefits all stakeholders.
Qualitative research, like healthcare, has ethical standards that need to be maintained. Incorporating AI in qualitative research carelessly could erode those standards. With this in mind, our team convened an AI Advisory Board to inform and guide the development of NVivo 15 with the Lumivero AI Assistant. Dr. di Gregorio described the diverse makeup of the board as including researchers at every career stage, from PhD candidates to seasoned academics, as well as members drawn from nonprofit and commercial organizations. “Everyone was totally engaged in this process,” explained Dr. di Gregorio.
Insights from the AI Advisory Board led to the development of three pillars guiding our team’s approach to AI. These include:
The AI Advisory Board’s insights also helped refine details of how the Lumivero AI Assistant functioned. For example, when summarizing text, the advisory board came to a consensus that summaries should use the third-person voice instead of the first-person voice. This would prevent the excerpts being mistaken for direct quotes.
The advisory board also decided that researchers should be able to control the automatic coding feature – they could choose whether to let the AI Assistant only suggest codes to the researcher with the researcher doing the actual coding or to allow the AI Assistant to do the coding as well.
Memoing Made Smarter for Better Qualitative Analysis
Dr. di Gregorio transitioned into a practical demonstration, showing how the Lumivero AI Assistant enhances memoing for qualitative researchers.
She began by revisiting the various types of memos used in qualitative research, referencing the work of Paul Mihas, a qualitative research expert (1) at the Odum Institute for Research in Social Science at the University of North Carolina at Chapel Hill. In the description of a memoing course Mihas taught for the ResearchTalk consultancy, Mihas emphasized that "memo-writing strategies help us develop abstract thinking, discern inscribed meaning between pieces of data, and assess collective evidence for emerging claims,” a concept central to the memoing process Dr. di Gregorio explored.
Dr. di Gregorio demonstrated how to revitalize the process of creating what Mihas calls “document reflection” memos using the Lumivero AI Assistant in NVivo 15. A document reflection memo, Dr. di Gregorio explained, “is when you're getting an initial understanding of a transcript or text and try to capture, at a high level, the takeaways – the pivotal moments of what's going on there.”
To illustrate the practical application of this approach, Dr. di Gregorio utilized real-world data from a past research project, offering a hands-on demonstration of how the Lumivero AI Assistant can be employed for document reflection memos.
For her demonstration, she selected data from a University of London mixed methods study conducted more than a decade ago which explored the differences in how 16- to 18-year-olds in Europe perceived community responsibility and political engagement. The study aimed to determine whether these opinions varied based on the post-compulsory secondary education pathways they pursued.
First, Dr. di Gregorio used the Lumivero AI Assistant to generate overall summaries of each transcript which were saved as memos linked to each transcript. Next, she went through the transcripts one at a time, using the Lumivero AI Assistant to summarize individual sections. These were saved as annotations within the project and were clearly labeled as having been AI-generated. Dr. di Gregorio was then able to quickly assign broad codes to each annotated section of the transcript based on the Lumivero AI Assistant’s suggestions.
Having completed a high-level summary of the transcript along with preliminary coding, she was then able to dig deeper into the data. Working through each annotation, she reviewed the section of the transcript from which it was generated. She was then able to add the annotations to the memos, drawing out deeper themes from what the interview subject was saying – what Mihas calls a “key quotation memo” – and adding selected quotes as well as her own thoughts on the developing analysis.
She then reviewed the broad codes she created and used Lumivero’s AI Assistant to suggest child codes (sub-codes) refining the analysis. She created a code memo for each code to review the code across all the transcripts.
The process is summarized in the figure below:
Dr. di Gregorio explained that while qualitative data analysis software of the past has always included memoing tools along with coding tools, the memoing features have typically been harder to find, or researchers have jumped straight to coding. NVivo 15 with the Lumivero AI Assistant is designed to help bring memoing back into balance with coding and can be used with all approaches to qualitative methods, such as thematic analysis, discourse analysis, narrative analysis, and more!
Dr. di Gregorio also noted that NVivo 15’s Lumivero AI Assistant supports researchers with additional features. These include:
Also, if the researcher feels the summary doesn’t accurately reflect the text they’ve highlighted, they can ask the Lumivero AI Assistant to re-summarize. With NVivo 15, the researcher is always in control.
Better memoing capabilities enable researchers to conduct richer reflexive analysis. The authors of “A Practical Guide to Reflexivity in Qualitative Research,” a 2023 article in the journal Medical Teacher, describe reflexivity as the process “through which researchers self-consciously critique, appraise, and evaluate how their subjectivity and context influence the research processes.”
Dr. di Gregorio showed how researchers can use the Lumivero AI Assistant within NVivo 15 to create annotations and memos quickly. Within the memos, researchers can identify not just the themes of the data, but also how the data was gathered.
For example, researchers can create positional memos that notate how the social power dynamics between interviewer and interviewee or the circumstances of an interview might influence the conversation. These reflexive observations can then be included in the final research product – giving crucial context and transparency to audiences who will read and apply the research.
Finally, Dr. di Gregorio noted that researchers need to be transparent about how they use AI tools within qualitative research, being sure to emphasize that AI supports analysis rather than conducting it.
“When you're writing up any methodological section, whether it's a dissertation or whether it's an article for publication, [be] clear about the process of how you did it. NVivo doesn't do the analysis. You are still doing the analysis, but you're using [AI] as an aid,” said Dr. di Gregorio.
Ready to transform your workflow, gain deeper insights into your research question, and streamline your analysis? Don’t wait—request your free demo of NVivo 15 and the Lumivero AI Assistant and discover the next level of qualitative research innovation.
Mihas, Paul. “Memo Writing Strategies: Analyzing the Parts and the Whole.” In Vanover, C., Mihas, P., & Saldaña, J. (Eds.) (2022), Analyzing and Interpreting Qualitative Research: After the Interview. Sage Publications.
Learn how DecisionTools Suite 8.7 can drastically reduce computation times, enabling faster decision-making and increased productivity.
During this webinar, we will showcase real-world use cases, comparing the performance of the new Evolver with the previous version and Excel's Solver. We'll also present the new Evolver interface which offers a more intuitive and efficient user experience.
Qualitative Medical Marketing Research can be conducted with a range of professionals: physicians, mid-level providers, pharmacists, nurses, social workers, and patients. How is qualitative research with these people different from research with consumers? How do the insights gleaned from patients differ from those offered by the very same people when they speak as consumers of goods?
This webinar focuses on the key differences that make qualitative research in the medical market field unique. Dr. Pamela Waite will cover essential techniques and insights to ensure impactful and meaningful research outcomes.
Join us as we dive into the world of data in our free virtual conference! Explore new techniques for qualitative, mixed methods, and statistical data analysis, learn best practices for ensuring student success in field experience outcomes, and hear how experts in your industry are leveraging Lumivero software to drive innovation and strategy.
Plus, with countless Lumivero software workshops and networking sessions, you’ll walk away with clear, actionable insights that let you take your research and data analysis to the next level straight away.
Research Writing Institute: Streamline Your Writing with AI
RATA (Rate-All-That-Apply) is a survey or sensory evaluation technique used to assess multiple attributes of a product, concept, or experience. It combines elements of both Check-All-That-Apply (CATA) and traditional rating scales to provide more detailed and nuanced feedback.
Presenting Attributes: Respondents are provided with a list of attributes (e.g., sweet, salty, smooth, fresh, etc.) related to the product or experience being evaluated.
Task for Respondents: Respondents first check all attributes that apply to the product, then rate the intensity of each attribute they selected.
Data Collection: The data collected provides both a qualitative aspect (which attributes are applicable) and a quantitative aspect (how intense or prominent each selected attribute is).
Imagine a food company wants to evaluate a new flavor of yogurt. They might ask respondents to rate various attributes like "sweet," "creamy," "tangy," "thick," and "fruity." Several yogurts are presented to respondents.
Step 1: Respondents taste the yogurts and check all attributes that apply (e.g., "sweet," "creamy," "fruity").
Step 2: For each checked attribute, they rate the intensity.
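As a minimal illustration of what RATA data looks like once collected – with invented responses – each attribute can be summarized both by how often it was checked (the CATA aspect) and by its mean intensity among those who checked it (the rating aspect):

```python
# A minimal illustration (invented responses) of how RATA data combines a
# check-all-that-apply step with intensity ratings: 0 = not checked, 1-5 = intensity.
ratings = {
    "sweet":  [4, 5, 0, 3, 4],
    "creamy": [3, 0, 0, 4, 5],
    "tangy":  [0, 2, 3, 0, 0],
    "fruity": [5, 4, 0, 0, 3],
}

for attribute, scores in ratings.items():
    checked = [s for s in scores if s > 0]
    citation_rate = len(checked) / len(scores)      # CATA-style citation frequency
    mean_intensity = sum(checked) / len(checked)    # rating-scale aspect
    print(f"{attribute:6s}  cited by {citation_rate:.0%}  mean intensity {mean_intensity:.1f}")
```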
If we asked whether you think you like the same chocolate as everyone else, would you say yes? The answer is no – everyone is different! However, we can group people into clusters so that respondents within each cluster share similar sensibilities.
In a RATA task, which is a very popular sensory task, how do you build up these clusters?
Dr. Fabien Llobell developed an adaptation of CLUSCATA (a popular analysis method for clustering CATA data) for RATA data. The development team implemented it in XLSTAT, so you can now build clusters of respondents from RATA data!
Dr. Fabien Llobell presented this new tool at the 2024 Eurosense Conference and the 2024 Lumivero Conference with Professor Sara Jaeger.
RATA is a powerful tool for gathering detailed feedback on multiple attributes of a product or experience. By allowing respondents to both select relevant attributes and rate their intensity, RATA provides a rich dataset that can be used to guide product development, sensory analysis, and marketing strategies.
Get started with powerful RATA analysis in XLSTAT today!