By Silvana di Gregorio, PhD, Product Research Director and Head of Qualitative Research, Lumivero
In the past five years, qualitative researchers have faced two major disruptions: the COVID-19 pandemic and the rise of generative AI. Both have fundamentally altered the way research is conducted, forcing academics and commercial researchers alike to adapt rapidly. As Lumivero's Head of Qualitative Research, I didn't just observe these changes; I worked directly with researchers to ensure that AI was developed as a tool for support rather than a source of disruption.
In 2020, when the pandemic upended qualitative research, researchers were forced to re-imagine how they could do research when face-to-face interaction suddenly became impossible. Comparing a survey I ran in May 2020 with a follow-up survey in February 2023 showed how the long-term disruption shifted certain practices, in particular the use of online focus groups and interviews. In 2020, only 6% of researchers in my survey said they had planned to use those methods before the pandemic hit; by 2023, that number had more than doubled to 15%. The disruption forced researchers to experiment and, for some, opened new ways to do qualitative research.
Fast forward to 2023, and another wave of disruption was underway: generative AI. Tools like ChatGPT brought excitement and uncertainty in equal measure. Would AI replace researchers? Could it be trusted? At the Qualitative Research Consultants Association (QRCA) conference that year, I saw firsthand the range of emotions, from curiosity to deep concern. But AI in qualitative research wasn't new. NVivo had long incorporated machine learning-driven features such as natural language processing, sentiment analysis, and automated transcription. What was different was the scale and visibility of AI's capabilities, and that is what is driving the current wave of disruption.
Rather than rushing to implement generative AI features, Lumivero's product team took a step back. I led a large-scale survey of qualitative researchers in August 2023, followed by in-depth interviews, to understand their perspectives on AI. The results revealed a broad spectrum of comfort levels: some were eager to experiment, others highly sceptical. Recognizing the need for a measured approach, we established an AI Advisory Board comprising academics, research officers, postgraduate students, and commercial consultants. Their role? To ensure that AI development in NVivo remained aligned with the needs and concerns of the research community.
The first stage of our work with the AI Advisory Board began in January 2024 when the product team presented three prototype AI features: document summarization, text summarization within documents, and code suggestion. After breaking into discussion groups, the Board provided detailed feedback that shaped how we refined these features.
This early feedback ensured that our AI tools aligned with real research needs, prioritizing usability and trust.
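To make the code suggestion idea concrete, here is a minimal sketch of how such a prototype might be wired around a generative model. The llm_complete helper, the prompt wording, and the example excerpt are hypothetical illustrations for this article, not NVivo's actual implementation.

```python
# Hypothetical sketch of an LLM-backed "code suggestion" prototype for qualitative
# analysis: given an interview excerpt, ask a generative model to propose candidate
# codes (themes). llm_complete() is a placeholder for whatever model API is used;
# none of this reflects NVivo's actual implementation.
import json

def llm_complete(prompt: str) -> str:
    """Placeholder: send the prompt to a generative AI model and return its reply."""
    raise NotImplementedError("wire this up to your model provider of choice")

def suggest_codes(excerpt: str, max_codes: int = 5) -> list[str]:
    prompt = (
        "You are assisting a qualitative researcher.\n"
        f"Suggest up to {max_codes} short candidate codes (themes) for the excerpt "
        "below, returned as a JSON array of strings.\n\n"
        f"Excerpt:\n{excerpt}"
    )
    reply = llm_complete(prompt)
    return json.loads(reply)  # e.g. ["remote work fatigue", "loss of informal contact"]

# Usage: codes = suggest_codes("Since lockdown, I only ever see my team on video calls...")
```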
The second stage of our collaboration happened in February 2024 with a Hackathon, designed to expand our thinking beyond immediate product development. While our developers worked on refining the prototypes, I gathered researchers, developers, and product team members to brainstorm the future of AI in qualitative research.
During the Hackathon, participants worked in smaller breakout groups to generate and refine ideas, followed by plenary discussions to synthesize insights and draw out the key themes.
Following the Hackathon, I conducted a prioritization exercise in which participants allocated virtual budgets to different ideas. The highest priority? AI-powered visualization tools that support researchers in presenting their findings effectively.
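As a rough illustration of how a virtual-budget prioritization exercise can be tallied, the sketch below ranks ideas by the total budget allocated to them. The participant allocations, idea names, and amounts are invented for illustration and are not the actual Hackathon results.

```python
# Minimal sketch of the tally behind a virtual-budget prioritization exercise:
# each participant spreads a fixed budget across ideas, and ideas are ranked by
# total allocation. All names and amounts below are invented placeholders.
from collections import Counter

allocations = [
    {"AI-powered visualization": 60, "auto-transcription QA": 25, "literature review": 15},
    {"AI-powered visualization": 40, "literature review": 40, "auto-transcription QA": 20},
    {"auto-transcription QA": 50, "AI-powered visualization": 50},
]

totals = Counter()
for budget in allocations:
    totals.update(budget)  # add each participant's allocations to the running totals

# Highest total allocation = highest priority.
for idea, total in totals.most_common():
    print(f"{idea}: {total} budget points")
```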
By mid-2024, these insights culminated in an AI-powered assistant integrated into NVivo as a separate, flexible component rather than a fully embedded feature. This design choice allows the Lumivero AI Assistant to be used across all our software solutions. Later that year, the product team extended AI functionalities to our reference management software, Citavi, offering intelligent literature recommendations as well as AI summarization tools.
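The sketch below illustrates the general design idea of a separate, shared assistant component rather than AI logic embedded in each product. The class and method names are hypothetical stand-ins and do not describe Lumivero's actual architecture.

```python
# Hedged sketch of a decoupled AI assistant: products depend on a small interface
# rather than embedding AI features directly, so the same assistant component can
# serve several applications. All names here are hypothetical.
from typing import Protocol

class AIAssistant(Protocol):
    def summarize(self, text: str) -> str: ...
    def suggest(self, task: str, context: str) -> list[str]: ...

class QualitativeAnalysisApp:
    """Stand-in for a product that plugs in a shared assistant."""
    def __init__(self, assistant: AIAssistant) -> None:
        self.assistant = assistant

    def summarize_document(self, document: str) -> str:
        return self.assistant.summarize(document)

class ReferenceManagerApp:
    """A second product reusing the same assistant component."""
    def __init__(self, assistant: AIAssistant) -> None:
        self.assistant = assistant

    def recommend_literature(self, research_question: str) -> list[str]:
        return self.assistant.suggest("recommend literature", research_question)
```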
Looking ahead, we continue to refine our AI roadmap, incorporating feedback from ongoing research and user engagement. Our next steps include exploring the potential for AI-driven software innovations tailored specifically for qualitative research in 2025.
Disruption is about breaking the normal flow of work, but it also presents opportunities. Lumivero's goal isn't just to follow AI trends; it's to shape them in a way that enhances qualitative research rather than undermines it. By taking a research-led approach, listening to the concerns of academics and commercial researchers alike, and co-developing AI solutions with those who use them, we are ensuring that AI becomes an asset, not a barrier.
The question isn't whether AI will impact qualitative research; it already has. The real challenge is making sure that impact is positive, ethical, and aligned with the core principles of rigorous, meaningful analysis. And that's precisely the challenge my colleagues at Lumivero and I are committed to solving.
The above article is based on the presentation Silvana di Gregorio, PhD, gave at the QRCA conference in Philadelphia on February 11-14, 2025.
Silvana di Gregorio, PhD, is a sociologist and former academic with a PhD in Social Policy from the London School of Economics. She has been training, consulting, and publishing about qualitative data analysis software since 1995, and for 16 years ran her own training and consulting business, SdG Associates. She is the author of "Voice to Text: Automating Transcription" in Vanover, C., Mihas, P., and Saldana, J. (Eds.), Analyzing and Interpreting Qualitative Data: After the Interview (Sage Publications), and "Using Web 2.0 tools for Qualitative Analysis" in Hine, C. (Ed.), Virtual Research Methods, Volume 4 (Sage Publications). With Judith Davidson, she co-authored Qualitative Research Design for Software Users (Sage Publications) and "Qualitative Research and Technology: In the Midst of a Revolution" in Denzin, N. and Lincoln, Y. (Eds.), Handbook of Qualitative Research (4th Edition), Thousand Oaks: Sage. With Linda Gilbert and Kristi Jackson, she co-authored "Tools for Qualitative Analysis" in Spector, J.M., Merrill, M.D., and Elen, J. (Eds.), Handbook of Research on Educational Communications and Technology. She is part of the Product Team at Lumivero.
Modeling the frequency and magnitude of future debris flows to determine the optimum hazard mitigation strategy. Communicating risk to clients by displaying the probability of event paths for three decisions: accepting existing conditions, constructing a containment dam, or relocating residences.
Duncan Wyllie, a Principal of Wyllie & Norrish Rock Engineers, uses the Palisade software PrecisionTree for probabilistic modeling of debris flow protection measures.
When analyzing the optimum method of protecting an area at risk from debris flows, three decisions are compared: accepting existing conditions, constructing a containment dam with sufficient capacity to contain future flows, or relocating residences on the debris flow runout area. Creating probabilistic decision trees in PrecisionTree allows uncertainties in the frequency and magnitude of future debris flows to be analyzed, and the cost of constructing a dam to be compared with the cost of relocating the residences.
Wyllie & Norrish Rock Engineers, with offices in Seattle and Vancouver, Canada, is a specialist engineering company working in the fields of landslides, tunnels, slopes, and foundations. Duncan Wyllie and Norman Norrish, the company principals, have a combined total of 80 years of experience in applied rock mechanics.
Since the 1990s, Wyllie and Norrish have been utilizing Palisade software to analyze natural hazards and select hazard mitigation procedures.
When a potential debris flow hazard is located above a residential development, PrecisionTree can be used to create a probabilistic decision tree that maps out possible scenarios, the likelihood they will occur, and the estimated damage costs. Three decisions are compared: existing conditions, constructing a debris flow dam, or relocating residences on the debris flow runout area.
With reference to the decision tree, the components of the analysis are the three decision alternatives, the probabilities that debris flows of different magnitudes will occur, and the associated mitigation and damage costs.
For a closer look, download our free Debris Flow Containment Dam example model.
Analysis shows that the optimum decision is to construct a containment dam because the total cost of mitigation plus the expected cost (EV) of damage is lower for dam construction (EVΣdam = $200,150) than for existing conditions (EVΣexisting = $360,000) or for relocating the houses (EVΣhouses = $2,000,600).
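For readers who want to see the arithmetic behind such a comparison, the sketch below computes a total expected cost (mitigation cost plus probability-weighted damage) for each decision alternative. The probabilities and costs are illustrative placeholders, not the values from the Wyllie & Norrish PrecisionTree model, so the totals differ from those quoted above.

```python
# Minimal sketch of the expected-value comparison behind a decision tree like the
# one described above. The probabilities and costs below are illustrative
# placeholders, NOT the values from the actual PrecisionTree model.

# Chance outcomes: debris flow scenarios with assumed probabilities and assumed
# damage costs (in dollars) under each decision alternative.
scenarios = {
    "no debris flow":    {"p": 0.70, "existing": 0,         "dam": 0,      "relocate": 0},
    "small debris flow": {"p": 0.20, "existing": 500_000,   "dam": 0,      "relocate": 0},
    "large debris flow": {"p": 0.10, "existing": 2_500_000, "dam": 50_000, "relocate": 0},
}

# Up-front mitigation cost assumed for each decision alternative.
mitigation_cost = {"existing": 0, "dam": 150_000, "relocate": 2_000_000}

def total_expected_cost(decision: str) -> float:
    """Mitigation cost plus the probability-weighted (expected) damage cost."""
    expected_damage = sum(s["p"] * s[decision] for s in scenarios.values())
    return mitigation_cost[decision] + expected_damage

for decision in ("existing", "dam", "relocate"):
    print(f"{decision:>9}: total expected cost = ${total_expected_cost(decision):,.0f}")

# The optimum decision is the alternative with the lowest total expected cost.
best = min(("existing", "dam", "relocate"), key=total_expected_cost)
print("optimum decision:", best)
```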
The use of PrecisionTree allows possible mitigation measures, along with the probability of event occurrence and cost, to be analyzed. The analysis unambiguously identifies the most cost-effective mitigation measure, and the decision process is clearly mapped out in the decision tree.
The use of @RISK and PrecisionTree software to prepare decision trees modeling all potential outcomes enables Wyllie & Norrish Rock Engineers to quantitatively determine the optimum protection strategy and easily communicate the findings.
With Palisade's products, Wyllie & Norrish Rock Engineers can model the uncertain frequency and magnitude of future debris flows, compare the full cost of each mitigation option, and clearly communicate the results to clients. By using probabilistic analysis, they ensure that the best decision is reached for each at-risk area and that, where necessary, effective debris flow dams are built to protect nearby structures.
Download our free example model, Decision Trees in Geotechnical Engineering, to explore three decision tree examples from the geotechnical engineering field: debris flow containment dam, rock slope stabilization, and gravity dam reinforcement anchors.