
Author: Silvana di Gregorio, PhD – Product Research Director and Head of Qualitative Research, Lumivero
Introduction
Artificial intelligence (AI) has been around for several decades, but the launch of ChatGPT in late 2022 marked a seismic shift in how we think about qualitative research. With its intuitive interface and freely available version, ChatGPT became widely accessible—lowering the barrier to entry and sparking experimentation in the research community.
This ease of access encouraged researchers to explore how AI could support qualitative work. At the same time, it raised concerns: misuse, ethical questions about how these tools are trained and how they handle uploaded content, and ongoing debates around reliability and trust.
Compounding these challenges is a widespread lack of understanding of what AI actually is and how generative AI (GenAI) tools such as ChatGPT differ from it.
Artificial intelligence (AI) refers to systems performing tasks that typically require human intelligence, such as pattern recognition and language understanding. Machine learning (ML) is the subset of AI that “learns” from data without explicit rules, and generative AI (GenAI) uses large neural networks to create new content, including text, images, and audio, based on those learned patterns (Kühl et al., 2020; Sengar et al., 2024). For example, since 2014 NVivo has steadily adopted AI/ML features: autocoding from existing coding patterns (ML); autocoding by theme, sentiment analysis, and transcription (all AI); and, as of 2024, generative-AI summarization and code suggestions. Figure 1 shows the relationship between AI, ML, and GenAI.

Throughout this article, I adopt an interpretivist perspective, treating GenAI not as an ‘objective’ coder but as a dialogic partner whose outputs must be interpreted through the researcher’s context and critical lens. This stance foregrounds reflexivity and emphasizes that meaning arises from the interaction between researcher, AI, and data.
Current landscape of AI in research
Between 2023 and 2024, leading AI models rapidly closed performance gaps on major benchmarks; on one demanding benchmark, for example, the solve rate jumped from 4.4% in 2023 to 71.7% in 2024, demonstrating that AI can now reliably handle transcription, translation, and complex text analysis (Stanford HAI, 2025b).
Another trend is the proliferation of AI models and the democratization of access. Open-source or “open-weight” models are quickly catching up to proprietary systems. In early 2024, the top closed-source chatbot slightly outperformed the best open model; by February 2025, the gap was nearly negligible, at about 1.7% (Stanford HAI, 2025b).
For researchers, this means robust AI tools are no longer limited to Big Tech: open models, which can often be run locally for data privacy, are becoming viable for qualitative analysis tasks. Moreover, the cost of using powerful AI has plummeted. The AI Index notes that the cost of querying a model with GPT-3.5-level performance fell from about $20 per million tokens in late 2022 to just a few cents by mid-2024 (Stanford HAI, 2025a). In practical terms, coding a large dataset or running extensive text analyses with AI assistance is far more affordable than before.
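As a back-of-the-envelope illustration of what that price drop means for a typical project (the corpus size, the 0.75 words-per-token rule of thumb, and the $0.07 “few cents” rate are assumptions made for the sake of the example):

```python
# Back-of-the-envelope cost comparison for AI-assisted analysis of an
# interview corpus. Corpus size, the ~0.75 words-per-token rule of thumb,
# and the $0.07 "few cents" rate are illustrative assumptions.
WORDS_PER_TOKEN = 0.75       # rough rule of thumb for English text
interviews = 50              # hypothetical study
words_per_interview = 6_000  # a long interview transcript

total_tokens = interviews * words_per_interview / WORDS_PER_TOKEN

for label, usd_per_million in [("late 2022", 20.00), ("mid-2024", 0.07)]:
    cost = total_tokens / 1_000_000 * usd_per_million
    print(f"{label}: ~{total_tokens:,.0f} tokens -> ${cost:.2f}")
# late 2022: ~400,000 tokens -> $8.00
# mid-2024: ~400,000 tokens -> $0.03
```

Even a generously sized corpus, in other words, now costs pennies rather than dollars to process once.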
This growth is accompanied by sheer volume: AI research output has roughly tripled since 2013, and model training demands keep rising (Stanford HAI, 2025a). For qualitative researchers, this landscape means operating amid ever more capable tools, from text analyzers to multimedia content generators, while facing new challenges of keeping research practices sustainable and ethically sound in the face of AI’s rapid expansion.
Qualitative data analysis software: GenAI feature timeline
All major QDA platforms (ATLAS.ti 25, MAXQDA 24/Tailwind, NVivo 15.2) now embed GenAI for summarization and code suggestions. ATLAS.ti and MAXQDA 24/Tailwind offer conversational AI enquiry, which encourages reflection but risks over-reliance if used uncritically. ATLAS.ti features “intentional AI coding” with contextual prompts, though some users may find the volume of suggestions requires adjustment. NVivo 15.2 focuses on transparency: subcodes are offered in digestible batches, with text excerpts shown for each code.
Taken together, these GenAI enhancements promise efficiency, but realizing their value demands that researchers critically vet all AI-generated suggestions within a broader interpretive framework.
AI and qualitative research: Emerging uses in academia
Amid this AI boom, pioneering qualitative researchers have begun experimenting with GenAI tools such as ChatGPT and GPT-4 for data analysis. Christou (2023a, 2023b, 2025a, 2025b) has written a series of critical-perspective articles on AI in qualitative analysis, urging researchers to treat AI as an analytic partner in which prompt refinement and vetting are essential to prevent biased or fictitious ‘findings’. In a grounded theory pilot study, Sinha et al. (2024) found GPT-4 helpful for uncovering overlooked codes, though they cautioned that over-reliance on AI can undermine deep immersion in the data.
Researchers bring their own disciplinary training and values to prompt design and interpretation. By keeping a reflexive journal that logs why certain AI-generated themes were adopted or rejected—documenting our own assumptions about what constitutes a ‘theme’—we preserve transparency and ensure that the final analytic narrative reflects human judgment, not just algorithmic patterns.
Morgan’s early ChatGPT study (2023) revealed AI’s strength in descriptive auto-coding, but by 2025 he advocated a query-based approach that bypasses line-by-line coding, positioning AI as a dialogic tool rather than a mechanistic coder. Hayes (2025) adopts a hybrid approach, combining thematic coding in GPT-4 and Claude 3.5 with iterative dialogic prompts to deepen interpretive insights while cautiously evaluating outputs.
Nguyen-Trung and Nguyen (2025) propose Narrative-Integrated Thematic Analysis (NITA), which blends high-level theme identification with AI-assisted narrative profiles. NITA emphasizes evolving narratives over discrete coding, creating a synthesis that balances traditional analysis with new AI capabilities. Such approaches hint at a paradigm shift: from coding text to conversing with text via an AI intermediary.
Friese (2025) has no hesitation in claiming that coding “may now be effectively replaced by AI-supported data retrieval combined with dialogue-based interpretation”. She has outlined a new method, Conversational Analysis with AI (CAAI), and envisions a five-stage workflow: begin familiarization with AI-generated summaries of the data, scaffold the analysis with researcher-crafted question sets, engage in focused AI dialogue (4 to 6 interviews at a time), synthesize the dialogic insights, and elevate the analysis by testing theoretical hypotheses with the AI. At each stage, the human researcher remains central, refining AI outputs to preserve nuance and context.
Conversational Analysis with AI (CAAI) transforms qualitative research by prioritizing interpretation over categorization, positioning researchers as facilitators of analytic dialogue rather than mere coders. (Friese, 2025)
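To give a flavor of what the dialogue stages might look like in practice, here is a minimal sketch, not Friese’s implementation: it assumes the OpenAI Python client, and the model name and scaffolding questions are invented for illustration.

```python
# Illustrative sketch of CAAI stages 2-3: researcher-crafted question sets
# applied in focused dialogue over a small batch of interviews. Not Friese's
# implementation; the OpenAI client, model name, and questions are assumptions.
from openai import OpenAI

client = OpenAI()  # expects OPENAI_API_KEY in the environment

QUESTION_SET = [  # stage 2: researcher-crafted questions (hypothetical)
    "What does this participant say about trust in AI tools?",
    "Where does the participant express ambivalence, and about what?",
]

def focused_dialogue(interviews: list[str]) -> list[dict]:
    """Stage 3: pose each scaffolding question to 4-6 interviews at a time."""
    notes = []
    for i, transcript in enumerate(interviews[:6], start=1):
        for question in QUESTION_SET:
            response = client.chat.completions.create(
                model="gpt-4o",  # assumed model name
                messages=[
                    {"role": "system", "content": (
                        "Answer only from the interview text and quote "
                        "verbatim passages as evidence.")},
                    {"role": "user", "content": (
                        f"Interview {i}:\n{transcript}\n\nQuestion: {question}")},
                ],
            )
            notes.append({"interview": i, "question": question,
                          "answer": response.choices[0].message.content})
    return notes  # stages 4-5: the researcher synthesizes and tests these
```

Instructing the model to quote verbatim passages keeps each answer traceable to the raw data, which is exactly the kind of refinement and verification the human researcher supplies.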
Figure 2 shows how I used the summarization feature in NVivo 15 to begin familiarizing myself with a set of interviews. The summaries in this memo are transparently labelled as AI-generated; issues I identified as interesting and needing follow-up are in bold, and my initial views are labelled with my initials.

AI and qualitative research: Emerging uses in industry
So far, this discussion has focused on the use of GenAI to support the analysis of unstructured text. Von der Heyde et al. (2025) report in their systematic review that LLM usage in survey research exploded from 1.6% in 2023 to over 59% in 2024, especially for instrument development, synthetic respondents, and text classification. As Figure 3 illustrates, AI-driven survey design and silicon-respondent pilots are now routine in many industry projects.

A new generation of GenAI platforms—e.g., CoLoop, Flowr, Qualzy, Qualzai, CloudResearch’s Engage, BoltChatAI—automates tasks from generating survey questions and interview guides to deploying synthetic respondents for pilot testing and AI-driven interviewing; some even produce complete analytic reports with graphics and personas, enabling qualitative inquiry at unprecedented scale.
In industry, market and UX researchers have been quicker to adopt GenAI for focus-group analysis, social listening, and customer feedback (The Market Research Society, 2025). While risk tolerance can be higher under tight deadlines, professional bodies still stress data privacy and bias mitigation. Academics, in turn, are borrowing audit and trustworthiness techniques from market research, such as employing third-party auditors, while industry adopts member-checking and transparency norms from academia. Ultimately, academic and industry contexts converge around responsible AI use: theory-driven rigor meets efficiency-driven pragmatism.
Practical implications
As GenAI shifts code-generation and analytic narratives toward automation, the researcher’s role pivots to curation, contextual verification, and ethical oversight (Christou, 2025b; D. L. Morgan, 2023). New workflows might let AI perform a first-pass coding, followed by human review—annotating codes with theoretical notes, checking for bias, and refining themes. In practice, this frees researchers from mechanical tasks but demands vigilance: any AI-curated code must be traced back to raw data, ensuring the human voice remains the ultimate arbiter of meaning.
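One minimal shape such a workflow could take is sketched below (the data structure and interactive review loop are illustrative assumptions, not a prescribed method): AI-suggested codes enter a “pending” state, and nothing joins the codebook until a human accepts, edits, or rejects it, attaching a memo and retaining the verbatim excerpt for traceability.

```python
# Sketch of a first-pass AI coding workflow with mandatory human review.
# The dataclass and interactive loop are illustrative assumptions: AI-suggested
# codes stay "pending" until a human accepts, edits, or rejects each one, and
# every accepted code keeps its verbatim excerpt for traceability.
from dataclasses import dataclass

@dataclass
class CodeSuggestion:
    code: str                # proposed code label (e.g., from an LLM first pass)
    excerpt: str             # verbatim text the code is grounded in
    source: str              # transcript or document identifier
    status: str = "pending"  # pending -> accepted | rejected
    memo: str = ""           # reviewer's theoretical note / bias check

def review(suggestions: list[CodeSuggestion]) -> list[CodeSuggestion]:
    """Human pass: nothing enters the codebook unreviewed."""
    accepted = []
    for s in suggestions:
        print(f"\n[{s.source}] proposed code: {s.code!r}\n  excerpt: {s.excerpt}")
        decision = input("accept / reject / edit? ").strip().lower()
        if decision == "edit":
            s.code = input("revised code label: ").strip()
            decision = "accept"
        if decision == "accept":
            s.status = "accepted"
            s.memo = input("memo (theoretical note, bias check): ")
            accepted.append(s)
        else:
            s.status = "rejected"
    return accepted
```

The design point is simply that the AI never writes to the codebook directly; the human reviewer remains the gatekeeper of meaning.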
Institutional and publisher policies
The rapid rise of GenAI has prompted universities and publishers to issue guidelines to ensure these tools are used responsibly in research.
An et al. (2025) find that while 94% of top U.S. universities issue AI guidelines for teaching, fewer than 20% address researcher use, leaving scholars largely responsible for navigating privacy, data protection, and ethical use. Ganguly et al. (2025) echo this: R1 institutions typically allow GenAI but place the onus on researchers to learn evolving regulations, document AI usage, and disclose it in publications. Figure 4 (from Ganguly et al., 2025) outlines the key considerations: legal risks, data safeguards, and transparency. Meanwhile, Ganjavi et al. (2024) note that major journals permit AI-assisted text but uniformly forbid listing AI as an author; authors must take full responsibility for any AI-derived content.

Transparency is emerging as a key theme: universities emphasize that using GenAI is acceptable, but hiding its use is not. “From a researcher perspective, we found that as currently framed, the guidelines place a high burden on researchers to learn about and comply with the different rules, and regulations about using GenAI for research. This is not dissimilar from research integrity concerns that researchers have to consider in any case but the complication here, from the perspective of researchers, is lack of clarity around the use of GenAI and an overall lack of transparency related to how technology works.” (Ganguly et al., 2025)
Many journals now require a brief statement (sometimes called an “AI disclosure” or “AI acknowledgments”) when significant portions of text or analysis were AI-assisted. Nature journals and others, for instance, have editorial policies stating that AI-produced text must be attributed in the methods or acknowledgments sections, never in the author byline.
In sum, the institutional and publisher stance as of 2024-2025 is: embrace the opportunities but do so transparently and with accountability. Researchers using AI in qualitative projects should document their process, double-check AI contributions, and be upfront in publications about the role AI played. By doing so, they not only adhere to emerging norms but also contribute to the field’s collective learning on best practices.
Ethics and responsibility
Qualitative data often contain sensitive information, so de-identification protocols (e.g., replacing names with codes, removing locational markers) are essential before submitting transcripts to any cloud-based GenAI; locally run models offer a privacy-preserving alternative. Because AI can hallucinate plausible but false “participant quotes,” researchers must cross-check every AI-generated summary or code against the raw data to maintain trustworthiness. Authorship norms clearly state that AI cannot claim authorship (Porsdam Mann et al., 2024); human researchers bear full responsibility for any AI-assisted content. On the environmental front, large generative models consume substantial electricity and water (U.S. Government Accountability Office, 2025), so some scholars opt for smaller models or shared infrastructure to ease carbon footprints. As oversight evolves, IRBs and professional associations are updating protocols, and researchers must articulate data-protection measures and AI validation plans in their proposals. Ultimately, ethical GenAI use demands ongoing reflexivity, community dialogue, and transparency at every stage.
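To make the de-identification step above concrete, here is a minimal sketch (the name list is project-specific, and the regex patterns are simple illustrations that will miss indirect identifiers, so a human pass remains essential):

```python
# Minimal de-identification sketch: replace known names and simple patterns
# with stable pseudonym codes before any text leaves the researcher's machine.
# The name list is project-specific; the regexes only catch obvious emails and
# phone numbers, so a human check is still required.
import re

def build_pseudonymizer(known_names: list[str]):
    mapping = {name: f"P{i + 1:03d}" for i, name in enumerate(known_names)}

    def pseudonymize(text: str) -> str:
        for name, code in mapping.items():
            text = re.sub(rf"\b{re.escape(name)}\b", code, text)
        # crude patterns for emails and phone numbers (illustrative only)
        text = re.sub(r"\b[\w.+-]+@[\w-]+\.[\w.]+\b", "[EMAIL]", text)
        text = re.sub(r"\b(?:\+?\d[\d\s().-]{7,}\d)\b", "[PHONE]", text)
        return text

    return pseudonymize, mapping  # keep the mapping offline; never upload it

clean, key = build_pseudonymizer(["Maria Lopez", "Dr. Chen"])
print(clean("Maria Lopez said Dr. Chen called her at 0161 496 0000."))
# -> "P001 said P002 called her at [PHONE]."
```

The mapping (the “key”) stays offline so findings can later be re-linked to participants only where the protocol allows it.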
Challenges and limitations of AI in research
GenAI excels at surface-level pattern recognition but lacks deep contextual understanding or reflexivity. As D. L. Morgan (2023) found, chatbots identify straightforward themes but often miss subtler, latent meanings—metaphors or power dynamics that only human analysts detect. AI may regurgitate common tropes drawn from its training corpus, inadvertently marginalizing less frequent voices. Moreover, models sometimes hallucinate—fabricating quotes or blending multiple participants’ responses into a composite that doesn’t exist.
Without a researcher’s critical eye, these missteps could lead to erroneous findings. Unlike a human coder who might ask, “Are we missing a divergent viewpoint?,” AI won’t raise that question unless prompted. Bias is another concern: if training data reflect societal stereotypes, AI outputs may perpetuate them, under-representing minority perspectives. Finally, while AI can apply a deductive codebook consistently, it cannot decide which codes matter most in context. In short, AI’s speed and breadth are valuable, but qualitative rigor ultimately hinges on human judgment for depth, nuance, and interpretive richness.
Best practices and future directions
Maintaining transparency is paramount: save all prompts, raw AI outputs, and researcher notes in a searchable audit trail (e.g., using NVivo or a shared spreadsheet). Whenever AI suggests themes, compare them against manual coding of a subset, calculate inter-coder agreement, and refine prompts iteratively. Involving participants (member checking) can further validate AI-derived findings. Researchers and students must gain AI literacy—attending workshops on prompt engineering, bias mitigation, and data protection. Institutions should provide clear guidelines (Smith et al., 2024) to ensure responsible use rather than ad hoc experimentation.
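For the agreement check, Cohen’s kappa is one common statistic; below is a minimal sketch, assuming scikit-learn is installed and using toy labels in place of real codes:

```python
# Compare AI-suggested codes with a human's manual coding of the same segments
# using Cohen's kappa (scikit-learn). Segments and labels are toy examples.
from sklearn.metrics import cohen_kappa_score

segments = ["seg01", "seg02", "seg03", "seg04", "seg05", "seg06"]
human_codes = ["trust", "privacy", "trust",   "cost", "privacy", "trust"]
ai_codes    = ["trust", "privacy", "privacy", "cost", "privacy", "trust"]

kappa = cohen_kappa_score(human_codes, ai_codes)
print(f"Cohen's kappa over {len(segments)} segments: {kappa:.2f}")

# Log the disagreements for the audit trail and for prompt refinement.
for seg, h, a in zip(segments, human_codes, ai_codes):
    if h != a:
        print(f"{seg}: human={h!r} ai={a!r} -> review and refine prompt")
```

Low agreement signals that the prompt, the codebook definitions, or both need refinement before scaling the AI pass to the full dataset.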
Looking ahead, qualitative software may allow researchers to fine-tune or train GenAI models on their own datasets, preserving confidentiality while offering project-specific insights (Nguyen-Trung & Nguyen, 2025). NITA exemplifies this by designing customized AI “colleagues” that understand a study’s context. Meanwhile, professional bodies should issue “living” guidelines: periodically updated checklists for AI disclosures, IRB protocols, and reviewer criteria (franzke et al., 2020; The Market Research Society, 2025). Ultimately, the marriage of rigorous methods with innovative AI tools can enrich qualitative inquiry, if we remain vigilant stewards of both data and interpretation.
Conclusion
By 2025, GenAI has transitioned from siloed labs into mainstream qualitative research, thanks to robust performance gains and affordable access. Pioneers such as Christou, Friese, Hayes, Morgan, and Sinha and colleagues demonstrate that AI can expedite coding and uncover new insights, but only if researchers remain transparent, critically evaluate all outputs, and preserve human reflexivity.
Both academia and industry increasingly converge on responsible AI use—disclosing assistance, safeguarding privacy, and mitigating bias. As models evolve, qualitative scholars must treat AI itself as an object of inquiry: pilot small-scale tests, keep thorough audit trails, and share successes and failures openly. In doing so, we can harness GenAI’s benefits without sacrificing the depth, context, and empathy that define qualitative inquiry.
Learn more about how Lumivero is partnering with researchers to shape the future of qualitative analysis in “Navigating the AI disruption in research.”
Silvana di Gregorio, PhD, Product Research Director and Head of Qualitative Research at Lumivero
Silvana di Gregorio, PhD, is a sociologist and former academic with a PhD in Social Policy from the London School of Economics. She has been training, consulting, and publishing about qualitative data analysis software since 1995, and for 16 years ran her own training and consulting business, SdG Associates. She is the author of “Voice to Text: Automating Transcription” in Vanover, C., Mihas, P., & Saldaña, J. (Eds.), Analyzing and Interpreting Qualitative Data: After the Interview (Sage Publications), and “Using Web 2.0 Tools for Qualitative Analysis” in Hine, C. (Ed.), Virtual Research Methods, Volume 4 (Sage Publications). With Judith Davidson, she co-authored Qualitative Research Design for Software Users (Sage Publications) and “Qualitative Research and Technology: In the Midst of a Revolution” in Denzin, N., & Lincoln, Y. (Eds.), Handbook of Qualitative Research (4th ed., Thousand Oaks: Sage); with Linda Gilbert and Kristi Jackson, she co-authored “Tools for Qualitative Analysis” in Spector, J. M., Merrill, M. D., & Elen, J. (Eds.), Handbook of Research on Educational Communications and Technology. She is part of the Product Team at Lumivero.
References
An, Y., Yu, J. H., & James, S. (2025). Investigating the higher education institutions’ guidelines and policies regarding the use of generative AI in teaching, learning, research, and administration. International Journal of Educational Technology in Higher Education, 22(1). https://doi.org/10.1186/s41239-025-00507-3
Christou, P. (2023a). The Use of Artificial Intelligence (AI) in Qualitative Research for Theory Development. The Qualitative Report, 28(9), 2739–2754. https://doi.org/10.46743/2160-3715/2023.6406
Christou, P. (2023b). How to Use Artificial Intelligence (AI) as a Resource, Methodological and Analysis Tool in Qualitative Research? The Qualitative Report, 28(7), 1968–1980.
Christou, P. (2025a). Looking Beyond Numbers in Qualitative Research: From Data Saturation to Data Analysis. The Qualitative Report. Advance online publication. https://doi.org/10.46743/2160-3715/2025.7560
Christou, P. (2025b). Reliability and Validity in Qualitative Research Revisited and the Role of AI. The Qualitative Report, 30(3), 3306–3314. https://doi.org/10.46743/2160-3715/2025.7523
franzke, a. s., Bechmann, A., Zimmer, M., Ess, C., & the Association of Internet Researchers. (2020). Internet Research: Ethical Guidelines 3.0. Association of Internet Researchers. https://aoir.org/reports/ethics3.pdf
Friese, S. (2025, May 7). Conversational Analysis with AI (CA to the Power of AI): Rethinking Coding in Qualitative Analysis. https://doi.org/10.2139/ssrn.5232579
Ganguly, A., Johri, A., Ali, A., & McDonald, N. (2025). Generative Artificial Intelligence for Academic Research: Evidence from Guidance Issued for Researchers by Higher Education Institutions in the United States. AI and Ethics. https://doi.org/10.1007/s43681-025-00688-7
Ganjavi, C., Eppler, M. B., Pekcan, A., Biedermann, B., Abreu, A., Collins, G. S., Gill, I. S., & Cacciamani, G. E. (2024). Publishers' and journals' instructions to authors on use of generative artificial intelligence in academic and scientific publishing: Bibliometric analysis. BMJ, 384, e077192. https://doi.org/10.1136/bmj-2023-077192
Stanford Institute for Human-Centered Artificial Intelligence [Stanford HAI]. (2025a). AI Index Report 2025, Chapter 1: Research and Development. https://hai-production.s3.amazonaws.com/files/hai_ai-index-report-2025_chapter1_final.pdf
Stanford Institute for Human-Centered Artificial Intelligence [Stanford HAI]. (2025b). AI Index Report 2025, Chapter 2: Technical Performance. https://hai-production.s3.amazonaws.com/files/hai_ai-index-report-2025_chapter2_final.pdf
Hayes, A. S. (2025). “Conversing” With Qualitative Data: Enhancing Qualitative Research Through Large Language Models (LLMs). International Journal of Qualitative Methods, 24, Article 16094069251322346. https://doi.org/10.1177/16094069251322346
von der Heyde, L., Buskirk, T. D., Eck, A., & Keusch, F. (2025). GPT pretend you are a survey researcher: Results from a Systematic Literature Review Exploring the Use of Large Language Models within Survey Research. Paper presented at the AAPOR 80th Annual Conference, St. Louis, MO.
Kühl, N., Goutier, M., Hirt, R., & Satzger, G. (2020). Machine Learning in Artificial Intelligence: Towards a Common Understanding. https://doi.org/10.48550/arXiv.2004.04686
The Market Research Society. (2025). MRS Guidance on Using AI and Related Technologies. https://www.mrs.org.uk/pdf/AI+RelatedTechnologies_MRSGuidance_April2025.pdf
Morgan, D. (2025). Query-Based Analysis: A Strategy for Analyzing Qualitative Data Using ChatGPT.
Morgan, D. L. (2023). Exploring the Use of Artificial Intelligence for Qualitative Data Analysis: The Case of ChatGPT. International Journal of Qualitative Methods, 22, Article 16094069231211248. https://doi.org/10.1177/16094069231211248
Nguyen-Trung, K., & Nguyen, N. L. (2025). Narrative-Integrated Thematic Analysis (NITA): AI-Supported Theme Generation Without Coding. SocArXiv. Advance online publication. https://doi.org/10.31219/osf.io/7zs9cv1
Porsdam Mann, S., Vazirani, A. A., Aboy, M., Earp, B. D., Minssen, T., Cohen, I. G., & Savulescu, J. (2024). Guidelines for ethical use and acknowledgement of large language models in academic writing. Nature Machine Intelligence, 6(11), 1272–1274. https://doi.org/10.1038/s42256-024-00922-7
Sengar, S. S., Hasan, A. B., Kumar, S., & Carroll, F. (2024). Generative Artificial Intelligence: A Systematic Review and Applications. https://doi.org/10.48550/arXiv.2405.11029
Sinha, R., Solola, I., Nguyen, H., Swanson, H., & Lawrence, L. (2024). The Role of Generative AI in Qualitative Research: GPT-4's Contributions to a Grounded Theory Analysis. https://doi.org/10.1145/3663433.3663456
Smith, S., Tate, M., Freeman, K., Walsh, A., Ballsun-Stanton, B., Hooper, M., & Lane, M. (2024). A University Framework for the Responsible Use of Generative AI in Research. https://arxiv.org/html/2404.19244v1
U.S. Government Accountability Office. (2025). Artificial Intelligence: Generative AI's Environmental and Human Effects (GAO-25-107172). https://www.gao.gov/assets/gao-25-107172.pdf