Conversational AI in ATLAS.ti is most effective when guided by clear analytical intent, rich contextual prompts, and iterative follow-up questions. By designing prompts that focus on patterns, comparison, and triangulation—and by saving AI-generated insights as memos—researchers can turn conversation into a rigorous, transparent part of qualitative analysis without sacrificing methodological control.
Conversational AI in ATLAS.ti offers qualitative researchers a new way to engage with their data—by asking questions in natural language and receiving analytical responses grounded directly in project documents. Rather than replacing close reading or coding, it works best as an analytical partner: one that helps you notice patterns, test interpretations, and think across large collections of qualitative material.
Like any analytical tool, the value you get from conversational AI depends heavily on how you use it. Well-designed prompts can support comparison, interpretation, and reflexivity. Vague prompts, on the other hand, tend to produce vague results.
This article outlines practical strategies for designing prompts that strengthen qualitative analysis, using an ethnographic study of coffee shops as a running example.
Conversational AI analyzes the documents in your ATLAS.ti project and generates responses based primarily on that content. To use this feature, researchers first import their project materials—such as articles, interview transcripts, or field notes—into an ATLAS.ti project. The AI then works directly with those selected documents: researchers can chat with multiple documents at once and ask questions grounded in their own data rather than external sources.
AI-generated responses are explicitly tied back to the underlying data. Each response includes direct links to the relevant quotations, making it easy to verify interpretations and return to the original context. This traceability is critical for qualitative rigor, ensuring that AI-supported insights remain transparent, inspectable, and open to researcher judgment rather than operating as a black box.
Importantly, useful AI responses can be saved as memos. This allows insights that emerge through conversation to become part of the project’s analytic record, supporting reflexivity and auditability over time.
Conversational AI can also suggest possible themes and preliminary or provisional codes based on patterns it identifies across documents. These suggestions are especially useful during early-stage analysis, when researchers are orienting themselves in the data and beginning to explore analytic directions—while still retaining full control over what is accepted, revised, or discarded.
Why this matters for qualitative analysis:

- Responses stay linked to the quotations behind them, so interpretations can be verified in their original context.
- Useful responses can be saved as memos, building a transparent and auditable analytic record.
- Suggested themes and provisional codes can speed early exploration while the researcher retains full control over what is kept, revised, or discarded.
Effective prompts reflect what you are trying to do analytically—not just what you are curious about.
Imagine you have collected observational data from multiple coffee shop environments: a busy chain location in a city center and several independent cafes in quieter neighborhoods. The volume of data is substantial, and the interactions vary widely. Conversational AI can help you navigate this complexity—but only if you give it a clear analytical task.
Compare the difference between these prompts:

- Tell me about my data.
- How do staff–customer interactions differ between the chain location and the independent cafes?
- Which settings in my field notes show non-transactional conversations between strangers?
The first is open-ended and ambiguous. The others signal a specific analytical goal. Clear intent helps the AI identify more relevant patterns and comparisons.
Examples of analytically focused prompts:

- What themes recur across the field notes from the independent cafes?
- How do interactions between strangers differ by setting?
- Where do interview accounts diverge from what I observed?
The goal is not to ask more questions—but to ask better ones.
As a general rule, the more relevant context you provide, the more focused and useful the AI’s response will be.
Consider this prompt: Identify the most important events in my field notes.
“Most important” is ambiguous. Without context, the AI has little guidance on what matters for your study. Now compare it with a more contextualized version:
I conducted an observational study of different coffee shop environments to understand how social interactions differ by setting and neighborhood. I observed a chain coffee shop during rush hour and several independent cafes in less crowded areas. I am particularly interested in whether independent cafes are more conducive to non-transactional conversations. Based on this context, what are the most relevant events in my field notes?
This added detail does two things:

- It tells the AI the purpose of the study and which comparison matters.
- It defines what "important" means in this project: events relevant to non-transactional conversation across settings.
While providing context takes time, it consistently results in richer, more analytically useful insights.
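The context-first pattern above is, at its core, structured string assembly: study background first, then the specific setting details, then the analytical question. The sketch below is a generic illustration of that pattern, not part of ATLAS.ti's interface; the function name `build_prompt` and its parameters are assumptions made for this example.

```python
def build_prompt(study_context: str, settings: list[str], question: str) -> str:
    """Assemble a context-first prompt: study background, observed settings,
    then the analytical question last."""
    settings_line = "Settings observed: " + "; ".join(settings)
    return "\n".join([study_context, settings_line, question])


# Usage: the contextualized coffee shop prompt from the running example.
prompt = build_prompt(
    study_context=(
        "I conducted an observational study of coffee shop environments to "
        "understand how social interactions differ by setting and neighborhood."
    ),
    settings=[
        "chain coffee shop during rush hour",
        "independent cafes in quieter neighborhoods",
    ],
    question=(
        "I am particularly interested in whether independent cafes are more "
        "conducive to non-transactional conversations. Based on this context, "
        "what are the most relevant events in my field notes?"
    ),
)
print(prompt)
```

Keeping the background and the question in separate parameters makes it easy to reuse the same study context across many analytical questions as the analysis develops.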
Early in analysis, prompts often support sense-making and familiarization. Conversational AI can help summarize patterns across documents without replacing your own reading.
A useful orientation prompt might be:
Across all field notes, how do customers typically interact with staff in different coffee shop settings?
The AI might surface contrasts such as brief, transactional exchanges in the chain location versus longer conversations in independent cafes—each supported by linked excerpts from your data.
Orientation prompts work best when they:

- Ask about patterns across all documents rather than a single file.
- Stay descriptive ("how do customers typically interact?") rather than evaluative.
- Leave room for the AI to surface contrasts you have not yet coded.
Comparative analysis is where conversational AI becomes especially powerful. Clear prompts can surface similarities and differences across sites, roles, or contexts.
For example:
How do customer–customer interactions differ between the chain coffee shop and the independent cafes?
This prompt encourages comparison rather than summary. It may reveal minimal interaction among strangers in the chain location compared to spontaneous conversations at communal tables in independent cafes—each supported by specific quotations.
Tips for comparative prompts:

- Name the sites, groups, or settings you want compared.
- Specify the dimension of comparison, such as interaction length, tone, or participants.
- Ask for supporting quotations so each contrast can be checked against the data.
Conversational AI can also support triangulation by connecting observational data with interview accounts.
The prompt below encourages the AI to move across document types:
How do interview participants’ descriptions of interaction align with what is observed in the field notes?
In the coffee shop study, interviews may reinforce observational patterns—such as efficiency in chain cafes and community in independent ones—while also highlighting tensions or contradictions.
This approach is particularly useful for identifying:

- Convergence, where interviews and observations tell the same story.
- Contradictions between what participants say and what was observed.
- Gaps where one data type raises questions the other can answer.
One of the strengths of conversational AI is iteration. Initial responses can be refined through follow-up prompts that push analysis further.
Examples include:

- Which quotations contradict this pattern?
- Does this pattern hold across different times of day?
- How do staff members, rather than customers, describe these interactions?
Each follow-up prompt narrows your analytical focus and helps test emerging ideas against the data.
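One way to keep follow-ups systematic is to ask the same narrowing questions of every emerging finding: what contradicts it, where it breaks down, and what might explain it. The sketch below illustrates that habit with hypothetical templates; none of these names come from ATLAS.ti itself.

```python
# Hypothetical follow-up templates for probing an initial finding;
# each one narrows the analysis rather than restating it.
FOLLOW_UP_TEMPLATES = {
    "disconfirm": "Which quotations contradict the pattern that {finding}?",
    "boundary": "Under what conditions does the pattern that {finding} not hold?",
    "mechanism": "What in the field notes might explain why {finding}?",
}


def follow_ups(finding: str) -> list[str]:
    """Turn one emerging finding into a set of narrowing follow-up prompts."""
    return [t.format(finding=finding) for t in FOLLOW_UP_TEMPLATES.values()]


# Usage: probe a finding from the coffee shop comparison.
for q in follow_ups("interactions are briefer in the chain location"):
    print(q)
```

Applying the same three probes to each finding builds a consistent, documentable habit of testing interpretations rather than simply collecting confirmations.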
When conversational AI generates a useful synthesis or raises a promising interpretation, saving that response as a memo helps keep your analysis organized.
These memos can document:

- Emerging themes and the evidence behind them.
- Interpretations you want to test through further coding.
- Contradictions or open questions to revisit later.
Used this way, conversational AI becomes part of your memo-writing practice—not a detached or opaque tool.
Certain prompt patterns tend to limit analytical value:

- Vague requests ("Tell me about my data") with no analytical goal.
- Questions that omit study context, leaving terms like "important" or "interesting" undefined.
- Overly broad prompts that invite summary rather than comparison or interpretation.
When responses feel thin or generic, revising the prompt—making it narrower, more specific, or more analytical—almost always leads to better results.
Conversational AI works best when treated as a space for analytical exploration. Thoughtful prompt design helps surface patterns, comparisons, and interpretations, while linked data segments and saved memos keep analysis transparent and grounded.
At the same time, ATLAS.ti’s conversational AI is designed to operate within the project environment itself. Project data remains inside the ATLAS.ti system, supporting standard best practices for working with sensitive or confidential data and allowing researchers to explore AI-supported analysis while maintaining appropriate data stewardship and governance.
For both novice and experienced qualitative researchers, learning to design effective prompts becomes a new analytical skill—one that complements close reading, coding, and theory-building rather than replacing them.
Ready to explore your qualitative data with greater clarity and confidence? Discover how ATLAS.ti and its conversational AI features can support deeper analysis and more transparent research workflows.