Author: Christina Silver, Ph.D., Associate Professor (Teaching) & Director of the CAQDAS Networking Project
At the recent Lumivero conference, several presentations discussed the role of artificial intelligence (AI) in qualitative, quantitative, and mixed-methods research. There were many fascinating and engaging presentations across the conference days, but one I found particularly interesting, in the context of my current work at the CAQDAS Networking Project (CNP) on the relationship between qualitative software and analytic methods, was by Silvana di Gregorio, Ph.D.
Di Gregorio's presentation was titled Disruption and the Rise of AI: Exploring the Role of Technology in Qualitative Research, and she shared the findings from her recent research into how qualitative researchers perceive AI and its role in qualitative analysis. Di Gregorio is a highly experienced qualitative researcher and methodologist who has been deeply involved in the use of technology for qualitative analysis for many years. She’s incredibly well-respected in the field and is someone I’ve known personally for many years, so when I saw she was going to be speaking about the rise of AI in qualitative research there was no way I was going to miss the presentation!
In this article, we’ll discuss di Gregorio’s presentation and dig into AI’s role in data analysis, content analysis, and qualitative research.
This sketchnote, created by blog author Christina Silver, summarizes the key points of di Gregorio’s presentation at the 2023 Lumivero Virtual Conference.
Artificial intelligence (AI) is a term that's been around a long time and is understood in a range of sometimes quite different and contested ways. With respect to qualitative analysis, I prefer a broad definition that encompasses the range of relevant technologies, of which there are many. This definition from Cornell University captures the breadth of technologies often contained within the term:
“Artificial intelligence or AI is the use of machine learning technology, software, automation, and algorithms (the automated computational application of rules) to perform tasks, to make rules and/or predictions based on existing datasets and instructions.” (Cornell Law School, Artificial Intelligence (AI))
Since generative-AI tools like ChatGPT and Google's Bard became widely available, there's been a lot of what I've elsewhere called "hoo-ha" about the impacts on qualitative data analysis and the use of digital tools, with disparate views ranging from horror to band-wagoning (see my series of posts on AI in qualitative research).
And that's why di Gregorio's presentation of her research on perceptions amongst qualitative researchers really piqued my interest.
Di Gregorio framed her presentation around how the rise of generative-AI is causing disruption in the field of qualitative research, but she was quick to emphasise that this form of AI is neither the first nor the only form that has impacted how we go about doing qualitative analysis with the support of computers, a point I've also written about.
To illustrate this, di Gregorio recounted a presentation she gave in 2020 discussing four types of AI that impact qualitative research: Natural Language Processing; Speech; Vision; and Admin Assistants.
Fig. 1 From di Gregorio, S. (2020) Can AI help you? Leveraging your human skills in a digital world, paper presented at QRCA Annual Conference: Keep Qual Human, 29-31 January, Austin, TX.
What's happening now, though, is causing much more disruption because of the ubiquity of generative-AI and how it's being reported on and discussed – particularly in the media, but also in qualitative research circles.
"It's a disruption that no one can ignore because it's become sort of the rage. It's in the media all the time. There's a lot of fears about it. There's also a lot of excitement about it.” - Silvana di Gregorio
The AI tools that have been available in qualitative software for a while (and in some cases decades) didn't receive the level of attention that the generative-AI tools are receiving right now. And that's at least in part because they didn't infiltrate into our everyday lives in the same way.
Di Gregorio rightly reminded us that disruptions are common in the development of research practice, and that disruption of any kind actually forces us to rethink and take stock of what we do, and how we do it. This is of value in moving both methodology and technology forward; disruptions have their challenges, yes, but also often bring welcome advances when they're seen as opportunities.
Di Gregorio used Covid-19 as an example of a recent major disruption to how qualitative research practice happens, particularly in how data could be collected during lockdown. The effect this has had on methodological discussions about the role of digital tools in qualitative analysis, and about online research methods more generally, is I believe a good thing. Online data collection and other research practices were well-established in some quarters pre-Covid (for example, the first edition of Janet Salmons' excellent textbook Doing Qualitative Research Online was published in 2016, with the second edition coming out in 2021).
However, many qualitative researchers had perhaps settled into more customary, offline practices pre-Covid out of habit. There is no doubt that Covid forced everyone to consider online methods in unprecedented ways, and that is great for the development of methods. We also saw an upsurge in the availability of tools to support the full range of research needs, for example developments in automated transcription tools, data collection and communications tools like Zoom and Microsoft Teams, and collaboration tools like Google Drive and Google Docs.
In the first series of podcasts I've recorded on computer-assisted qualitative analysis (CAQDAS chat with Christina), several of my guests spoke about generative-AI and its methodological implications. Founders of the CAQDAS Networking Project, Professors Nigel Fielding and Ray Lee, recounted in my first podcast episode the disruption that the initial advent of qualitative software brought. They recall disruption being clearly evident at the first conference on the topic, which they organised in 1989, with researchers' views on the opportunities and dangers being starkly divergent. The parallels with what's happening now, as some qualitative software programs harness generative-AI in qualitative methods, aren't lost on them, di Gregorio, me, or many of the qualitative methodologists I've spoken to.
In her presentation, di Gregorio recounted findings from her research into what qualitative researchers think about generative-AI, and this formed the bulk of her talk last month. During the summer of 2023, she gathered thoughts from researchers working in different sectors with varying amounts of qualitative research experience. In her first survey, she was keen to hear from researchers regardless of whether they'd used generative-AI or not.
To me, this was a great way to uncover perceptions because the full range of opinions, from the advocates of these new technologies to the sceptics, are relevant whether they're grounded in the experience of using these tools or not. This is important because perception of technology has an impact on the field of computer-assisted qualitative data analysis.
For example, when it comes to how we teach qualitative methods and tools at college and university, the perceptions of the faculty doing the teaching are of fundamental influence. Where courses are taught by faculty who do not use qualitative software like NVivo themselves, a complete absence of discussion or tutelage is not uncommon. The perceptions of those who are teaching current and future generations of qualitative researchers can shape the landscape. Given the rapid development of generative-AI and the potentially transformative effect it will have on analytic practice, we do our students an even greater disservice if we ignore, sideline, or fear these developments. Therefore, knowing what researchers all along the "excited to sceptical" spectrum think is incredibly important.
"…if you don't get involved, you know, your students will get involved with it. It's something that we need to understand, and it will have an impact on our research." - Silvana di Gregorio
Di Gregorio followed the survey with interviews with selected researchers to gain a more in-depth understanding of perceptions, sharing with the conference audience the breadth of views from these two sources. The topline findings from the survey responses and interviews can be seen in di Gregorio's presentation.
Perhaps the most frequently mentioned potential benefit of generative-AI (and other AI-powered tools) in qualitative analysis I've heard is that it will save time. It seems finding short-cuts is high on the agenda for many. But coming out of di Gregorio's research is also the potential for these tools to actually add something, nicely articulated in this quote from one of the respondents in her research:
“The way that the AI gave the answers was really interesting and thought provoking because it gave some perspectives which I didn't necessarily think of … So the AI went a step deeper and probably included some dimensions or perspectives which I wouldn't have done by myself.” - Doctoral student, some qual experience
This is something I was pleased to see; for me the role of any technology in the practice of qualitative analysis is about how to harness what computers can do to contribute to the quality and relevance of our analysis, not just being able to do it quicker. It is such principles that underlie many of the AI tools that have been incorporated into qualitative software before the rise of generative-AI, and it's important that they also drive how these newer computational capabilities are embraced by developers and researchers alike.
It's clear to me that as a community of practice we need to know more, and the respondents in di Gregorio's research thought so too. Despite how long software like NVivo has been available, there's still a lack of understanding in some quarters about what the software can do and how to use it. As expressed by one of di Gregorio's respondents, this is also the case with respect to generative-AI:
“I think the level of understanding of AI among qualitative researchers is currently quite low, so it is really important to explore further what can be done and have conversations about how it should or could be used.” - Respondent in di Gregorio's research
Di Gregorio's research will contribute to raising awareness of these technologies amongst the qualitative research community of practice, so I'm very much looking forward to seeing it formally published. There's a massive appetite for research on this topic, and it's great to see that di Gregorio is amongst those at its forefront.
As Product Research Director, di Gregorio has a direct line into the development of tools in NVivo, and her research findings around AI tools are actively being taken into consideration.
“… in doing this research, you know, Lumivero, we've been listening to you and we are committed to develop our software addressing your needs and concerns. And one thing is that I'd like to create a customer advisory group on generative AI. And the purpose would be get feedback from you from a diverse set of customers about how you'd like it in our products, you know, and also, feedback on prototypes we create, but also, you know, the ethical, privacy concerns and how they need to be addressed by us. So … if you want to continue the conversation … I look forward to hearing from your comments.” - Silvana di Gregorio
For qualitative researchers this is excellent to hear. If you're an NVivo user, or a user of any of Lumivero's products, I'd encourage you to get involved. Developers of software – in my 25+ years' experience spanning the CAQDAS field – are demand-driven. They want to know what researchers need, and they try to develop tools to meet those needs. It's our responsibility as researchers and users of their products to let them know what those needs are.
At the CAQDAS Networking Project we're also hoping to contribute to the awareness-raising effort. One way we're doing so is by partnering with the Social Research Association (SRA) to organise a 2-part symposium on the topic later this year.
We're really excited that di Gregorio has agreed to be part of that event because it’s clear she will make a valuable contribution. She'll be participating in the "in conversation with…" session on the methodological implications of AI in qualitative analysis - and not just generative-AI, but AI in general. So, if you're interested in these topics, be sure to check out the program and register as soon as possible. It's a free event, and we hope to gather thoughts from qualitative researchers as well as to share some of what's occurring in this space.
Register Today – Free Two-Part Symposium on AI in Qualitative Analysis
Part 1: Nov. 24, 2023
Part 2: Dec. 1, 2023
Conducting a thorough critique of the literature is incredibly important, but as a writer, you may feel daunted by the enormity of the task. Following these 10 tips can help you focus your writing efforts. These tips can also help you write a literature review that moves beyond summarizing the research and toward critiquing it well.
Going beyond a summary to create a genuine discussion of published work can be accomplished through thorough tracking of sources, along with source highlights and links. While this can sound tricky and time-consuming when referencing hundreds of source materials ranging from journal articles and books to research papers and videos, reference management and writing tools like Citavi can help streamline this process (and more, as we'll cover in the upcoming tips).
A literature review is a well-reasoned, evidence-based, scholarly argument that demonstrates the need for your study. While your literature review will contain a great deal of information, it is not (primarily) an informative text. Keeping this in mind at the outset can lead you toward a critique that situates your study within the scholarly discourse relevant to your research topic.
Learn more in the on-demand webinar Conducting and Constructing a Literature Review for Maximum Impact.
A well-written literature review thoroughly analyzes and critiques the key concepts or quantitative variables central to your research topic. These key concepts or variables are generally expressed in a problem statement, so having a problem statement drafted can help you align your literature review to your research topic. For instance, rather than writing about “Burnout in Education,” your problem statement could lead you to focus your review on “Burnout in K-12 School Leaders.” This narrowed focus makes your literature review relevant and, importantly, doable.
Learn more about writing a compelling argument and developing your voice in the free on-demand trainings from the Research and Technical Writing Institute.
Even though your outline is likely to change, create a document with headings that describe the pockets of literature you will review. In the above example about burnout in school leaders, you might have a heading called "Factors Influencing Burnout." You might already know that some factors to consider are lack of work/life balance, lack of resources, and dissatisfaction with pay and benefits. Create those subheadings.
If you use a reference manager like Citavi, you can breeze through this step! With Citavi, you can save your sources directly in the program, create your literature review outline within the knowledge organizer, then export it to Word.
The headings in your lit review outline can be used as keywords to search for relevant literature. Remember to document your search strategy and use synonyms. You might also locate a systematic review on your research topic, which is rich with references. If you have Citavi, databases like Scopus and EBSCO integrate with the software, letting you easily search for sources. You can also use the Citavi Picker, which brings sources in from sites like Google Scholar by identifying ISBNs and DOIs on web pages and sending reference information to your Citavi project.
We recommend using reference management software such as Citavi to organize your research articles. This saves you tremendous time as Citavi helps you methodically manage quotes, sources, notes, and articles.
If that isn't an option, create folders and save each research article under its in-text citation (e.g., an article by Parker et al. (2021) would be saved as "Parker et al. 2021"). Having one folder for all of your articles is the equivalent of piling your desk with stacks of articles that you can't remember whether you have read. If you organize your research articles, you will be able to review together all of the articles that relate to a specific topic in your literature review.
Learn more in this on-demand webinar Organizing Information in Your Field of Study.
This step is critical to literature review success. You will search for trends in the literature. Therefore, you need to extract relevant information from articles and group this information together to analyze it. Writers often begin by sharing the results of one study, then the next, and so on, without offering up any synthesis of the literature. Synthesis is the result of analysis, and analysis needs to encompass articles that are grouped in some way. In the burnout example above, you may have extracted several findings that demonstrate that lack of work/life balance is a major factor in school leader burnout. You will want to state this finding clearly and review all of the articles about it together, so go ahead and group them in an annotation table at this stage.
An alternative to the annotation table is Citavi's knowledge organizer, which essentially replaces it. This feature lets you save notes, memos, and quotes from articles in the knowledge organizer while still linking to the original source. Even better, you can add categories to your notes, memos, and sources based on your keywords and themes.
Once you have annotated several articles, analyze them for patterns, discrepancies, and gaps. A pattern could be a similar finding that you have noticed across several studies. It could also be a pattern of participants (e.g., the phenomenon has mostly been studied in female-identifying participants) or methodology (e.g., 10 of the 12 studies are quantitative). Often, we can infer from a pattern to identify a gap in the literature. Using NVivo in your literature review can help you find the patterns and themes in your literature, piece together which researchers often write together, and keep you organized throughout the process of synthesizing literature.
Learn more in the on-demand webinar Accelerating your Literature Review with Citavi & NVivo 14.
So, you've located a pattern, discrepancy, or gap in the literature. What next? Make sure that you state your finding clearly and concisely in the form of a synthesis statement. For instance, "Much of the research regarding school leader burnout focuses on the reasons why school leaders burn out" is a synthesis statement. Reporting that a single author "X" found something interesting is not.
As you report your findings, place your synthesis statements as the topic sentences (main ideas) of the paragraphs you write. Then present the evidence you pull from your studies to support each main idea. A hallmark of well-synthesized writing is that paragraphs weave information from several studies together around a central claim. Using the MEAL plan structure (Main Idea, Evidence, Analysis, Link) can help you craft paragraphs that are cohesive and analytical — hallmarks of good literature review writing.
Learn more in this on-demand webinar from the Research and Technical Writing Institute, Developing Your Voice: How to Paraphrase, Make Claims, and Synthesize Literature.
When you are writing your literature review, you are wielding large amounts of information, and you are likely writing in complex ways that are new to you. As with all writing, expect that you will need to revise your work. Schedule time for revision and, if necessary, ask for help with the areas you need to revise. Then dive into your revisions systematically (i.e., do not revise for everything at once).
The above tips are important because they provide much-needed structure for you as you write your literature review. Often, writers set out with vague notions about what a literature review is, and the process begins to feel amorphous. These tips, and reference management software like Citavi, can help you break the process of writing a literature review down, organize your notes and sources, automatically create citations, and bring focus to the writing process. Return to this list again and again if you feel lost in “literature review land.” They will help you regain your footing and return to your writing with a renewed sense of clarity.
Dissertation by Design is an organization that supports and coaches all types of writers including students, academics, and research professionals.
See how you can streamline your literature review by requesting a free demo of Citavi today.
Writing a literature review can be one of the most daunting aspects of the academic research process—and one of the most misunderstood. Dr. Robert Thomas, Lecturer in Marketing and Strategy at Cardiff University’s Cardiff Business School in Wales, is a subject-matter expert for SAGE Publishing who has written a book (“Turn Your Literature Review into an Argument – Little Quick Fixes”) and developed an online course (“Conduct a Literature Review”) that focus on this crucial stage of the academic process.
In a webinar hosted by Citavi, Conducting and Constructing a Literature Review for Maximum Impact, Dr. Thomas took an audience on a deep dive into the literature review process. He outlined the different types of literature reviews, what an effective literature review should aim to achieve, how to select and organize material, and how to craft a compelling review that establishes the foundation for an entire research project or dissertation.
He also emphasized the idea that a literature review is more than a summary of source materials. It’s an opportunity to refine your research question, demonstrate to reviewers that you have the capabilities to recognize and evaluate significant scholarship in your field, and identify gaps in the knowledge that your research can address.
“[A literature review] is a chance for you to embrace and become part of the argument.” – Dr. Robert Thomas
Dr. Thomas describes four different types of literature reviews:
Systematic Review
This type of review has a pre-defined research question and aims to summarize as much of the scholarship on a particular question as possible. An example of this type of review is a meta-analysis which draws on research into a particular problem and then evaluates findings according to a common standard to arrive at an “answer”, or at least a general summary of findings across the literature. This type of analysis is usually conducted by experts in the field.
Chronological Review
As the name suggests, chronological reviews evaluate scholarship in the field in a linear progression from the earliest historical sources to contemporary research. The idea is to demonstrate the importance of a field of study by tracing its development from the past to the present.
Methodological Review
This type of review is rare. Instead of looking at theories or research outcomes in a given field, the methodological review looks at the steps taken by researchers to develop the evidence for their theories. A methodological review, for example, may evaluate sampling techniques or data analysis procedures.
Integrative Review
This is the most common type of literature review, which draws on aspects of the prior three types of literature review to produce a document that is broad in historical scope while also justifying the methodological approach you plan to take in your research. An integrative review also combines theory and data to provide a focus for the overall project.
Dr. Thomas recommends a step-by-step process for developing an integrative literature review that can establish your credibility—and confidence—as a researcher in any field.
Step 1: Locating and Capturing Sources
First, Dr. Thomas emphasized that a literature review should draw on a wide range of sources including contemporary journals as well as books. He recommended conducting keyword searches on journals and in databases to begin identifying sources.
Step 2: Taking Robust Notes that Spark Critical Evaluation
Once you’ve identified potential sources, you need to read and take notes. Dr. Thomas recommends using a structured notetaking approach such as the Cornell Notetaking System — one that captures essential data and facts about the source as well as allowing you to record your thoughts, questions, and criticisms of the material. Taking notes is not just about summarizing the text you read, Dr. Thomas explains, but about ensuring that what you read can be “reduced down to what it means to you and the work [you are doing].”
Step 3: From Broad Ideas to Fine Details
A theoretical framework places your proposed research within the context of a body of scholarship and shows how you plan to produce new knowledge that can address gaps in a specific area of the field. Dr. Thomas gave the example of a researcher who wants to explain why people purchase Omega luxury watches.
In this example, the theoretical framework would begin with looking at evidence for the broader issue of consumer decision-making and the influence of marketing, then gradually digging down to evidence that drives purchases in the luxury watch market. From there, you would plan how to demonstrate the factors that lead people to choose Omega watches in particular.
Step 4: Maintaining Authenticity as an Author
Literature reviews require a great deal of analysis and synthesis of information. When developing your project, it can be tempting to make use of AI summaries or other ghost-writing tools to quickly produce an evaluation of a source.
Dr. Thomas emphasizes that you must avoid this at all costs. First, creating your own summary or paraphrase of a source’s theories helps you internalize and better understand it. Second, using ghost-writing tools can lead to plagiarism.
Step 5: Making the Case for Your Contribution to the Field
Finally, Dr. Thomas emphasizes that a literature review offers you an opportunity to present a balanced view of your subject area that acknowledges opposing evidence or perspectives. “You will [receive more credit] for actually accrediting those who do not support what you do than you will for creating something descriptive that [only] supports your research question.”
However, in addition to evaluating opposing evidence, you must also provide an effective rebuttal that demonstrates why your research ultimately matters. The ultimate goal of your literature review is to guide your readers through a labyrinth of texts, theories, and ideas toward the unique contribution you plan to make as a scholar.
Citavi supports every aspect of the literature review development process – from finding, evaluating, and taking notes on sources to constructing outlines, citations, and bibliographies.
Step 1: Finding Sources with Citavi
To help with locating sources, Citavi makes it possible for you to begin capturing potential sources from anywhere on the web or within the app itself. Carry out keyword searches of freely available databases — or data sources your institution has licensed — from within Citavi. Enter an ISBN or DOI number to automatically generate a citation. Import details about source materials you've found on the web using the Picker browser extension or drag-and-drop PDFs and references from other systems straight into your Citavi project. Citavi will also generate a bibliography as you add references and automatically generate citations that meet many different style guidelines.
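For readers curious how a DOI lookup like this works under the hood, here is a minimal sketch using the public Crossref REST API. This illustrates the general mechanism only, not Citavi's internal implementation; the rough APA-style formatting is an assumption for the example.

```python
# Illustration only: resolving a DOI to citation metadata via the public
# Crossref API (https://api.crossref.org). Not Citavi's internal mechanism.
import requests

def citation_from_doi(doi: str) -> str:
    """Fetch metadata for a DOI from Crossref and build a rough APA-style string."""
    resp = requests.get(f"https://api.crossref.org/works/{doi}", timeout=10)
    resp.raise_for_status()
    msg = resp.json()["message"]

    authors = ", ".join(
        f"{a.get('family', '')}, {a.get('given', '')[:1]}."
        for a in msg.get("author", [])
    )
    year = msg.get("issued", {}).get("date-parts", [[None]])[0][0]
    title = msg.get("title", [""])[0]
    journal = (msg.get("container-title") or [""])[0]
    return f"{authors} ({year}). {title}. {journal}."

# Example: the DOI of Braun & Clarke (2006), cited later in this post.
print(citation_from_doi("10.1191/1478088706qp063oa"))
```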
Step 2: Writing and Organizing Notes with Citavi
Citavi is also designed to account for the changing nature of the internet. When you import a website address as a reference, Citavi will automatically generate a PDF of that page at the time you accessed it. This way, if the reference is changed or taken offline, you’ll still have access to the original information.
With Citavi, you can record notes, questions, quotations, comments, and summaries within the record for each reference — even on the PDF version of the document itself. These notes can then be organized into an outline which you can use for the next phase of the literature review process: establishing a theoretical framework for the research you plan to conduct.
Step 3: Building Frameworks with Citavi
Citavi makes it possible for you to begin building this “funnel” framework as you read and organize your source materials. You can build an outline for your literature review using the notes you have keyed into your project — an outline that’s easily exportable to a Word document, complete with citations and links – and then begin writing.
Step 4: Maintaining Authenticity as an Author with Citavi
By automatically connecting your sources to references you make, Citavi helps reduce the risk of unintentional plagiarism in your work.
If you’re looking for more ways to accelerate your literature review, Citavi also integrates with qualitative data analysis software, NVivo, which can help you identify themes in your sources. Watch this webinar to learn more about how using NVivo and Citavi together helps you go beyond simple reference management and creates a springboard to analyze your literature, connect it to your empirical data, analyze it with NVivo’s research tools, and publish your work faster:
>>Watch on-demand webinar: Accelerating your Literature Review with Citavi and NVivo 14
Access additional free literature resources here: Lumivero - Accelerating Your Literature Review
Make the literature review process more manageable with the all-in-one referencing and writing solution designed for individual researchers or teams. Download a free trial of Citavi or buy a subscription today.
The practice of thematic analysis is widely used in qualitative analysis, but it sometimes goes unacknowledged or gets confused with other approaches. Here at Lumivero, we break down the ambiguities of thematic analysis as an approach and hope this interpretation can breathe new life into it as new and emerging forms of content become ever more integral to this already established research tool.
NVivo offers a powerful solution for conducting thematic analysis due to its robust features and easy-to-use interface. The software is designed to enable researchers to quickly and accurately analyze large amounts of data and uncover underlying themes.
Thematic analysis is not a methodology but a tool that can be used across different methods (Boyatzis, 1998); it was first labeled as an approach in the 1970s (Merton, 1975). It is used to find common themes across many types of content.
This practice is dynamic. It can be done manually (by hand), in Excel, or with thematic analysis software or a Computer Assisted Qualitative Data Analysis (CAQDAS) software tool. It traverses traditional qualitative research and quantitative data, allowing researchers to ask more questions of their content and conduct thematic analysis on large data sets such as interviews.
In Methods: Teaching Thematic Analysis, Virginia Braun and Victoria Clarke describe viewing thematic analysis as "theoretically flexible because the search for, and examination of, patterning across language does not require adherence to any particular theory of language, or explanatory meaning framework for human beings, experiences or practices," allowing thematic analysis to be applied within a wide variety of theoretical frameworks.
Thematic analysis is an especially versatile tool: it is helpful for new and experienced researchers alike, and it can be used across a range of research categories and theoretical perspectives.
In the Thematic Analysis Using NVivo 14 webinar, Ben Meehan Ph.D. discussed Braun and Clarke’s six steps of thematic analysis.
To learn more about using NVivo 14 for thematic analysis, watch our on-demand webinar Thematic Analysis Using NVivo.
Put simply, you may be looking for the right way to explain or express patterns in your content. Consider this example: you are analyzing representations of women on social media. You want to collect data from Facebook, Twitter and YouTube as rich datasets so you can access the online conversations and content about your research, organization or topic of interest, but also the valuable data behind the comments, like demographics and locations.
The challenge with importing, managing, and analyzing different content types is finding the similarities or differences in the media before you, and then knowing what to do with what you find.
To better understand when to use thematic analysis and for general best practices for thematic analysis, check out the on-demand webinar by Braun and Clarke Introduction to Thematic Analysis.
Thematic analysis helps you find connections in your content and understand the underlying themes, which in turn helps inform decisions.
Braun and Clarke encourage thematic analysis as the starting method to teach students new to qualitative research. “[Thematic analysis] is accessible, flexible, and involves analytic processes common to most forms of qualitative research. Students can progress from [thematic analysis] to grounded theory, IPA and discourse analysis, or progress from producing largely descriptive [thematic analysis] to producing rich and complex, conceptually informed [thematic analysis].”
Thematic analysis encourages researchers to use queries to ask complex questions and identify new meaning in their data. Test ideas, explore patterns, and see connections between themes, topics, people, and places in your project. Look for emerging themes, find words, and discover concepts using text search and word frequency queries.
Thematic analysis can be used as a technique on its own, or as a first step in a variety of methodological approaches to analyzing qualitative data.
Once you do this, you can search for content based on how it's coded using coding queries. Check for consistency and compare how different users have coded material using coding comparison queries. Cross-tabulate coded content and explore differences in opinions, experiences and behavior across different groups using matrix coding queries.
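To make the idea of a matrix coding query concrete, here is a small illustrative sketch in Python with pandas. The data is hypothetical and the code is not NVivo's implementation; it simply shows the kind of cross-tabulation such a query produces.

```python
# A minimal sketch of what a matrix coding query computes: a cross-tabulation
# of coded segments against participant attributes. All data is hypothetical.
import pandas as pd

segments = pd.DataFrame({
    "participant": ["P1", "P1", "P2", "P3", "P3", "P4"],
    "code":        ["work/life balance", "pay", "work/life balance",
                    "pay", "resources", "resources"],
})
attributes = pd.DataFrame({
    "participant": ["P1", "P2", "P3", "P4"],
    "role":        ["principal", "teacher", "principal", "teacher"],
})

merged = segments.merge(attributes, on="participant")
matrix = pd.crosstab(merged["code"], merged["role"])  # codes x attribute values
print(matrix)
```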
By visualizing your insights, you can explore even further. Get a sense of the larger trends happening and dive in deeper. Discover a new perspective. Identify new and interesting themes. Share results with others.
Visualizations can also provide an easy way to communicate findings with broader audiences.
Easily understand how content plays a role in influencing decisions or behaviours.
Gain an advantage with NVivo – powerful software for qualitative data and content analysis that helps you make insight-driven decisions.
NVivo has a wide range of visualizations, several of which are particularly useful for thematic analysis.
NVivo 14 provides additional advantages for thematic analysis of interviews and documents through its powerful features.
Editor's note: This blog was originally published in March 2017, and was updated in February 2022 and October 2023 for accuracy.
For more information about thematic analysis see these resources:
Boyatzis, R. E. (1998). Transforming qualitative information: Thematic analysis and code development. Thousand Oaks, CA: Sage.
Braun, V. & Clarke, V. (2006). Using thematic analysis in psychology. Qualitative Research in Psychology, 3(2), 77–101.
Merton, R.K. (1975). Thematic analysis in science: Notes on Holton's concept. Science, 188(4186), 335–338.
In our fast-paced digital world, we're drowning in a sea of text data every day. Handling unstructured data manually is a daunting task – it's time-consuming and prone to errors. But here's the good news: we've got a powerful ally in our corner – AI autocoding with NVivo!
NVivo has been a pioneer, leading the charge in integrating technology to enhance qualitative research. Our journey into harnessing the potential of Artificial Intelligence (AI) began in 2015, and we haven't looked back since. From auto-coding themes to sentiment analysis and using existing coding patterns, we've been at the forefront of making qualitative research smarter and more efficient. And in 2018, we introduced NVivo transcription, further cementing our commitment to innovation.
In a video tour, Silvana di Gregorio, Lumivero's Head of Qualitative Research, unveils how NVivo leverages machine learning to automate data coding. This isn't just about coding; it's about unlocking the AI magic within NVivo and how it can supercharge your text analysis.
When it comes to thematic analysis, the first challenge is identifying the key themes hidden in your data. NVivo's autocoding text analysis tool quickly helps you identify key themes by analyzing your material (as a single file or combination of items) using a language pack which you download onto your computer. It detects themes by identifying noun phrases, grouping them, and tagging each idea, assigning significance to some themes over others based on how frequently each noun phrase appears. NVivo groups the noun phrases under broad themes and codes each theme, including child codes for the noun phrases within each theme. You can preview this provisional coding and include only the coding that makes sense to you. It kick-starts your coding process. It's like having a theme detective by your side!
Watch short video: NVivo AI-Powered Autocoding to Identify Themes
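As a rough illustration of the general approach described above (and not NVivo's actual language packs), the sketch below uses spaCy to extract noun phrases and rank them by frequency as candidate themes. The model name and sample text are assumptions for the example.

```python
# A rough approximation of frequency-based theme detection (not NVivo's
# algorithm): extract noun phrases with spaCy and rank them by frequency.
from collections import Counter
import spacy

nlp = spacy.load("en_core_web_sm")  # assumes this model is installed

text = (
    "Teachers mentioned workload and lack of resources. "
    "School leaders described workload pressures and pay concerns. "
    "Lack of resources came up in nearly every interview."
)

doc = nlp(text)
phrases = [chunk.lemma_.lower() for chunk in doc.noun_chunks]
themes = Counter(phrases)

# More frequent noun phrases are treated as more significant candidate themes.
for phrase, count in themes.most_common(5):
    print(f"{phrase}: {count}")
```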
Training NVivo to auto-code is the best of both worlds. You start by coding a portion of your data, and NVivo uses your coding patterns to do the rest. NVivo compares each text passage—for example, sentence or paragraph—to the content already coded to existing codes. If the content of the text passage is similar in wording to content already coded to a code, then the text passage will be coded to that code. Say goodbye to the manual grind – this machine learning wizardry facilitates the efficient and accurate analysis of large volumes of text data, transforming the way researchers handle big data.
Watch short video: NVivo AI-Powered Autocoding Using Existing Coding Pattern
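The underlying idea, comparing the wording of uncoded passages to passages already coded, can be sketched with standard text-similarity tools. The following is a simplified stand-in using TF-IDF and cosine similarity from scikit-learn, not NVivo's actual algorithm; the passages, codes, and threshold are invented for illustration.

```python
# A simplified stand-in for pattern-based autocoding: compare an uncoded
# passage to already-coded ones and inherit the most similar passage's code.
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.metrics.pairwise import cosine_similarity

coded_passages = [
    "I never see my family during term time.",           # coded: work/life balance
    "The salary does not reflect the hours we put in.",  # coded: pay
]
codes = ["work/life balance", "pay"]

uncoded = ["My family never sees me during the school term."]

vec = TfidfVectorizer()
matrix = vec.fit_transform(coded_passages + uncoded)
sims = cosine_similarity(matrix[-1], matrix[:-1])  # uncoded vs each coded passage

best = sims.argmax()
if sims[0, best] > 0.2:  # only autocode if similarity clears a threshold
    print(f"Suggested code: {codes[best]} (similarity {sims[0, best]:.2f})")
else:
    print("No confident match; leave for manual coding.")
```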
NVivo doesn't stop at themes – it also offers sentiment analysis. It identifies emotive words and assigns each a sentiment score between minus one and plus one. The score for a word can change if it is preceded by a modifier, e.g., "more" or "somewhat". NVivo then assigns codes ranging from very positive to very negative.
After autocoding to identify sentiment, you can access a chart which provides a visual representation of your results, indicating the names of the codes and the number of coding references for each code. NVivo’s sentiment analysis makes it a breeze to extract insights from your text data. It's like having a mood detector for your data!
Watch short video: NVivo AI-Powered Autocoding Using Sentiment Analysis
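To make the mechanics concrete, here is a toy lexicon-based scorer in the same spirit: emotive words carry scores between -1 and 1, a preceding modifier scales the score, and totals map to bands from very negative to very positive. NVivo's actual lexicon and weightings are not public, so every value below is an assumption.

```python
# A toy lexicon-based sentiment scorer (NVivo's lexicon/weights are not public).
LEXICON = {"good": 0.5, "great": 0.8, "bad": -0.5, "terrible": -0.8}
MODIFIERS = {"very": 1.5, "more": 1.25, "somewhat": 0.5}

def score_sentence(sentence: str) -> float:
    words = sentence.lower().rstrip(".!?").split()
    total = 0.0
    for i, word in enumerate(words):
        if word in LEXICON:
            # A preceding modifier scales the emotive word's score.
            weight = MODIFIERS.get(words[i - 1], 1.0) if i > 0 else 1.0
            total += LEXICON[word] * weight
    return total

def band(score: float) -> str:
    if score <= -0.6: return "very negative"
    if score < 0:     return "moderately negative"
    if score < 0.6:   return "moderately positive"
    return "very positive"

for s in ["The course was somewhat good.", "The delays were very terrible."]:
    print(s, "->", band(score_sentence(s)))
```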
While NVivo does a lot of the heavy lifting, it's not infallible. The codes it generates benefit from your expert touch to ensure accuracy. Sarcasm and nuances may escape its grasp. But here's the beauty: NVivo keeps you in control. The process is transparent, giving you the final say.
Generative AI tools, like ChatGPT, have opened new avenues for qualitative research. They can mimic human interactions, generate content, and offer fresh perspectives on your data. However, our research shows it brings challenges like privacy, security, transparency, and accuracy. At Lumivero, we're committed to responsible AI adoption and addressing your concerns.
We're at the early stages of integrating generative AI into NVivo. To ensure our decisions align with our community's needs and maintain our commitment to excellence, we're launching an AI Advisory Group. We invite researchers, tech experts, and enthusiasts to join us on this journey.
If you're passionate about qualitative research and excited about AI's potential, we'd love to hear from you. Apply today to be a part of this transformative journey. Together, we're shaping the future of NVivo with AI!
The actionable insights shared by di Gregorio in the video tour, NVivo Automated Insights, are invaluable for researchers working with large volumes of text data, as NVivo offers an efficient and accurate way to handle unstructured data – making the process less daunting and more productive.
NVivo’s AI autocoding feature has truly revolutionized the approach to handling textual data and unearthing underlying themes and sentiments. Whether you are a seasoned researcher or new to the world of research, NVivo can help you code themes and sentiments for text analysis – effortlessly leading to deeper insights.
Start transforming your research with AI by requesting a free demo of NVivo today!
Qualitative Data Analysis Software (QDAS) allows researchers to organize, analyze, and visualize their data, finding the patterns in qualitative or unstructured data: interviews, surveys, field notes, videos, audio files, images, journal articles, web content, etc.
What is the difference between quantitative and qualitative data analysis? As the name implies, quantitative data analysis has to do with numbers. For example, any time you are doing statistical analysis, you are doing quantitative data analysis. Some examples of quantitative data analysis software are SPSS, Stata, SAS, and Lumivero's own powerful statistics software, XLSTAT.
In contrast, qualitative data analysis "helps you understand people’s perceptions and experiences by systematically coding and analyzing the data", as described in Qualitative vs Quantitative Research 101. It tends to deal more with words than numbers. It can be useful when working with a lot of rich and deep data and when you aren’t trying to test something very specific. Some examples of qualitative data analysis software are MAXQDA, ATLAS.ti, Quirkos, and Lumivero’s NVivo, the leading qualitative data analysis software.
When would you use each one? Well, qualitative data analysis is often used for exploratory research or developing a theory, whereas quantitative is better if you want to test a hypothesis, find averages, and determine relationships between variables. With quantitative research you often want a large sample size to get relevant statistics. In contrast, qualitative research, because so much data in the form of text is involved, can have much smaller sample sizes and still yield valuable insights.
Of course, it's not always so cut and dried, and many researchers end up taking a "mixed methods" approach, meaning that they combine both types of research. In this case they might use a combination of both types of software programs.
Learn how some qualitative researchers use QDA software in the on-demand webinar Twenty-Five Qualitative Researchers Share How-To's for Data Analysis.
Qualitative Data Analysis Software works with any qualitative research methodology. For example, a social scientist wanting to develop new concepts or theories may take a 'grounded theory' approach, while a researcher looking for ways to improve health policy or program design might use 'evaluation methods'. QDAS like NVivo doesn't favor a particular methodology; it's designed to facilitate common qualitative techniques no matter what method you use.
NVivo can help you to manage, explore and find patterns in your data but it cannot replace your analytical expertise.
Handling qualitative data is not usually a step-by-step process. Instead, it tends to be an iterative process where you explore, code, reflect, memo, code some more, query and so on. For example, this picture shows a path you might take to investigate an interesting theme using QDAS, like NVivo:
Every research project is unique — the way you organize and analyze the material depends on your methodology, data and research design.
Here are some example scenarios for handling different types of research projects in QDAS—these are just suggestions to get you up and running.
A study with interviews exploring stakeholder perception of a community arts program
Your files consist of unstructured interview documents. You would set up a case for each interview participant, then code to codes and cases. You could then explore your data with simple queries or charts and use memos to record your discoveries.
A study exploring community perceptions about climate change using autocoding with AI
Your files consist of structured, consistently formatted interviews (where each participant is asked the same set of questions). With AI, you could autocode the interviews and set up cases for each participant, then code to themes and query and visualize your data.
A literature review on adolescent depression
Your files consist of journal articles, books and web pages. You would classify your files before coding and querying them; and then you could critique each file in a memo. With Citavi integration in NVivo, you can import your Citavi references into NVivo.
A social media study of the language used by members of an online community
Your files consist of Facebook data captured with NCapture. You would import it as a dataset ready to code and query. Use memos to record your insights.
A quick analysis of a local government budget survey
Your file is a large dataset of survey responses. You would import it using the Survey Import Wizard, which prepares your data for analysis. As part of the import, choose to run automated insights with AI to identify and code to themes and sentiment so that you can quickly review results and report broad findings.
Since projects (and researchers) are unique, there is no single 'best practice' approach to organizing and analyzing your data, but there are some useful strategies to help you get up and running.
Using QDAS like NVivo to organize and analyze your data also increases the 'transparency' of your research outcomes.
QDAS like NVivo can also help demonstrate the credibility of your findings.
Many QDAS integrate with other software to enhance your research process, and NVivo integrates with, or can be used alongside, a range of complementary tools.
To learn more about how QDAS like NVivo can assist you with your research trail, request a free 14-day trial of NVivo or request a demonstration.
Save valuable time in your data analysis with three new features and three feature improvements in part two of the latest XLSTAT release!
Now you can view all the automatically generated graph options before making your choice and incorporate tornado diagrams and back-to-back histograms to help you easily improve your analysis of groups. Plus, this release includes an update to the PROCESS feature which lets you use covariables, new models, and see conditional effects for deeper marketing insights.
Dig deeper into XLSTAT’s new features and improvements with the highlights below.
New Feature Available on Windows in Beta, Basic +, All Applied, and Premium
Automatically build the right graphs for your data! With the automatic data visualization feature, you don’t have to worry about which graph to use as it will propose several graphs according to your data. Simply select your quantitative and qualitative variables in the “Dataviz” dialog box and let the feature guide you. Once you’ve chosen your graph, click on “export”.
New Feature Available in All Solutions
Are your groups similar for all measurements, or perhaps only one or two are different? Make a tornado graph to find out! With the new tornado diagram feature, you can quickly compare the measurements of two groups and obtain a beautiful, informative tornado chart.
New Feature Available in All Solutions
Need to figure out if your groups have the same distributions? Compare the distributions of two groups back-to-back with this new feature located in Tornado charts.
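For intuition about what a back-to-back comparison shows, here is a minimal matplotlib sketch (illustrative only, not XLSTAT's chart): one group's bin counts are negated so its bars extend left while the other group's extend right, making distributional differences easy to see. The bins and counts are invented.

```python
# Sketch of a back-to-back histogram comparison with matplotlib (not XLSTAT).
import matplotlib.pyplot as plt
import numpy as np

bins = ["0-10", "10-20", "20-30", "30-40", "40-50"]
group_a = np.array([4, 9, 14, 7, 2])   # hypothetical counts per bin
group_b = np.array([2, 6, 12, 11, 5])

y = np.arange(len(bins))
fig, ax = plt.subplots()
ax.barh(y, -group_a, color="steelblue", label="Group A")   # negated: extends left
ax.barh(y, group_b, color="darkorange", label="Group B")   # extends right
ax.set_yticks(y, labels=bins)
ax.set_xlabel("Count (Group A left, Group B right)")
ax.legend()
plt.show()
```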
Improved Feature Available in Marketing and Premium
Determine the conditional effects and use more sophisticated models with covariables to analyze mediation and moderation with the improved PROCESS feature. Plus, a new graph for visualizing the conditional effect of a variable on a mediator is now available!
Improved Feature Available in Marketing and Premium
More accurately determine the right sample size required for research and ensure that you achieve the desired level of precision with the improved sample size calculation feature. To do this, simply choose the standard deviation you want from the sample size dialog box.
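The textbook calculation behind this kind of feature is the sample size needed to estimate a mean within a margin of error E at a given confidence level: n = (z·σ/E)², where z is the two-sided critical value and σ the standard deviation you supply. A small Python sketch with illustrative numbers:

```python
# Standard sample-size calculation for estimating a mean: n = (z * sigma / E)^2.
# The numbers below are purely illustrative.
import math
from statistics import NormalDist

def sample_size(sigma: float, margin: float, confidence: float = 0.95) -> int:
    z = NormalDist().inv_cdf(1 - (1 - confidence) / 2)  # two-sided critical value
    return math.ceil((z * sigma / margin) ** 2)

# e.g., sd of 15, want the mean estimated within +/- 2 units at 95% confidence:
print(sample_size(sigma=15, margin=2))  # -> 217
```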
Improved Feature Available in Forecasting and Premium
Quickly compare your models with XGBOOST in the Easy Fit feature which lets you run and compare several models. Plus, if you’re familiar with Excel and XLSTAT formulas, you can run XGBOOST directly in an Excel cell!
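As a sketch of the "run and compare several models" idea in general-purpose code (not XLSTAT itself), the snippet below scores an XGBoost model against a linear baseline with cross-validation. The synthetic dataset and hyperparameters are arbitrary choices for the example.

```python
# Comparing an XGBoost model to a linear baseline via cross-validation.
# Assumes the xgboost package is installed. Not XLSTAT's Easy Fit feature.
from sklearn.datasets import make_regression
from sklearn.linear_model import LinearRegression
from sklearn.model_selection import cross_val_score
from xgboost import XGBRegressor

X, y = make_regression(n_samples=500, n_features=8, noise=10, random_state=0)

for name, model in [("linear", LinearRegression()),
                    ("xgboost", XGBRegressor(n_estimators=200, max_depth=3))]:
    scores = cross_val_score(model, X, y, cv=5, scoring="r2")
    print(f"{name}: mean R^2 = {scores.mean():.3f}")
```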
How to Install the Update?
This new version will give you access to all the new features mentioned above. The installation of our new version is recommended for all users.
If you have a valid XLSTAT license with access to maintenance and upgrades, you can download the new version for free.
If you are currently using our trial version, you can purchase a license of XLSTAT to access these new features.
Never tried XLSTAT before? Download your free trial today!
Project management is a multifaceted endeavor that involves careful planning, resource allocation, risk assessment, and timely execution. Despite thorough planning, unexpected issues can arise that derail project timelines and devastate budgets. But with Monte Carlo simulation, a powerful technique that simulates thousands of possible scenarios and estimates the probability of each, project managers can anticipate and address potential roadblocks.
Monte Carlo simulation is a computerized, mathematical technique that runs thousands of simulations with random variables, drawn from either historical data or expert opinion, to determine the range of outcomes for a scenario. In project management, this technique can be leveraged for planning project tasks, estimating completion times and task durations, point estimating, and exploring best- and worst-case scenarios in industries such as transportation planning, defense, aerospace engineering, and construction.
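As a minimal illustration of the idea (a hand-rolled sketch, not @RISK or ScheduleRiskAnalysis), the following simulates a four-task schedule with triangular duration estimates and reads off percentiles of the finish time. All task names and numbers are invented.

```python
# A minimal Monte Carlo schedule sketch: sample uncertain task durations and
# examine the distribution of project finish times. Illustrative only.
import numpy as np

rng = np.random.default_rng(42)
N = 10_000  # number of simulated project runs

# Each task gets a triangular (min, most likely, max) duration in days.
design  = rng.triangular(8, 10, 16, N)
build   = rng.triangular(20, 25, 40, N)
testing = rng.triangular(5, 7, 14, N)
docs    = rng.triangular(4, 5, 9, N)   # runs in parallel with testing

# design -> build -> (testing || docs): total = design + build + slower branch.
total = design + build + np.maximum(testing, docs)

print(f"Mean finish:        {total.mean():.1f} days")
print(f"P50 (median):       {np.percentile(total, 50):.1f} days")
print(f"P90 (conservative): {np.percentile(total, 90):.1f} days")
print(f"P(finish <= 45 d):  {(total <= 45).mean():.0%}")
```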
In this article, we'll delve into ten common project management issues that can be effectively anticipated and mitigated using Monte Carlo simulation.
Accurate project estimation is a fundamental challenge. Monte Carlo simulation can help project managers by incorporating uncertainty into estimations, generating a range of possible outcomes, and providing a probability distribution of project completion dates and costs.
Resource availability can fluctuate during a project's lifecycle. By simulating various resource allocation scenarios, project managers can identify potential bottlenecks, plan for contingencies, and allocate resources optimally.
Project tasks are often interdependent. Monte Carlo simulation can model the relationships between tasks, assessing how delays in one task might affect others. This helps in understanding critical paths and potential delays.
Scope creep can cause projects to veer off track. By simulating the impact of scope changes, project managers can determine how modifications might influence project timelines, budgets, and overall goals.
The time it takes to complete a task can be uncertain. Monte Carlo simulation can simulate various durations based on historical data – providing a clearer understanding of possible project durations and helping in setting more realistic deadlines.
External factors like weather, market conditions, or regulatory changes can impact a project. By incorporating these variables into simulations, project managers can evaluate their potential effects and devise contingency plans.
Risks are inherent in any project. Monte Carlo simulation enables the quantification of risks by assigning probabilities to various scenarios – allowing project managers to proactively address potential issues before they escalate.
Financial constraints can challenge project execution. Monte Carlo simulation can help project managers estimate budget variations – aiding in the allocation of financial resources and identification of budget buffers.
Effective communication is vital for project success. Using simulations, project managers can visually represent complex scenarios to stakeholders – enhancing communication and facilitating better decision-making.
In situations where multiple options or strategies are available, Monte Carlo simulation can aid in making informed decisions by quantifying the potential outcomes of each option, thus supporting strategic choices.
Monte Carlo simulation is a valuable tool that equips project managers with the ability to foresee and manage various challenges that can arise during a project's lifecycle. By incorporating uncertainty and variability into the planning process, project managers can make more informed decisions, set realistic expectations, and devise effective strategies for tackling unexpected issues.
As the realm of project management continues to evolve, embracing techniques like Monte Carlo simulation can significantly enhance the likelihood of project success in an increasingly complex business landscape.
While it’s clear that applying Monte Carlo simulation to your project schedules can help you plan contingencies and meet deadlines, the next step is implementation – and it’s easier than you might think!
Designed with project managers in mind, Lumivero’s ScheduleRiskAnalysis (included in DecisionTools Suite) analyzes schedule risk using Monte Carlo simulation with @RISK – all within Microsoft Excel. ScheduleRiskAnalysis lets you apply risk modeling on project files created in Microsoft Project and Primavera P6 while preserving the integrity of your original project model. This streamlined setup makes it easy to start applying Monte Carlo simulation to your project schedules right away.
Additionally, ScheduleRiskAnalysis features probabilistic Gantt charts that clearly display the likelihood of task durations and finish dates, as well as critical indices for identifying the most important variables in your schedule, helping you incorporate robust risk analysis into your project schedules like never before.
With ScheduleRiskAnalysis, you can quickly identify new opportunities, avoid unseen schedule delays, understand critical factors, and clearly communicate risks to team members — solving project manager issues before they arise. Learn more about how Monte Carlo simulation can help improve your project management by requesting a free demo of ScheduleRiskAnalysis, included in DecisionTools Suite.
Learn about Dr. Bhattacharya's qualitative research in the field of decolonization and gain insights into balancing research, mentoring, and supervision.
In the realm of research, qualitative research holds a distinct position: it is complex, theoretical, and abstract. As we delve deeper into this field, we realize the depth and the labyrinthine structure it offers. Our latest podcast episode takes you on a journey through this maze, guided by Dr. Kakali Bhattacharya, Professor at the School of Human Development and Organizational Studies and Education at the University of Florida.
Dr. Bhattacharya's work in the field of decolonization in qualitative research is not just impressive but also groundbreaking. Her passion was sparked by Ruth Behar's book, Translated Woman, which opened her eyes to a different dimension of qualitative research.
“She (Behar) was talking about how this process of translating somebody's stories translates the storyteller as well, and the stories that she was sharing were so powerful and also so reflective of the social conditions that affect certain groups of people in a certain kind of way that I was moved. I was in tears, and I didn't understand that that was research,” explained Dr. Bhattacharya.
As Dr. Bhattacharya started work on her dissertation, she was greatly influenced by Linda Tuhiwai Smith's work on decolonizing methodologies and by conversations with her colleague, Violet Johnson. After being encouraged to decolonize her mind and methodology, Dr. Bhattacharya dove into reading post-colonial scholars’ work. This research led Dr. Bhattacharya to the term “D/colonizing” to describe the complex movement of transnational diasporic groups between the present and utopic dreaming.
“I felt like decolonizing work is for a transnational diasporic group of people is always a shuttling. It's a shuttling between where we are now to where we might want to imagine, without being in any relationship with any colonizing structures, discourses, materiality, so that utopian dreaming as a decolonizing strategy and then current negotiation was how I was situating that idea. And so I slash the word and call it D-slash-colonizing,” said Dr. Bhattacharya.
Dr. Bhattacharya goes on to describe the complexities of the term D/colonizing.
“I've kind of looked at like what's in the slash, what's in this in between places, what's in the movement back and forth, because there is no more fluid and no more pure colonizing spaces and no more pure decolonizing spaces. So, we're always moving through multiple types of consciousness of our own colonization and our own resistance to it. So, I wanted to make it messier than like a clean, pure thing. So that's how I got into the D-slash-colonization,” said Dr. Bhattacharya.
As we delve further into the conversation, Dr. Bhattacharya shares the challenges and goals of qualitative research.
“Linda Tuhiwai Smith first alerted me that research is an exploitative enterprise. It's a colonizing enterprise. It has history of doing some very bad things. You know, mostly from Western researchers, and in qualitative research particularly, there are some new movements that have started that I haven't been able to align myself with,” said Dr. Bhattacharya.
To ensure she conducts her research in an ethical way, Dr. Bhattacharya developed a guide for her research relationships.
“I really needed to have something that I could use, and it could also illuminate a path of ethics, of relationality with the people that I work with as participants or co-researchers,” said Dr. Bhattacharya. “I have argued that you posture to give up your will to know. … You're not owed anything. You enter the space with humility to learn what is being offered to you by folks, but you're not owed anything."
Continuing the topic of a guide for herself and others, Dr. Bhattacharya gives us a glimpse of her unique mentoring approach, which encourages students to center their identity while fearlessly breaking the traditional rules of dissertations. She has also penned a book, Fundamentals of Qualitative Research: A Practical Guide (2017), which serves as a profound resource for readers interested in applying qualitative research in their work.
“One of the things that I want to do when I mentor graduate students is to make sure that ... their voice is not silenced but, you know, sharpened, amplified and brought to bear on their work instead of telling them that they need to rigidize themselves into a certain academic voice,” said Dr. Bhattacharya.
Regarding her book, Dr. Bhattacharya explained that she wanted to give people who knew nothing about qualitative research a practical guide with exercises, so that by the time they finished the book they would have a decent understanding of how to apply qualitative research in their own work.
The final segment of our discussion focuses on Dr. Bhattacharya's personal journey juggling research, teaching, and supervising. She discusses how an autoimmune disease has more recently pushed her to actively improve that balance and to tune in to her body and energy.
Dr. Bhattacharya describes giving all of her time to writing, researching, and helping students – occasionally to her detriment. Now, she designs her classes efficiently so that she can assist students without burning out.
Throughout the episode, Dr. Bhattacharya stresses the importance of unlearning and relearning. She shares that her research has evolved over time and has taken a deeper dive into D/colonizing. She emphasizes the importance of centering one's identity, unlearning traditional research rules, and fostering relationships with research participants.
“First, figure out why are you drawn to this work. … Not to just say that because I want to save people, but really figure out like when was the first time you became interested in this? Why are you interested in this? And try to figure that out. Not for academic purposes, but for the purpose of your being. ... Don't chase currency, chase your purpose,” concluded Dr. Bhattacharya.
The beauty of this episode lies not just in the insights shared but also in the candid experiences and practical tips that Dr. Bhattacharya offers. She shares her journey, her struggles, and her triumphs – making it a truly enlightening exploration. Whether you are a student, a researcher, or a professional, this episode promises a deep understanding of qualitative research and its intricate nuances.
Embark on this journey with us and let Dr. Bhattacharya guide you through the intricate maze that is qualitative research in Episode 56: Chase Your Purpose, Not Currency.
Effortlessly improve your decision making through risk modeling and analysis with @RISK software – a powerful add-in tool for Microsoft Excel. By using the Monte Carlo simulation technique, @RISK computes and tracks all possible scenarios in your risk model, and the probability each will occur, to help you judge which risks to take and which to avoid.
@RISK continues to improve to meet your needs in this uncertain world – now featuring an export option for Gantt Charts, upgraded Time Series modeling, and improved markers display for clear data interpretation.
With this update, you can now:
- Export Gantt charts to PDF for any custom time range and data scale
- Synchronize the time series models you fit using the i-th value option
- Read cumulative distribution charts more easily with automatically displayed horizontal markers
Learn more about the new release below and update your @RISK software today to benefit from the new features.
Available for ScheduleRiskAnalysis with DecisionTools Suite
Clarify communication within your team to help make data-driven decisions with the new export option for Gantt Charts in ScheduleRiskAnalysis. You can now export Gantt charts to PDF files for any custom time range and data scale, preview the report, and create it in seconds – increasing your ability to communicate project schedules and timelines to team members, stakeholders, and clients.
Adapt your analysis to your data and its characteristics with the improved Time Series feature. With this update, you can synchronize the time series models you need to fit using the i-th value option.
Enhance visibility and ease of data interpretation by effortlessly highlighting key information from your charts. Markers are now automatically displayed horizontally on cumulative distribution function charts to facilitate quick insights and informed decisions.
This new version gives you access to all the features mentioned above; we recommend that all users install it.
If you have a valid @RISK license with access to maintenance and upgrades, you can download the new version for free by opening @RISK and checking for updates from the menu.
If you are currently using our trial version, you can purchase a license of @RISK to access these new features.
Never tried @RISK before? Download your free trial today!
As 2023 progresses, manufacturing companies continue to focus on supply chain management issues. In an April 2023 survey conducted by CNBC, only 36% of supply chain managers said they expected inventories to return to normal by year’s end. The New York Federal Reserve’s Global Supply Chain Pressure Index, which collates data from a range of indicators including air freight and shipping costs, began rising again in July after dipping to a historic low in May. With the price of materials and storage still fluctuating, optimizing production processes through reliability engineering can help manufacturers reduce cost pressures elsewhere.
Reliability engineering is a sub-discipline of systems engineering that lets manufacturers fine-tune the dependability of a production plant, process, or finished product, showing how failures or waste in production can affect costs or profits. Predictive analytics solutions that use Monte Carlo simulation can help manufacturers develop reliability engineering models that allow them to effectively manage risk.
Because Monte Carlo simulation can account for a wide range of variable factors, including random chance, it can be tailored to each manufacturer’s circumstances and needs, such as improving supply chain management through optimization of raw material usage or production capacity. This article looks at two examples of manufacturing companies that have made use of Lumivero’s @RISK and DecisionTools Suite to improve their reliability engineering and risk analysis with Monte Carlo simulation.
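As a rough illustration of what such a model can look like, the following Python sketch simulates a hypothetical two-machine production line: each machine's time-to-failure is drawn from an exponential distribution, and the simulation estimates the chance of completing a production run without an unplanned stop. The machine names and MTBF values are invented; a real model would be fitted to plant data.

```python
import random

# Hypothetical mean time between failures (MTBF), in hours, for two machines
# operating in series: if either machine fails, the whole line stops.
MTBF = {"press": 400.0, "furnace": 250.0}
RUN_LENGTH = 72.0   # hours in one production run
N_TRIALS = 100_000

survived = 0
for _ in range(N_TRIALS):
    # Draw an exponential time-to-failure for each machine in this trial.
    failure_times = {m: random.expovariate(1.0 / mtbf) for m, mtbf in MTBF.items()}
    if min(failure_times.values()) > RUN_LENGTH:
        survived += 1

print(f"Probability of a {RUN_LENGTH:.0f}-hour run with no stoppage: "
      f"{survived / N_TRIALS:.1%}")
```

The same structure extends naturally to more machines, repair times, or cost-per-hour-of-downtime calculations, which is where commercial tools add fitted distributions and reporting on top.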
Trial runs for new manufacturing processes are a necessary step in research and development — a necessary step that can also be very costly. For Met-Mex Peñoles, cost is even more of a concern. The company is one of the world’s largest silver and zinc refiners and leads Latin America in the refining of gold and lead. Running refinement process trial runs with real precious metals could become very expensive very quickly.
Met-Mex Peñoles uses the Six Sigma Design of Experiments (DoE) framework to develop its trial runs. The DoE framework allows teams to investigate how processes vary as different combinations of input factors and conditions change.
According to Met-Mex Peñoles technology manager Ignatio Quijas, Lumivero’s DecisionTools Suite has played a major role in simulating their Six Sigma DoE trial runs — particularly the Monte Carlo simulation tool @RISK. “Using @RISK to simulate changes in process design allows us to answer some difficult questions without actually running trials,” Quijas told Lumivero.
Using @RISK, his team can input historical data about everything from physical measurements and processing errors to manufacturing equipment tolerances and cost analyses. Plus, the TopRank tool and @RISK’s distribution-fitting feature make it possible for Quijas to generate graphic representations of simulated trial outcomes that he can use to communicate findings to management teams. The result is less wasted precious metal in trials — and reduced costs for improving their processes.
Met-Mex Peñoles was able to develop its reliability engineering simulations based on historical data. However, what happens when manufacturers don’t have data to use as inputs for their predictive analytics tools? Hungarian risk analysis firm SzigmaSzervíz Ltd. developed a risk-quantification method to deal with just this situation. Their IntegRisk method integrates a scenario analysis process that helps generate input data to inform Monte Carlo simulation models.
Tasked with helping a client understand the potential profit impact of a new plant maintenance process, SzigmaSzervíz asked the project managers to prepare a schedule of activities that would be carried out during the maintenance work. These activities were modeled in Microsoft Project. Assuming no delays, the scheduled maintenance work would take 26 days. Then, managers and team leaders began the scenario analysis. They conducted a series of workshops in which they evaluated each maintenance activity for risk impacts — that is, for any event that could affect the projected duration of the maintenance period — based on their previous collective experience with other maintenance shutdowns.
Using data generated by the company’s scenario analyses, SzigmaSzervíz then ran Monte Carlo simulations with @RISK to generate a lognormal probability distribution. This probability model helped identify both the most likely and the most potentially damaging impact events — for example, a lack of engineering capacity to carry out electrical circuit maintenance activities that could extend maintenance completion time by as much as five days. Then, SzigmaSzervíz developed a tornado graph to show potential delays caused by the most likely impact events and determine what the overall delay to the maintenance timeline could be. The model showed a mean delay of 4.258 days to the original 26-day schedule.
Based on this outcome, SzigmaSzervíz was then able to develop another model that analyzed the potential profit loss that could result from a maintenance shutdown that lasted 30.258 days instead of 26 days. Their findings showed that this delay could cost the manufacturer as much as €5 million. The manufacturer was able to develop risk treatment plans that kept maintenance time close to projected deadlines, ultimately reducing profit loss.
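The general shape of this kind of analysis is straightforward to sketch. The Python snippet below uses purely hypothetical risk events and lognormal parameters, not SzigmaSzervíz's actual inputs, to show how scenario data feeds a simulation that estimates the mean delay on top of a 26-day baseline.

```python
import random

BASELINE = 26  # scheduled maintenance duration, in days

# Hypothetical risk events: (name, probability of occurring, mu, sigma), where
# the delay caused by an event is lognormally distributed in days.
RISK_EVENTS = [
    ("engineering capacity shortfall", 0.30, 1.0, 0.5),
    ("late spare-part delivery",       0.20, 0.5, 0.6),
    ("weather interruption",           0.10, 0.2, 0.4),
]

N_TRIALS = 100_000
total_delays = []
for _ in range(N_TRIALS):
    delay = 0.0
    for _name, prob, mu, sigma in RISK_EVENTS:
        if random.random() < prob:  # does the event occur in this scenario?
            delay += random.lognormvariate(mu, sigma)
    total_delays.append(delay)

mean_delay = sum(total_delays) / N_TRIALS
print(f"Mean delay: {mean_delay:.2f} days")
print(f"Expected duration: {BASELINE + mean_delay:.2f} days")
```

Ranking each event's contribution to the total delay is what a tornado graph then visualizes, pointing risk treatment plans at the events that matter most.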
Improve Your Reliability Engineering with Monte Carlo Simulation
Probabilistic analysis tools like DecisionTools Suite can provide manufacturers with a risk analysis process that improves reliability engineering for production and maintenance processes. Our free example models for different scenarios can help you understand more about how Monte Carlo simulation can help your organization. Download the models today.
If you’re wondering ‘what is a literature review’ or trying to figure out how to write one, you’ve come to the right place. While a literature review can be a simple summary of sources, it can also discuss published information in a variety of formats on a specific subject area, and it tends to follow an organizational pattern that combines both summary (a recap of the information) and synthesis (a re-organization of the information).
The literature review for your article, thesis, or dissertation requires keeping track of sources, their important points, and their links to each other – for hundreds of journal articles, books, research papers, videos, scholarly articles, and other references. So, it’s no surprise grad students and researchers frequently struggle with how to write a literature review.
Many university guides on the subject recommend creating a synthesis matrix for keeping track of sources, ideas, and quotations. Traditionally, this matrix was often created as a Word document, and you’ll still find many templates available online. However, more and more academics now seem to be using spreadsheets instead.
This blog post will look at the advantages and disadvantages of using Excel and Word, explore the reasons why researchers use spreadsheets, and discuss the benefits of using a specialized writing and reference management program like Citavi.
Proponents of the Excel approach are quick to tout the many benefits. First, there’s no need to pay for a new piece of software, since if you already have Microsoft Office installed on your computer, you also already have Excel. Otherwise, you can also use Google Sheets which has all the options you might need.
Then, there’s the simplicity and flexibility of using a spreadsheet. Setup time is minimal: you simply create a few columns and can get started using your literature tracking system in a matter of minutes.
Another benefit is how easily customizable the solution is – you can make the categories exactly what you want. Need a column to track the location of a study or a specific intervention? Just add it. And even though Excel can get complicated once you set up formulas or other customizations, for a literature review spreadsheet you can usually use it as a simple table.
So far, the advantages listed apply to Word as well, but Excel and Citavi have one crucial advantage over Word: they let you search, sort, and filter. Have a vague recollection of a note you wrote but only remember one term you used in it? Use Excel’s “Find” feature. Want to sort all your notes by year of publication of your source? Nothing could be easier than sorting your “year” column in ascending order. Want to find clinical trials with female participants and a statistically significant intervention? If you set up your Excel sheet as described below under “Version 2,” such combinations of queries are possible – and in Citavi, setup is even easier, as it lets you save sources directly into the program and organize your literature review outline in the knowledge organizer.
Citavi interface showing outline, sources, reference meta data, and an article PDF.
So, with all these advantages, how does the Excel method work in practice?
When you search for “Excel literature review”, Dr. Elaine Gregersen’s 2016 blog post “How I use Excel to manage my Literature Review” about her personal literature tracking system is one of the first results to pop up. It’s an approach that’s still often praised in discussion threads about Excel literature tracking methods. In her own words, it’s a simple approach, but that’s what makes it work. Her approach uses a literature review spreadsheet in addition to a reference manager. She uses one sheet only and includes columns for basic citation information, keywords, objectives, methods, and conclusions. In addition, she adds in four personalized categories: happy thoughts, unhappy thoughts, her own ethical concerns, and the author’s ethical concerns. These last two columns perfectly align with her field of Autoethnography. The happy thoughts column is for notes, such as how findings relate to her own work, while the unhappy thoughts column is for times when she disagrees with an author, among other uses.
Dr. Raul Pacheco-Vega uses a similar one-sheet method, which he calls the Conceptual Synthesis Excel Dump (CSED) technique, since he tosses in any literature he might be using for analysis. His setup overlaps in some ways with Gregersen’s but has a few differences: he has columns for the concept (i.e., theme), citation, main idea, three columns for notes (which function similarly to Gregersen’s happy and unhappy thoughts), cross-references, quotes, and page numbers.
A useful tip is to create a dedicated column for quotations to help separate the authors’ exact words from your analysis of them or of the article as a whole. This can help you avoid inadvertently misrepresenting an author’s ideas as your own when you later write your literature review.
Taking the models laid out by Gregersen and Pacheco-Vega as a jumping-off point, it’s easy to make tweaks for even better usability in your own projects. Obviously, you’ll want to create columns that fit your needs: instead of a single “main theme” column you might have several “key takeaways” columns, or a highly personal column for how each article relates to your own work. You might include only the author names and year of publication rather than the full citation (in which case we’d highly recommend saving the full details in a reference management program!). Some people might want to copy the abstract the authors provide, while others will choose to write their own summaries. You can add “notes” columns or distinguish between paraphrases, comments, and direct quotations. Beyond that, there are plenty of small things you can do to make your spreadsheet work better for you, such as linking from a citation to the actual PDF, adding comments to cells, or adding drop-down lists to make data entry easier.
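For those comfortable with a little scripting, the same matrix idea carries over directly to a dataframe. Here is a minimal Python sketch with pandas – the columns and entries are invented – that reproduces the sort-and-filter workflow described above.

```python
import pandas as pd

# A tiny, invented literature matrix with the kinds of columns discussed above.
matrix = pd.DataFrame([
    {"author": "Smith", "year": 2019, "theme": "mentoring",
     "method": "interviews", "key_takeaway": "Mentoring styles vary widely."},
    {"author": "Lee", "year": 2021, "theme": "ethics",
     "method": "survey", "key_takeaway": "Consent practices are inconsistent."},
    {"author": "Garcia", "year": 2021, "theme": "ethics",
     "method": "case study", "key_takeaway": "Context drives method choice."},
])

# Sort chronologically, as you would when drafting the review.
matrix = matrix.sort_values("year")

# A combined filter: recent sources on a single theme.
recent_ethics = matrix[(matrix["year"] >= 2020) & (matrix["theme"] == "ethics")]
print(recent_ethics[["author", "year", "key_takeaway"]])
```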
If you struggle with organizing your notes and memos, you could benefit from a reference management software like Citavi. Citavi lets you make notes within the program and easily connects your notes, memos, and quotes to your sources – helping you keep track of all your thoughts and research.
In Citavi, see all your notes and comments about a source in one place.
If you want to take your basic Excel spreadsheet up a notch, you can do so in several ways. For one, you can make use of multiple sheets in the same workbook. Dr. Kathleen Clarke describes her method, which involves a major spreadsheet for tracking all the high-level information about a source along with minor spreadsheets that are more granular. She describes her method as a mix between Gregersen’s and Pacheco-Vega’s, but she also includes additional sheets on different but related topics and for studies she wants to read later on. One other notable addition is a numbering system for her sources that corresponds to the article file names on her computer.
While there’s a lot of freedom in how you set up your Excel files, there are still some best practices you’ll likely want to follow. First, set up your table so that headers are marked as such; this way they won’t be sorted along with the other cells if you sort a column from A to Z, for example. Also, apply word-wrap formatting to cells to keep content from spilling over into neighboring empty cells – it keeps everything looking tidier and makes it easier to skim. Another handy option, recommended by McQuilliam, is to freeze the header row so that your column headers stay visible even when you’re adding entries at the bottom of your list.
The columns you include are more or less up to you, but you’ll certainly need a column for source information to avoid inadvertent plagiarism or having to hunt down sources later on. In addition, a year column is invaluable for sorting your literature chronologically in preparation for writing your lit review. To keep track of how authors build upon and discuss each other’s work, a cross-references column can also be helpful. It’s important to make it very clear which analysis and thoughts are your own and which are the author’s.
If you’re planning on using filter features later on to search by study type, keyword, or some other criterion, you’ll need to use a controlled vocabulary, i.e., each concept should be referred to by a single term rather than by a bunch of different synonyms. You can define this at the start in a key on a separate sheet of your Excel workbook so that you can easily refer to it as needed. Each time you decide to add new terms, just add them to your key.
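To see why the key matters, here is a short, hypothetical Python check that flags any entry using a term outside your controlled vocabulary, so that stray synonyms don't silently fragment your filters.

```python
# Controlled vocabulary: one agreed term per concept, as defined in your key.
VOCABULARY = {"interviews", "survey", "case study", "ethnography"}

# A hypothetical "method" column copied from a literature spreadsheet.
methods = ["interviews", "questionnaire", "survey", "case-study"]

# Flag entries that use a synonym or variant instead of the agreed term.
for row, term in enumerate(methods, start=2):  # row 1 holds the headers
    if term not in VOCABULARY:
        print(f"Row {row}: '{term}' is not in the controlled vocabulary")
```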
To save time, a streamlined option for organizing and categorizing your source information, notes, and quotes is Citavi, and we’ll look further into the benefits of using Citavi at the end of this post.
It’s hard to argue with the advantages of ease, simplicity, and flexibility that the Excel method gives you. But, there are still some big downsides to consider.
First, you have to set everything up yourself – it’s not already set up for you in a way that should fit most workflows. If you try something and later decide to take a different approach, you may need to go back and add in additional information for many sources you already examined.
Although search, filtering, and sorting options in Excel are much better than they would be in a Word table, the program is still a spreadsheet at heart, which means it’s “flatter” than a database. In other words, it’s less relational, which makes it difficult to create complex search strings to get a subset of items that fit multiple criteria or that use more complicated search techniques such as Boolean logic or wildcards.
Another drawback is that the Excel approach involves a lot of manual entry. Some amount of manual work will always be necessary – for example, typing up your comments or key takeaways – but you also won’t be able to extract information directly from PDFs (such as direct quotes or images) without using an additional PDF reader. Moreover, there are no time-saving automation options for adding source information of the kind you might be accustomed to from your reference manager.
Speaking of reference managers: in many of the Twitter discussions around the Excel note-taking approach, there are always a few comments asking why the person didn’t simply use their referencing software for their notes. Many proponents of the Excel approach stress that they do indeed use a reference management program to keep track of their source information but that they prefer to keep their notes and analysis in a separate Excel file. One reason is that even though many reference management programs let you group references into folders and tag them with specific terms, they don’t let you easily keep track of and categorize notes on a particular source. You basically get a single notes field and that’s it – no way to categorize, group, or tag the note itself, just the source as a whole.
While this is true for many reference manager programs, there’s one that goes above and beyond its competitors – Citavi! While we’ve explored how it’s possible to create a literature review with Excel and Word, it is not the most efficient way available. With Citavi, you can easily keep track of, categorize, and connect your sources – all in one place.
Citavi is a reference management program that has been designed with extensive knowledge organization for any number of sources in mind and may, in many cases, be a better alternative to the Excel method.
Citavi lets you automatically add source information for most journal articles. Then, you can read PDFs and save notes and memos directly in the program. Annotating in Citavi is as simple as annotating on paper: you can highlight sections of text in colors that indicate whether a passage is important, worth citing, or something you’d like to analyze more closely. The only difference from paper is that these notes – which can be summaries, indirect quotations, direct quotations, and comments – are always linked directly to their location in the PDF. If you ever have to look up the context for one of your own comments or a direct quotation again, one click takes you exactly where you need to go, which also makes it easy to create an annotated bibliography.
Page numbers are saved automatically, as long as the PDF metadata includes that information. Otherwise, you just need to enter a page number for an article with the first “knowledge item” you save for it. Citavi will then add all the rest automatically.
Citavi keeps track of your metadata, so it’s easy to follow any of the hundreds of citation styles available in the program.
Although the knowledge item types are pre-defined, the many options will fit most needs, and you can also always use either the keywords, categories, or the core statement field to designate the type of note you are adding if you want more customization. Any terms you use can later be searched or used as filters (more on that below). In addition, for the reference as a whole you also have pre-defined fields for keywords, groups, evaluations, abstracts, notes, and cross-references. This lets you classify at both the reference and note level, so, if you want, you can assign different categories or keywords for a source as a whole and for a statement you find in it. If you need additional source fields, there are nine custom fields which you can rename and format with drop-down options.
Where Citavi really shines against Excel is in its search features and its integration with Word and NVivo 14. You can create and save complex searches that combine or exclude certain terms, keywords, categories, note types, years, and more. You can make use of advanced search syntax such as Boolean operators, wildcards, and regular expressions. You can rate sources and filter by rating. And you have full-text search across all of your PDFs.
You can also view project statistics at a glance or use an add-on to analyze your project by author or other criteria. With Citavi and NVivo 14 integration, you can go beyond reference management by creating a springboard to collect references and thoughts, analyze literature, and connect empirical data with NVivo’s analysis tools – helping you dig deeper into your research and speed up your publishing time.
But the best part is that all of this information can be taken directly over to Word. You have all the analysis and quotes you’ve saved in a panel at the left and can just click to insert what you need. Citavi will insert the correct citation formatting and add an entry to your bibliography at the end. If you added your notes to an outline in Citavi, you can use the “Chapter” view to focus on what you need for a particular section. And, if you ever need to double-check the context for a direct quotation or your own paraphrase, you can click a link symbol to jump back to the exact spot in the PDF that you referred to.
If at some point you need to export your reference information in table format for an appendix in your dissertation (for example, as documentation of the exclusion process for a systematic review), doing so takes just a few clicks. If you’ve previously worked with Excel and want to try out Citavi, importing is just as easy, and you can of course import all of your existing notes as knowledge items.
Last but certainly not least, if you use Citavi, you have the benefit of working with one tool instead of needing to juggle an Excel spreadsheet, a reference management program, and a PDF annotation tool or PDF reader.
We think it’s a no-brainer to use Citavi instead of Excel or Google Sheets to keep track of your reading for a literature review – but then again, we might be ever so slightly biased. What do you think?
Learn more about Citavi or request a free 30-day trial today!