
Archer Library

Quantitative Research: Literature Review

  • Archer Library
  • Locating Books
  • Library eBook Collections
  • A to Z Database List
  • Research & Statistics
  • Literature Review Resources
  • Citations & Reference

Exploring the literature review 

Literature review model: 6 steps.


Adapted from The Literature Review, Machi & McEvoy (2009, p. 13).

Your Literature Review

Step 2: Search, Boolean Search Strategies, Search Limiters, ★ EBSCO & Google Drive


1. Select a Topic

"All research begins with curiosity" (Machi & McEvoy, 2009, p. 14)

Selection of a topic, along with a fully defined research interest and question, is supervised (and approved) by your professor. Tips for crafting your topic include:

  • Be specific. Take time to define your interest.
  • Topic Focus. Fully describe and sufficiently narrow the focus for research.
  • Academic Discipline. Learn more about your area of research & refine the scope.
  • Avoid Bias. Be aware of bias that you (as a researcher) may have.
  • Document your research. Use Google Docs to track your research process.
  • Research apps. Consider using Evernote or Zotero to track your research.

Consider Purpose

What will your topic and research address?

In The Literature Review: A Step-by-Step Guide for Students, Ridley notes that literature reviews serve several purposes (2008, pp. 16-17). These include the following points:

  • Historical background for the research;
  • Overview of current field provided by "contemporary debates, issues, and questions;"
  • Theories and concepts related to your research;
  • Introduce "relevant terminology" - or academic language - used in the field;
  • Connect to existing research - does your work "extend or challenge [this] or address a gap;" 
  • Provide "supporting evidence for a practical problem or issue" that your research addresses.

★ Schedule a research appointment

At this point in your literature review, take time to meet with a librarian. Why? Understanding the subject terminology used in databases can be challenging. Archer Librarians can help you structure a search, preparing you for step two. How? Contact a librarian directly or use the online form to schedule an appointment. Details are provided in the adjacent Schedule an Appointment box.

2. Search the Literature

Collect & Select Data: Preview, select, and organize

Archer Library is your go-to resource for this step in your literature review process. The literature search will include books and ebooks, scholarly and practitioner journals, theses and dissertations, and indexes. You may also choose to include web sites, blogs, open access resources, and newspapers. This library guide provides access to resources needed to complete a literature review.

Books & eBooks: Archer Library & OhioLINK

Databases: scholarly & practitioner journals.

Review the Library Databases tab on this guide; it provides links to recommended databases for Education & Psychology, Business, and General & Social Sciences.

Expand your journal search: a complete listing of available AU Library and OhioLINK databases is available on the Databases A to Z list. Search by subject, type, or name, or use the search box for a general title search. The A to Z list also includes open access resources and select internet sites.

Databases: Theses & Dissertations

Review the Library Databases tab on this guide; it includes Theses & Dissertation resources. AU Library also has AU student-authored theses and dissertations available in print; search the library catalog for these titles.

Did you know? If you are looking for particular chapters within a dissertation that is not fully available online, it is possible to submit an ILL article request. Do this instead of requesting the entire dissertation.

Newspapers:  Databases & Internet

Consider current literature in your academic field. AU Library's database collection includes The Chronicle of Higher Education and The Wall Street Journal. The Internet Resources tab in this guide provides links to newspapers and online journals such as Inside Higher Ed, COABE Journal, and Education Week.


Search Strategies & Boolean Operators

There are three basic Boolean operators: AND, OR, and NOT.

Used with your search terms, Boolean operators will either expand or limit results. What purpose do they serve? They define the relationship between your search terms. For example, the operator AND combines terms and narrows the search, while OR broadens it. When searching some databases, and Google, the operator AND may be implied.

Overview of Boolean terms

About the example: Boolean searches were conducted on November 4, 2019; result numbers may vary at a later date. No additional database limiters were set to further narrow search returns.
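The set logic behind these operators can be sketched in a few lines of Python. This is purely illustrative; the index, terms, and document numbers below are invented for the example and do not come from any real database:

```python
def boolean_search(index, term_a, term_b, operator):
    """Combine the document sets matching two terms with AND, OR, or NOT."""
    a, b = set(index.get(term_a, [])), set(index.get(term_b, []))
    if operator == "AND":
        return a & b   # both terms must match: narrows results
    if operator == "OR":
        return a | b   # either term may match: broadens results
    if operator == "NOT":
        return a - b   # first term only: excludes the second term
    raise ValueError(f"Unknown operator: {operator}")

# Toy index: which documents mention each term.
index = {
    "education": [1, 2, 3, 5],
    "technology": [2, 3, 4],
}

print(sorted(boolean_search(index, "education", "technology", "AND")))  # [2, 3]
print(sorted(boolean_search(index, "education", "technology", "OR")))   # [1, 2, 3, 4, 5]
print(sorted(boolean_search(index, "education", "technology", "NOT"))) # [1, 5]
```

Note how AND returns the fewest documents and OR the most, matching the narrowing and broadening behavior described above.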

Database Search Limiters

Database strategies for targeted search results.

Most databases include limiters, or additional parameters, you may use to strategically focus search results. EBSCO databases, such as Education Research Complete & Academic Search Complete, provide options to:

  • Limit results to full text;
  • Limit results to scholarly journals and items with references available;
  • Restrict source types to journals, magazines, conference papers, reviews, or newspapers;
  • Narrow the publication date range.

Keep in mind that these tools are defined as limiters for a reason; adding them to a search will limit the number of results returned. This can be a double-edged sword. How?

  • If limiting results to full-text only, you may miss an important piece of research that could change the direction of your work. Interlibrary loan is available to students, free of charge. Request articles that are not available in full-text; they will be sent to you via email.
  • If narrowing publication date, you may eliminate significant historical - or recent - research conducted on your topic.
  • Limiting resource type to a specific type of material may cause bias in the research results.

Use limiters with care. When starting a search, consider opting out of limiters until the initial literature screening is complete. The second or third time through your research may be the ideal time to focus on specific time periods or material (scholarly vs newspaper).

★ Truncating Search Terms

Expanding your search term at the root.

Truncating is often referred to as 'wildcard' searching. Databases may have their own specific wildcard elements; however, the most commonly used are the asterisk (*) and the question mark (?). When used within your search, they will expand returned results.

Asterisk (*) Wildcard

Using the asterisk wildcard will return varied spellings of the truncated word. In the following example, the search term education was truncated after the letter "t."
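A hypothetical sketch of how truncation behaves: a term such as educat* can be translated into a regular expression that matches any word sharing the root. The sample sentence and helper name below are invented for illustration, not a feature of any particular database:

```python
import re

def truncation_pattern(term):
    """Convert a term with a trailing * into a whole-word regex pattern."""
    root = re.escape(term.rstrip("*"))
    return re.compile(rf"\b{root}\w*\b", re.IGNORECASE)

pattern = truncation_pattern("educat*")
text = "Educators study education; an educated public educates itself."
print(pattern.findall(text))
# ['Educators', 'education', 'educated', 'educates']
```

Every word built on the root "educat" is returned, which is exactly why a truncated search expands results.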

Explore these database help pages for additional information on crafting search terms.

  • EBSCO Connect: Basic Searching with EBSCO
  • EBSCO Connect: Searching with Boolean Operators
  • EBSCO Connect: Searching with Wildcards and Truncation Symbols
  • ProQuest Help: Search Tips
  • ERIC: How does ERIC search work?

★ EBSCO Databases & Google Drive

Tips for saving research directly to Google Drive.

Researching in an EBSCO database?

It is possible to save articles (PDF and HTML) and abstracts in EBSCOhost databases directly to Google Drive. Select the Google Drive icon, authenticate using a Google account, and an EBSCO folder will be created in your account. This is a great option for managing your research. If documenting your research in a Google Doc, consider linking the information to the actual articles saved in Drive.


EBSCOHost Databases & Google Drive: Managing your Research

This video features an overview of how to use Google Drive with EBSCO databases to help manage your research. It presents information for connecting an active Google account to EBSCO and steps needed to provide permission for EBSCO to manage a folder in Drive.

About the Video: Closed captioning is available; select CC from the video menu. If you need to review a specific area of the video, view it on YouTube and expand the video description for access to topic time stamps. A video transcript is provided below.

  • EBSCOhost Databases & Google Scholar

Defining Literature Review

What is a literature review?

A definition from the Online Dictionary for Library and Information Science.

A literature review is "a comprehensive survey of the works published in a particular field of study or line of research, usually over a specific period of time, in the form of an in-depth, critical bibliographic essay or annotated list in which attention is drawn to the most significant works" (Reitz, 2014). 

A systematic review is "a literature review focused on a specific research question, which uses explicit methods to minimize bias in the identification, appraisal, selection, and synthesis of all the high-quality evidence pertinent to the question" (Reitz, 2014).


About this page

EBSCO Connect [Discovery and Search]. (2022). Searching with Boolean operators. Retrieved May 3, 2022, from https://connect.ebsco.com/s/?language=en_US

EBSCO Connect [Discovery and Search]. (2022). Searching with wildcards and truncation symbols. Retrieved May 3, 2022, from https://connect.ebsco.com/s/?language=en_US

Machi, L.A., & McEvoy, B.T. (2009). The literature review. Thousand Oaks, CA: Corwin Press.

Reitz, J.M. (2014). Online dictionary for library and information science. ABC-CLIO, Libraries Unlimited. Retrieved from https://www.abc-clio.com/ODLIS/odlis_A.aspx

Ridley, D. (2008). The literature review: A step-by-step guide for students. Thousand Oaks, CA: Sage Publications, Inc.

Archer Librarians

Schedule an appointment.

Contact a librarian directly (email), or submit a request form. If you have worked with someone before, you can request them on the form.

  • ★ Archer Library Help • Online Request Form
  • Carrie Halquist • Reference & Instruction
  • Jessica Byers • Reference & Curation
  • Don Reams • Corrections Education & Reference
  • Diane Schrecker • Education & Head of the IRC
  • Tanaya Silcox • Technical Services & Business
  • Sarah Thomas • Acquisitions & ATS Librarian
  • Last Updated: Nov 17, 2023 7:46 AM
  • URL: https://libguides.ashland.edu/quantitative

Archer Library • Ashland University © Copyright 2023. An Equal Opportunity/Equal Access Institution.

Review of Related Literature: Format, Example, & How to Make RRL

A review of related literature is a separate paper or a part of an article that collects and synthesizes discussion on a topic. Its purpose is to show the current state of research on the issue and highlight gaps in existing knowledge. A literature review can be included in a research paper or scholarly article, typically following the introduction and before the research methods section.

The picture provides an introductory definition of a review of related literature.

This article will clarify the definition, significance, and structure of a review of related literature. You’ll also learn how to organize your literature review and discover ideas for an RRL in different subjects.

🔤 What Is RRL?

  • ❗ Significance of Literature Review
  • 🔎 How to Search for Literature
  • 🧩 Literature Review Structure
  • 📋 Format of RRL — APA, MLA, & Others
  • ✍️ How to Write an RRL
  • 📚 Examples of RRL

🔗 References

A review of related literature (RRL) is a part of the research report that examines significant studies, theories, and concepts published in scholarly sources on a particular topic. An RRL includes 3 main components:

  • A short overview and critique of the previous research.
  • Similarities and differences between past studies and the current one.
  • An explanation of the theoretical frameworks underpinning the research.

❗ Significance of Review of Related Literature

Although the goal of a review of related literature differs depending on the discipline and its intended use, its significance cannot be overstated. Here are some examples of how a review might be beneficial:

  • It helps determine knowledge gaps .
  • It saves from duplicating research that has already been conducted.
  • It provides an overview of various research areas within the discipline.
  • It demonstrates the researcher’s familiarity with the topic.

🔎 How to Perform a Literature Search

Including a description of your search strategy in the literature review section can significantly increase your grade. You can search for sources with the following steps:

🧩 Literature Review Structure Example

The majority of literature reviews follow a standard introduction-body-conclusion structure. Let’s look at the RRL structure in detail.

This image shows the literature review structure.

Introduction of Review of Related Literature: Sample

An introduction should clarify the study topic and the depth of the information to be delivered. It should also explain the types of sources used. If your lit. review is part of a larger research proposal or project, you can combine its introductory paragraph with the introduction of your paper.

Here is a sample introduction to an RRL about cyberbullying:

Bullying has troubled people since the beginning of time. However, with modern technological advancements, especially social media, bullying has evolved into cyberbullying. As a result, nowadays, teenagers and adults cannot flee their bullies, which makes them feel lonely and helpless. This literature review will examine recent studies on cyberbullying.

Sample Review of Related Literature Thesis

A thesis statement should include the central idea of your literature review and the primary supporting elements you discovered in the literature. Thesis statements are typically put at the end of the introductory paragraph.

Look at a sample thesis of a review of related literature:

This literature review shows that scholars have recently covered the issues of bullies’ motivation, the impact of bullying on victims and aggressors, common cyberbullying techniques, and victims’ coping strategies. However, there is still no agreement on the best practices to address cyberbullying.

Literature Review Body Paragraph Example

The main body of a literature review should provide an overview of the existing research on the issue. Body paragraphs should not just summarize each source but analyze them. You can organize your paragraphs with these 3 elements:

  • Claim . Start with a topic sentence linked to your literature review purpose.
  • Evidence . Cite relevant information from your chosen sources.
  • Discussion . Explain how the cited data supports your claim.

Here’s a literature review body paragraph example:

Scholars have examined the link between the aggressor and the victim. Beran et al. (2007) state that students bullied online often become cyberbullies themselves. Faucher et al. (2014) confirm this with their findings: they discovered that male and female students began engaging in cyberbullying after being subject to bullying. Hence, one can conclude that being a victim of bullying increases one’s likelihood of becoming a cyberbully.

Review of Related Literature: Conclusion

A conclusion presents a general consensus on the topic. Depending on your literature review purpose, it might include the following:

  • Introduction to further research . If you write a literature review as part of a larger research project, you can present your research question in your conclusion .
  • Overview of theories . You can summarize critical theories and concepts to help your reader understand the topic better.
  • Discussion of the gap . If you identified a research gap in the reviewed literature, your conclusion could explain why that gap is significant.

Check out a conclusion example that discusses a research gap:

There is extensive research into bullies’ motivation, the consequences of bullying for victims and aggressors, strategies for bullying, and coping with it. Yet, scholars still have not reached a consensus on what to consider the best practices to combat cyberbullying. This question is of great importance because of the significant adverse effects of cyberbullying on victims and bullies.

📋 Format of RRL — APA, MLA, & Others

In this section, we will discuss how to format an RRL according to the most common citation styles: APA, Chicago, MLA, and Harvard.

Writing a literature review using the APA 7 style requires the following text formatting:

  • When using APA in-text citations , include the author’s last name and the year of publication in parentheses.
  • For direct quotations , you must also add the page number. If you use sources without page numbers, such as websites or e-books, include a paragraph number instead.
  • When referring to the author’s name in a sentence , you do not need to repeat it at the end of the sentence. Instead, include the year of publication inside the parentheses after their name.
  • The reference list should be included at the end of your literature review. It is always alphabetized by the last name of the author (from A to Z), and the lines are indented one-half inch from the left margin of your paper. Do not forget to invert authors’ names (the last name should come first) and include the full titles of journals instead of their abbreviations. If you use an online source, add its URL.
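As a quick illustration of the in-text citation rules above, here is a deliberately simplified Python sketch. The helper name is invented for the example, and real APA citations involve many more cases (multiple authors, organizational authors, missing dates, and so on):

```python
def apa_in_text(last_name, year, page=None):
    """Build a parenthetical APA-style citation; add a page number for direct quotes."""
    parts = [last_name, str(year)]
    if page is not None:
        parts.append(f"p. {page}")  # required when quoting directly
    return "(" + ", ".join(parts) + ")"

print(apa_in_text("Ridley", 2008))          # (Ridley, 2008)
print(apa_in_text("Machi", 2009, page=13))  # (Machi, 2009, p. 13)
```

The second call shows the extra page element that direct quotations require.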

The RRL format in the Chicago style is as follows:

  • Author-date . You place your citations in parentheses within the text, indicating the name of the author and the year of publication.
  • Notes and bibliography . You place your citations in numbered footnotes or endnotes to connect the citation back to the source in the bibliography.
  • The reference list, or bibliography , in Chicago style, is at the end of a literature review. The sources are arranged alphabetically and single-spaced. Each bibliography entry begins with the author’s name and the source’s title, followed by publication information, such as the city of publication, the publisher, and the year of publication.

Writing a literature review using the MLA style requires the following text formatting:

  • In the MLA format, you can cite a source in the text by indicating the author’s last name and the page number in parentheses at the end of the citation. If the cited information takes several pages, you need to include all the page numbers.
  • The reference list in MLA style is titled “ Works Cited .” In this section, all sources used in the paper should be listed in alphabetical order. Each entry should contain the author, title of the source, title of the journal or a larger volume, other contributors, version, number, publisher, and publication date.

The Harvard style requires you to use the following text formatting for your RRL:

  • In-text citations in the Harvard style include the author’s last name and the year of publication. If you are using a direct quote in your literature review, you need to add the page number as well.
  • Arrange your list of references alphabetically. Each entry should contain the author’s last name, their initials, the year of publication, the title of the source, and other publication information, like the journal title and issue number or the publisher.
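The alphabetical ordering required by both APA and Harvard reference lists can be sketched in a few lines of Python. The entries reuse sources cited earlier on this page, and the comma-splitting heuristic is a simplification (it assumes each entry begins with "LastName,"):

```python
references = [
    "Ridley, D. (2008). The literature review: A step-by-step guide for students.",
    "Machi, L.A., & McEvoy, B.T. (2009). The literature review.",
    "Reitz, J.M. (2014). Online dictionary for library and information science.",
]

# Sort by the first author's last name: the text before the first comma.
alphabetized = sorted(references, key=lambda entry: entry.split(",")[0].lower())
for entry in alphabetized:
    print(entry)
# Machi comes first, then Reitz, then Ridley.
```

Citation managers such as Zotero perform this ordering automatically, but it is worth checking manually when assembling a list by hand.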

✍️ How to Write a Review of Related Literature – Sample

Literature reviews can be organized in many ways depending on what you want to achieve with them. In this section, we will look at 3 examples of how you can write your RRL.

This image shows the organizational patterns of a literature review.

Thematic Literature Review

A thematic literature review is arranged around central themes or issues discussed in the sources. If you have identified some recurring themes in the literature, you can divide your RRL into sections that address various aspects of the topic. For example, if you examine studies on e-learning, you can distinguish such themes as the cost-effectiveness of online learning, the technologies used, and its effectiveness compared to traditional education.

Chronological Literature Review

A chronological literature review is a way to track the development of the topic over time. If you use this method, avoid merely listing and summarizing sources in chronological order. Instead, try to analyze the trends, turning moments, and critical debates that have shaped the field’s path. Also, you can give your interpretation of how and why specific advances occurred.

Methodological Literature Review

A methodological literature review differs from the preceding ones in that it usually doesn’t focus on the sources’ content. Instead, it is concerned with the research methods . So, if your references come from several disciplines or fields employing various research techniques, you can compare the findings and conclusions of different methodologies, for instance:

  • empirical vs. theoretical studies;
  • qualitative vs. quantitative research.

📚 Examples of Review of Related Literature and Studies

We have prepared a short example of RRL on climate change for you to see how everything works in practice!

Climate change is one of the most important issues nowadays. Based on a variety of facts, it is now clearer than ever that humans are altering the Earth's climate. The atmosphere and oceans have warmed, causing sea level rise, a significant loss of Arctic ice, and other climate-related changes. This literature review provides a thorough summary of research on climate change, focusing on climate change fingerprints and evidence of human influence on the Earth's climate system.

Physical Mechanisms and Evidence of Human Influence

Scientists are convinced that climate change is directly influenced by the emission of greenhouse gases. They have carefully analyzed various climate data and evidence, concluding that the majority of the observed global warming over the past 50 years cannot be explained by natural factors alone. Instead, there is compelling evidence pointing to a significant contribution of human activities, primarily the emission of greenhouse gases (Walker, 2014). For example, based on simple physics calculations, doubled carbon dioxide concentration in the atmosphere can lead to a global temperature increase of approximately 1 degree Celsius (Elderfield, 2022). In order to determine the human influence on climate, scientists still have to analyze many natural changes that affect temperature, precipitation, and other components of climate on timeframes ranging from days to decades and beyond.

Fingerprinting Climate Change

Fingerprinting climate change is a useful tool to identify the causes of global warming because different factors leave unique marks on climate records. This is evident when scientists look beyond overall temperature changes and examine how warming is distributed geographically and over time (Watson, 2022). By investigating these climate patterns, scientists can obtain a more complex understanding of the connections between natural climate variability and climate variability caused by human activity.

Modeling Climate Change and Feedback

To accurately predict the consequences of feedback mechanisms, the rate of warming, and regional climate change, scientists can employ sophisticated mathematical models of the atmosphere, ocean, land, and ice (the cryosphere). These models are grounded in well-established physical laws and incorporate the latest scientific understanding of climate-related processes (Shuckburgh, 2013). Although different climate models produce slightly varying projections for future warming, they all agree that feedback mechanisms play a significant role in amplifying the initial warming caused by greenhouse gas emissions (Meehl, 2019).

In conclusion, the literature on global warming indicates that there are well-understood physical processes that link variations in greenhouse gas concentrations to climate change. In addition, it covers the scientific proof that the rates of these gases in the atmosphere have increased and continue to rise fast. According to the sources, the majority of this recent change is almost definitely caused by greenhouse gas emissions produced by human activities. Citizens and governments can alter their energy production methods and consumption patterns to reduce greenhouse gas emissions and, thus, the magnitude of climate change. By acting now, society can prevent the worst consequences of climate change and build a more resilient and sustainable future for generations to come.

Have you ever struggled with finding the topic for an RRL in different subjects? Read the following paragraphs to get some ideas!

Nursing Literature Review Example

Many topics in the nursing field require research. For example, you can write a review of literature related to dengue fever . Give a general overview of dengue virus infections, including its clinical symptoms, diagnosis, prevention, and therapy.

Another good idea is to review related literature and studies about teenage pregnancy . This review can describe the effectiveness of specific programs for adolescent mothers and their children and summarize recommendations for preventing early pregnancy.

📝 Check out some more valuable examples below:

  • Hospital Readmissions: Literature Review .
  • Literature Review: Lower Sepsis Mortality Rates .
  • Breast Cancer: Literature Review .
  • Sexually Transmitted Diseases: Literature Review .
  • PICO for Pressure Ulcers: Literature Review .
  • COVID-19 Spread Prevention: Literature Review .
  • Chronic Obstructive Pulmonary Disease: Literature Review .
  • Hypertension Treatment Adherence: Literature Review .
  • Neonatal Sepsis Prevention: Literature Review .
  • Healthcare-Associated Infections: Literature Review .
  • Understaffing in Nursing: Literature Review .

Psychology Literature Review Example

If you look for an RRL topic in psychology , you can write a review of related literature about stress . Summarize scientific evidence about stress stages, side effects, types, or reduction strategies. Or you can write a review of related literature about computer game addiction . In this case, you may concentrate on the neural mechanisms underlying the internet gaming disorder, compare it to other addictions, or evaluate treatment strategies.

A review of related literature about cyberbullying is another interesting option. You can highlight the impact of cyberbullying on undergraduate students’ academic, social, and emotional development.

📝 Look at the examples that we have prepared for you to come up with some more ideas:

  • Mindfulness in Counseling: A Literature Review .
  • Team-Building Across Cultures: Literature Review .
  • Anxiety and Decision Making: Literature Review .
  • Literature Review on Depression .
  • Literature Review on Narcissism .
  • Effects of Depression Among Adolescents .
  • Causes and Effects of Anxiety in Children .

Literature Review — Sociology Example

Sociological research poses critical questions about social structures and phenomena. For example, you can write a review of related literature about child labor , exploring cultural beliefs and social norms that normalize the exploitation of children. Or you can create a review of related literature about social media . It can investigate the impact of social media on relationships between adolescents or the role of social networks on immigrants’ acculturation .

📝 You can find some more ideas below!

  • Single Mothers’ Experiences of Relationships with Their Adolescent Sons .
  • Teachers and Students’ Gender-Based Interactions .
  • Gender Identity: Biological Perspective and Social Cognitive Theory .
  • Gender: Culturally-Prescribed Role or Biological Sex .
  • The Influence of Opioid Misuse on Academic Achievement of Veteran Students .
  • The Importance of Ethics in Research .
  • The Role of Family and Social Network Support in Mental Health .

Education Literature Review Example

For your education studies , you can write a review of related literature about academic performance to determine factors that affect student achievement and highlight research gaps. One more idea is to create a review of related literature on study habits , considering their role in the student’s life and academic outcomes.

You can also evaluate a computerized grading system in a review of related literature to single out its advantages and barriers to implementation. Or you can complete a review of related literature on instructional materials to identify their most common types and effects on student achievement.

📝 Find some inspiration in the examples below:

  • Literature Review on Online Learning Challenges From COVID-19 .
  • Education, Leadership, and Management: Literature Review .
  • Literature Review: Standardized Testing Bias .
  • Bullying of Disabled Children in School .
  • Interventions and Letter & Sound Recognition: A Literature Review .
  • Social-Emotional Skills Program for Preschoolers .
  • Effectiveness of Educational Leadership Management Skills .

Business Research Literature Review

If you’re a business student, you can focus on customer satisfaction in your review of related literature. Discuss specific customer satisfaction features and how it is affected by service quality and prices. You can also create a theoretical literature review about consumer buying behavior to evaluate theories that have significantly contributed to understanding how consumers make purchasing decisions.

📝 Look at the examples to get more exciting ideas:

  • Leadership and Communication: Literature Review .
  • Human Resource Development: Literature Review .
  • Project Management. Literature Review .
  • Strategic HRM: A Literature Review .
  • Customer Relationship Management: Literature Review .
  • Literature Review on International Financial Reporting Standards .
  • Cultures of Management: Literature Review .

To conclude, a review of related literature is a significant genre of scholarly works that can be applied in various disciplines and for multiple goals. The sources examined in an RRL provide theoretical frameworks for future studies and help create original research questions and hypotheses.

When you finish your outstanding literature review, don’t forget to check whether it sounds logical and coherent. Our text-to-speech tool can help you with that!

  • Literature Reviews | University of North Carolina at Chapel Hill
  • Writing a Literature Review | Purdue Online Writing Lab
  • Learn How to Write a Review of Literature | University of Wisconsin-Madison
  • The Literature Review: A Few Tips on Conducting It | University of Toronto
  • Writing a Literature Review | UC San Diego
  • Conduct a Literature Review | The University of Arizona
  • Methods for Literature Reviews | National Library of Medicine
  • Literature Reviews: 5. Write the Review | Georgia State University


How to write a review of related literature (RRL) in research


A review of related literature (a.k.a. RRL in research) is a comprehensive review of the existing literature pertaining to a specific topic or research question. An effective review provides the reader with an organized analysis and synthesis of the existing knowledge about a subject. With the increasing amount of new information being disseminated every day, conducting a review of related literature is becoming more difficult, and its importance is clearer than ever.

All new knowledge is necessarily based on previously known information, and every new scientific study must be conducted and reported in the context of previous studies. This makes a review of related literature essential for research, and although it may be tedious work at times, most researchers will complete many such reviews of varying depth during their careers. So, why exactly is a review of related literature important?


Why a review of related literature in research is important  

Before considering how to write a review of related literature, it is necessary to understand its importance. Although the purpose of a review of related literature varies depending on the discipline and how it will be used, its importance is never in question. Here are some ways in which a review can be crucial.

  • Identify gaps in the knowledge – This is the primary purpose of a review of related literature (often called an RRL in research). To create new knowledge, you must first determine what knowledge may be missing. This also helps to define the scope of your study.
  • Avoid duplication of research efforts – Not only will a review of related literature indicate gaps in the existing research, but it will also steer you away from duplicating research that has already been done, saving precious resources.
  • Provide an overview of disparate and interdisciplinary research areas – Researchers cannot possibly know everything related to their disciplines, so it is very helpful to have access to a review of related literature that has already been written and published.
  • Highlight the researcher's familiarity with their topic [1] – A strong review of related literature strengthens readers' confidence in both the study and the researcher.


Tips on how to write a review of related literature in research

Given that you will probably need to produce several of these at some point, here are a few general tips on how to write an effective review of related literature [2].

  • Define your topic, audience, and purpose: You will be spending a lot of time with this review, so choose a topic that interests you. While deciding what to include, think about who you expect to read the review – researchers in your discipline, other scientists, the general public – and tailor the language to that audience. Also, think about the purpose your review of related literature will serve.
  • Conduct a comprehensive literature search: Emphasize more recent works, but don't forget to include some older publications as well. Cast a wide net, as you may find interesting and relevant literature in unexpected databases or library corners. Don't forget to search for recent conference papers.
  • Review the identified articles and take notes: Take notes in a way that lets individual items be moved around when you organize them. Index cards are great tools for this: write each idea on a separate card along with its source, and the cards can then be easily grouped and organized.
  • Determine how to organize your review: A review of related literature should not be merely a list of descriptions. It should be organized by some criterion, such as chronologically or thematically.
  • Be critical and objective: Don't just report the findings of other studies. Challenge the methodology, find errors in the analysis, and question the conclusions; use what you find to improve your own research. However, do not insert your opinions into the review. Remain objective and open-minded.
  • Structure your review logically: Guide the reader through the information. The structure will depend on the function of the review. Creating an outline before writing the RRL is a good way to ensure the presented information flows well.

As you read more extensively in your discipline, you will notice that the review of related literature appears in various forms in different places. For example, when you read an article about an experimental study, you will typically see a literature review, or RRL, in the introduction that includes brief descriptions of similar studies. In longer research studies and dissertations, especially in the social sciences, the review of related literature will typically be a separate chapter and include more information on methodologies and theory building. In addition, stand-alone review articles are published that are extremely useful to researchers.

The review of related literature, often abbreviated as RRL, is an important communication tool that can take many forms and serve many purposes. It is a tool that all researchers should befriend.

  1. University of North Carolina at Chapel Hill Writing Center. Literature Reviews. https://writingcenter.unc.edu/tips-and-tools/literature-reviews/ [Accessed September 8, 2022]
  2. Pautasso M. Ten simple rules for writing a literature review. PLoS Comput Biol. 2013, 9. doi: 10.1371/journal.pcbi.1003149.

Q: Is research complete without a review of related literature?

A research project is usually considered incomplete without a proper review of related literature. The review of related literature is a crucial component of any research project, as it provides context for the research question, identifies gaps in the existing literature, and ensures novelty by avoiding duplication. It also helps inform research design, supports arguments, highlights the significance of a study, and demonstrates your knowledge and expertise.

Q: What is the difference between an RRL and an RRS?

The key difference between an RRL and an RRS lies in their focus and scope. An RRL or review of related literature examines a broad range of literature, including theoretical frameworks, concepts, and empirical studies, to establish the context and significance of the research topic. On the other hand, an RRS or review of research studies specifically focuses on analyzing and summarizing previous research studies within a specific research domain to gain insights into methodologies, findings, and gaps in the existing body of knowledge. While there may be some overlap between the two, they serve distinct purposes and cover different aspects of the research process.

Q: Does a review of related literature improve the accuracy and validity of research?

Yes, a comprehensive review of related literature (RRL) plays a vital role in improving the accuracy and validity of research. It helps authors gain a deeper understanding of, and offers different perspectives on, the research topic. An RRL can help you identify research gaps, inform the selection of appropriate research methodologies, enhance theoretical frameworks, avoid biases and errors, and even provide support for research design and interpretation. By building upon and critically engaging with the existing related literature, researchers can ensure their work is rigorous, reliable, and contributes meaningfully to their field of study.



Research Process :: Step by Step


Organize the literature review into sections that present themes or identify trends, including relevant theory. You are not trying to list all the material published, but to synthesize and evaluate it according to the guiding concept of your thesis or research question.  

What is a literature review?

A literature review is an account of what has been published on a topic by accredited scholars and researchers. Occasionally you will be asked to write one as a separate assignment, but more often it is part of the introduction to an essay, research report, or thesis. In writing the literature review, your purpose is to convey to your reader what knowledge and ideas have been established on a topic, and what their strengths and weaknesses are. As a piece of writing, the literature review must be defined by a guiding concept (e.g., your research objective, the problem or issue you are discussing, or your argumentative thesis). It is not just a descriptive list of the material available, or a set of summaries.

A literature review must do these things:

  • be organized around and related directly to the thesis or research question you are developing
  • synthesize results into a summary of what is and is not known
  • identify areas of controversy in the literature
  • formulate questions that need further research

Ask yourself questions like these:

  • What is the specific thesis, problem, or research question that my literature review helps to define?
  • What type of literature review am I conducting? Am I looking at issues of theory? methodology? policy? quantitative research (e.g., on the effectiveness of a new procedure)? qualitative research (e.g., studies of loneliness among migrant workers)?
  • What is the scope of my literature review? What types of publications am I using (e.g., journals, books, government documents, popular media)? What discipline am I working in (e.g., nursing, psychology, sociology, medicine)?
  • How good was my information seeking? Has my search been wide enough to ensure I've found all the relevant material? Has it been narrow enough to exclude irrelevant material? Is the number of sources I've used appropriate for the length of my paper?
  • Have I critically analyzed the literature I use? Do I follow through a set of concepts and questions, comparing items to each other in the ways they deal with them? Instead of just listing and summarizing items, do I assess them, discussing strengths and weaknesses?
  • Have I cited and discussed studies contrary to my perspective?
  • Will the reader find my literature review relevant, appropriate, and useful?

Ask yourself questions like these about each book or article you include:

  • Has the author formulated a problem/issue?
  • Is it clearly defined? Is its significance (scope, severity, relevance) clearly established?
  • Could the problem have been approached more effectively from another perspective?
  • What is the author's research orientation (e.g., interpretive, critical science, combination)?
  • What is the author's theoretical framework (e.g., psychological, developmental, feminist)?
  • What is the relationship between the theoretical and research perspectives?
  • Has the author evaluated the literature relevant to the problem/issue? Does the author include literature taking positions she or he does not agree with?
  • In a research study, how good are the basic components of the study design (e.g., population, intervention, outcome)? How accurate and valid are the measurements? Is the analysis of the data accurate and relevant to the research question? Are the conclusions validly based upon the data and analysis?
  • In material written for a popular readership, does the author use appeals to emotion, one-sided examples, or rhetorically-charged language and tone? Is there an objective basis to the reasoning, or is the author merely "proving" what he or she already believes?
  • How does the author structure the argument? Can you "deconstruct" the flow of the argument to see whether or where it breaks down logically (e.g., in establishing cause-effect relationships)?
  • In what ways does this book or article contribute to our understanding of the problem under study, and in what ways is it useful for practice? What are the strengths and limitations?
  • How does this book or article relate to the specific thesis or question I am developing?

Text written by Dena Taylor, Health Sciences Writing Centre, University of Toronto

http://www.writing.utoronto.ca/advice/specific-types-of-writing/literature-review

  • Last Updated: Jan 20, 2024 4:26 PM
  • URL: https://libguides.uta.edu/researchprocess

University of Texas Arlington Libraries 702 Planetarium Place · Arlington, TX 76019 · 817-272-3000

Methodology | Open access | Published: 11 October 2016

Reviewing the research methods literature: principles and strategies illustrated by a systematic overview of sampling in qualitative research

Stephen J. Gentles, Cathy Charles, David B. Nicholas, Jenny Ploeg & K. Ann McKibbon

Systematic Reviews, volume 5, Article number: 172 (2016)


Overviews of methods are potentially useful means to increase clarity and enhance collective understanding of specific methods topics that may be characterized by ambiguity, inconsistency, or a lack of comprehensiveness. This type of review represents a distinct literature synthesis method, although to date, its methodology remains relatively undeveloped despite several aspects that demand unique review procedures. The purpose of this paper is to initiate discussion about what a rigorous systematic approach to reviews of methods, referred to here as systematic methods overviews , might look like by providing tentative suggestions for approaching specific challenges likely to be encountered. The guidance offered here was derived from experience conducting a systematic methods overview on the topic of sampling in qualitative research.

The guidance is organized into several principles that highlight specific objectives for this type of review given the common challenges that must be overcome to achieve them. Optional strategies for achieving each principle are also proposed, along with discussion of how they were successfully implemented in the overview on sampling. We describe seven paired principles and strategies that address the following aspects: delimiting the initial set of publications to consider, searching beyond standard bibliographic databases, searching without the availability of relevant metadata, selecting publications on purposeful conceptual grounds, defining concepts and other information to abstract iteratively, accounting for inconsistent terminology used to describe specific methods topics, and generating rigorous verifiable analytic interpretations. Since a broad aim in systematic methods overviews is to describe and interpret the relevant literature in qualitative terms, we suggest that iterative decision making at various stages of the review process, and a rigorous qualitative approach to analysis are necessary features of this review type.

Conclusions

We believe that the principles and strategies provided here will be useful to anyone choosing to undertake a systematic methods overview. This paper represents an initial effort to promote high quality critical evaluations of the literature regarding problematic methods topics, which have the potential to promote clearer, shared understandings, and accelerate advances in research methods. Further work is warranted to develop more definitive guidance.


While reviews of methods are not new, they represent a distinct review type whose methodology remains relatively under-addressed in the literature despite the clear implications for unique review procedures. One of the few examples to describe it is a chapter containing reflections of two contributing authors in a book of 21 reviews on methodological topics compiled for the British National Health Service, Health Technology Assessment Program [ 1 ]. Notable is their observation of how the differences between the methods reviews and conventional quantitative systematic reviews, specifically attributable to their varying content and purpose, have implications for defining what qualifies as systematic. While the authors describe general aspects of “systematicity” (including rigorous application of a methodical search, abstraction, and analysis), they also describe a high degree of variation within the category of methods reviews itself and so offer little in the way of concrete guidance. In this paper, we present tentative concrete guidance, in the form of a preliminary set of proposed principles and optional strategies, for a rigorous systematic approach to reviewing and evaluating the literature on quantitative or qualitative methods topics. For purposes of this article, we have used the term systematic methods overview to emphasize the notion of a systematic approach to such reviews.

The conventional focus of rigorous literature reviews (i.e., review types for which systematic methods have been codified, including the various approaches to quantitative systematic reviews [ 2 – 4 ], and the numerous forms of qualitative and mixed methods literature synthesis [ 5 – 10 ]) is to synthesize empirical research findings from multiple studies. By contrast, the focus of overviews of methods, including the systematic approach we advocate, is to synthesize guidance on methods topics. The literature consulted for such reviews may include the methods literature, methods-relevant sections of empirical research reports, or both. Thus, this paper adds to previous work published in this journal—namely, recent preliminary guidance for conducting reviews of theory [ 11 ]—that has extended the application of systematic review methods to novel review types that are concerned with subject matter other than empirical research findings.

Published examples of methods overviews illustrate the varying objectives they can have. One objective is to establish methodological standards for appraisal purposes. For example, reviews of existing quality appraisal standards have been used to propose universal standards for appraising the quality of primary qualitative research [ 12 ] or evaluating qualitative research reports [ 13 ]. A second objective is to survey the methods-relevant sections of empirical research reports to establish current practices on methods use and reporting practices, which Moher and colleagues [ 14 ] recommend as a means for establishing the needs to be addressed in reporting guidelines (see, for example [ 15 , 16 ]). A third objective for a methods review is to offer clarity and enhance collective understanding regarding a specific methods topic that may be characterized by ambiguity, inconsistency, or a lack of comprehensiveness within the available methods literature. An example of this is an overview whose objective was to review the inconsistent definitions of intention-to-treat analysis (the methodologically preferred approach to analyze randomized controlled trial data) that have been offered in the methods literature and propose a solution for improving conceptual clarity [ 17 ]. Such reviews are warranted because students and researchers who must learn or apply research methods typically lack the time to systematically search, retrieve, review, and compare the available literature to develop a thorough and critical sense of the varied approaches regarding certain controversial or ambiguous methods topics.

While systematic methods overviews , as a review type, include both reviews of the methods literature and reviews of methods-relevant sections from empirical study reports, the guidance provided here is primarily applicable to reviews of the methods literature since it was derived from the experience of conducting such a review [ 18 ], described below. To our knowledge, there are no well-developed proposals on how to rigorously conduct such reviews. Such guidance would have the potential to improve the thoroughness and credibility of critical evaluations of the methods literature, which could increase their utility as a tool for generating understandings that advance research methods, both qualitative and quantitative. Our aim in this paper is thus to initiate discussion about what might constitute a rigorous approach to systematic methods overviews. While we hope to promote rigor in the conduct of systematic methods overviews wherever possible, we do not wish to suggest that all methods overviews need be conducted to the same standard. Rather, we believe that the level of rigor may need to be tailored pragmatically to the specific review objectives, which may not always justify the resource requirements of an intensive review process.

The example systematic methods overview on sampling in qualitative research

The principles and strategies we propose in this paper are derived from experience conducting a systematic methods overview on the topic of sampling in qualitative research [ 18 ]. The main objective of that methods overview was to bring clarity and deeper understanding of the prominent concepts related to sampling in qualitative research (purposeful sampling strategies, saturation, etc.). Specifically, we interpreted the available guidance, commenting on areas lacking clarity, consistency, or comprehensiveness (without proposing any recommendations on how to do sampling). This was achieved by a comparative and critical analysis of publications representing the most influential (i.e., highly cited) guidance across several methodological traditions in qualitative research.

The specific methods and procedures for the overview on sampling [ 18 ] from which our proposals are derived were developed both after soliciting initial input from local experts in qualitative research and an expert health librarian (KAM) and through ongoing careful deliberation throughout the review process. To summarize, in that review, we employed a transparent and rigorous approach to search the methods literature, selected publications for inclusion according to a purposeful and iterative process, abstracted textual data using structured abstraction forms, and analyzed (synthesized) the data using a systematic multi-step approach featuring abstraction of text, summary of information in matrices, and analytic comparisons.

For this article, we reflected on both the problems and challenges encountered at different stages of the review and our means for selecting justifiable procedures to deal with them. Several principles were then derived by considering the generic nature of these problems, while the generalizable aspects of the procedures used to address them formed the basis of optional strategies. Further details of the specific methods and procedures used in the overview on qualitative sampling are provided below to illustrate both the types of objectives and challenges that reviewers will likely need to consider and our approach to implementing each of the principles and strategies.

Organization of the guidance into principles and strategies

For the purposes of this article, principles are general statements outlining what we propose are important aims or considerations within a particular review process, given the unique objectives or challenges to be overcome with this type of review. These statements follow the general format, “considering the objective or challenge of X, we propose Y to be an important aim or consideration.” Strategies are optional and flexible approaches for implementing the previous principle outlined. Thus, generic challenges give rise to principles, which in turn give rise to strategies.

We organize the principles and strategies below into three sections corresponding to processes characteristic of most systematic literature synthesis approaches: literature identification and selection ; data abstraction from the publications selected for inclusion; and analysis , including critical appraisal and synthesis of the abstracted data. Within each section, we also describe the specific methodological decisions and procedures used in the overview on sampling in qualitative research [ 18 ] to illustrate how the principles and strategies for each review process were applied and implemented in a specific case. We expect this guidance and accompanying illustrations will be useful for anyone considering engaging in a methods overview, particularly those who may be familiar with conventional systematic review methods but may not yet appreciate some of the challenges specific to reviewing the methods literature.

Results and discussion

Literature identification and selection.

The identification and selection process includes search and retrieval of publications and the development and application of inclusion and exclusion criteria to select the publications that will be abstracted and analyzed in the final review. Literature identification and selection for overviews of the methods literature is challenging and potentially more resource-intensive than for most reviews of empirical research. This is true for several reasons that we describe below, alongside discussion of the potential solutions. Additionally, we suggest in this section how the selection procedures can be chosen to match the specific analytic approach used in methods overviews.

Delimiting a manageable set of publications

One aspect of methods overviews that can make identification and selection challenging is the fact that the universe of literature containing potentially relevant information regarding most methods-related topics is expansive and often unmanageably so. Reviewers are faced with two large categories of literature: the methods literature , where the possible publication types include journal articles, books, and book chapters; and the methods-relevant sections of empirical study reports , where the possible publication types include journal articles, monographs, books, theses, and conference proceedings. In our systematic overview of sampling in qualitative research, exhaustively searching (including retrieval and first-pass screening) all publication types across both categories of literature for information on a single methods-related topic was too burdensome to be feasible. The following proposed principle follows from the need to delimit a manageable set of literature for the review.

Principle #1:

Considering the broad universe of potentially relevant literature, we propose that an important objective early in the identification and selection stage is to delimit a manageable set of methods-relevant publications in accordance with the objectives of the methods overview.

Strategy #1:

To limit the set of methods-relevant publications that must be managed in the selection process, reviewers have the option to initially review only the methods literature, and exclude the methods-relevant sections of empirical study reports, provided this aligns with the review’s particular objectives.

We propose that reviewers are justified in choosing to select only the methods literature when the objective is to map out the range of recognized concepts relevant to a methods topic, to summarize the most authoritative or influential definitions or meanings for methods-related concepts, or to demonstrate a problematic lack of clarity regarding a widely established methods-related concept and potentially make recommendations for a preferred approach to the methods topic in question. For example, in the case of the methods overview on sampling [ 18 ], the primary aim was to define areas lacking in clarity for multiple widely established sampling-related topics. In the review on intention-to-treat in the context of missing outcome data [ 17 ], the authors identified a lack of clarity based on multiple inconsistent definitions in the literature and went on to recommend separating the issue of how to handle missing outcome data from the issue of whether an intention-to-treat analysis can be claimed.

In contrast to strategy #1, it may be appropriate to select the methods-relevant sections of empirical study reports when the objective is to illustrate how a methods concept is operationalized in research practice or reported by authors. For example, one could review all the publications in 2 years’ worth of issues of five high-impact field-related journals to answer questions about how researchers describe implementing a particular method or approach, or to quantify how consistently they define or report using it. Such reviews are often used to highlight gaps in the reporting practices regarding specific methods, which may be used to justify items to address in reporting guidelines (for example, [ 14 – 16 ]).

It is worth recognizing that other authors have advocated broader positions regarding the scope of literature to be considered in a review, expanding on our perspective. Suri [ 10 ] (who, like us, emphasizes how different sampling strategies are suitable for different literature synthesis objectives) has, for example, described a two-stage literature sampling procedure (pp. 96–97). First, reviewers use an initial approach to conduct a broad overview of the field—for reviews of methods topics, this would entail an initial review of the research methods literature. This is followed by a second more focused stage in which practical examples are purposefully selected—for methods reviews, this would involve sampling the empirical literature to illustrate key themes and variations. While this approach is seductive in its capacity to generate more in-depth and interpretive analytic findings, some reviewers may consider it too resource-intensive to include the second step no matter how selective the purposeful sampling. In the overview on sampling where we stopped after the first stage [ 18 ], we discussed our selective focus on the methods literature as a limitation that left opportunities for further analysis of the literature. We explicitly recommended, for example, that theoretical sampling was a topic for which a future review of the methods sections of empirical reports was justified to answer specific questions identified in the primary review.

Ultimately, reviewers must make pragmatic decisions that balance resource considerations, combined with informed predictions about the depth and complexity of literature available on their topic, with the stated objectives of their review. The remaining principles and strategies apply primarily to overviews that include the methods literature, although some aspects may be relevant to reviews that include empirical study reports.

Searching beyond standard bibliographic databases

An important reality affecting identification and selection in overviews of the methods literature is the increased likelihood for relevant publications to be located in sources other than journal articles (which is usually not the case for overviews of empirical research, where journal articles generally represent the primary publication type). In the overview on sampling [ 18 ], out of 41 full-text publications retrieved and reviewed, only 4 were journal articles, while 37 were books or book chapters. Since many books and book chapters did not exist electronically, their full text had to be physically retrieved in hardcopy, while 11 publications were retrievable only through interlibrary loan or purchase request. The tasks associated with such retrieval are substantially more time-consuming than electronic retrieval. Since a substantial proportion of methods-related guidance may be located in publication types that are less comprehensively indexed in standard bibliographic databases, identification and retrieval thus become complicated processes.

Principle #2:

Considering that important sources of methods guidance can be located in non-journal publication types (e.g., books, book chapters) that tend to be poorly indexed in standard bibliographic databases, it is important to consider alternative search methods for identifying relevant publications to be further screened for inclusion.

Strategy #2:

To identify books, book chapters, and other non-journal publication types not thoroughly indexed in standard bibliographic databases, reviewers may choose to consult one or more of the following less standard sources: Google Scholar, publisher web sites, or expert opinion.

In the case of the overview on sampling in qualitative research [ 18 ], Google Scholar had two advantages over other standard bibliographic databases: it indexes and returns records of books and book chapters likely to contain guidance on qualitative research methods topics; and it has been validated as providing higher citation counts than ISI Web of Science (a producer of numerous bibliographic databases accessible through institutional subscription) for several non-biomedical disciplines including the social sciences where qualitative research methods are prominently used [ 19 – 21 ]. While we identified numerous useful publications by consulting experts, the author publication lists generated through Google Scholar searches were uniquely useful to identify more recent editions of methods books identified by experts.

Searching without relevant metadata

Determining what publications to select for inclusion in the overview on sampling [ 18 ] could only rarely be accomplished by reviewing the publication’s metadata. This was because for the many books and other non-journal type publications we identified as possibly relevant, the potential content of interest would be located in only a subsection of the publication. In this common scenario for reviews of the methods literature (as opposed to methods overviews that include empirical study reports), reviewers will often be unable to employ standard title, abstract, and keyword database searching or screening as a means for selecting publications.

Principle #3:

Considering that the presence of information about the topic of interest may not be indicated in the metadata for books and similar publication types, it is important to consider other means of identifying potentially useful publications for further screening.

Strategy #3:

One approach to identifying potentially useful books and similar publication types is to consider what classes of such publications (e.g., all methods manuals for a certain research approach) are likely to contain relevant content, then identify, retrieve, and review the full text of corresponding publications to determine whether they contain information on the topic of interest.

In the example of the overview on sampling in qualitative research [ 18 ], the topic of interest (sampling) was one of numerous topics covered in the general qualitative research methods manuals. Consequently, examples from this class of publications first had to be identified for retrieval according to non-keyword-dependent criteria. Thus, all methods manuals within the three research traditions reviewed (grounded theory, phenomenology, and case study) that might contain discussion of sampling were sought through Google Scholar and expert opinion, their full text obtained, and hand-searched for relevant content to determine eligibility. We used tables of contents and index sections of books to aid this hand searching.

Purposefully selecting literature on conceptual grounds

A final consideration in methods overviews relates to the type of analysis used to generate the review findings. Unlike quantitative systematic reviews where reviewers aim for accurate or unbiased quantitative estimates—something that requires identifying and selecting the literature exhaustively to obtain all relevant data available (i.e., a complete sample)—in methods overviews, reviewers must describe and interpret the relevant literature in qualitative terms to achieve review objectives. In other words, the aim in methods overviews is to seek coverage of the qualitative concepts relevant to the methods topic at hand. For example, in the overview of sampling in qualitative research [ 18 ], achieving review objectives entailed providing conceptual coverage of eight sampling-related topics that emerged as key domains. The following principle recognizes that literature sampling should therefore support generating qualitative conceptual data as the input to analysis.

Principle #4:

Since the analytic findings of a systematic methods overview are generated through qualitative description and interpretation of the literature on a specified topic, selection of the literature should be guided by a purposeful strategy designed to achieve adequate conceptual coverage (i.e., representing an appropriate degree of variation in relevant ideas) of the topic according to objectives of the review.

Strategy #4:

One strategy for choosing the purposeful approach to use in selecting the literature according to the review objectives is to consider whether those objectives imply exploring concepts either at a broad overview level, in which case combining maximum variation selection with a strategy that limits yield (e.g., critical case, politically important, or sampling for influence—described below) may be appropriate; or in depth, in which case purposeful approaches aimed at revealing innovative cases will likely be necessary.

In the methods overview on sampling, the implied scope was broad since we set out to review publications on sampling across three divergent qualitative research traditions—grounded theory, phenomenology, and case study—to facilitate making informative conceptual comparisons. Such an approach would be analogous to maximum variation sampling.

At the same time, the purpose of that review was to critically interrogate the clarity, consistency, and comprehensiveness of literature from these traditions that was “most likely to have widely influenced students’ and researchers’ ideas about sampling” (p. 1774) [ 18 ]. In other words, we explicitly set out to review and critique the most established and influential (and therefore dominant) literature, since this represents a common basis of knowledge among students and researchers seeking understanding or practical guidance on sampling in qualitative research. To achieve this objective, we purposefully sampled publications according to the criterion of influence , which we operationalized as how often an author or publication has been referenced in print or informal discourse. This second sampling approach also limited the literature we needed to consider within our broad scope review to a manageable amount.

To operationalize this strategy of sampling for influence , we sought to identify both the most influential authors within a qualitative research tradition (all of whose citations were subsequently screened) and the most influential publications on the topic of interest by non-influential authors. This involved a flexible approach that combined multiple indicators of influence to avoid the dilemma that any single indicator might provide inadequate coverage. These indicators included bibliometric data (h-index for author influence [ 22 ]; number of cites for publication influence), expert opinion, and cross-references in the literature (i.e., snowball sampling). As a final selection criterion, a publication was included only if it made an original contribution in terms of novel guidance regarding sampling or a related concept; thus, purely secondary sources were excluded. Publish or Perish software (Anne-Wil Harzing; available at http://www.harzing.com/resources/publish-or-perish ) was used to generate bibliometric data via the Google Scholar database. Figure  1 illustrates how identification and selection in the methods overview on sampling was a multi-faceted and iterative process. The authors selected as influential, and the publications selected for inclusion or exclusion are listed in Additional file 1 (Matrices 1, 2a, 2b).
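The h-index used above as an indicator of author influence is simple to compute from a list of per-publication citation counts: it is the largest h such that the author has h publications each cited at least h times (Hirsch [ 22 ]). A minimal sketch in Python, with invented citation counts for illustration (in the review itself, Publish or Perish supplied these data via Google Scholar):

```python
def h_index(citation_counts):
    """Return the h-index: the largest h such that h publications
    each have at least h citations (Hirsch, 2005)."""
    counts = sorted(citation_counts, reverse=True)
    h = 0
    for rank, cites in enumerate(counts, start=1):
        if cites >= rank:
            h = rank
        else:
            break
    return h

# Hypothetical citation counts for one author's publications
print(h_index([10, 8, 5, 4, 3]))  # prints 4: four publications each cited at least 4 times
```

Note that no single such indicator was relied on in the review; bibliometric data were combined with expert opinion and snowball sampling, as described above.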

Fig. 1 Literature identification and selection process used in the methods overview on sampling [ 18 ]

In summary, the strategies of seeking maximum variation and sampling for influence were employed in the sampling overview to meet the specific review objectives described. Reviewers will need to consider the full range of purposeful literature sampling approaches at their disposal in deciding what best matches the specific aims of their own reviews. Suri [ 10 ] has recently retooled Patton’s well-known typology of purposeful sampling strategies (originally intended for primary research) for application to literature synthesis, providing a useful resource in this respect.

Data abstraction

The purpose of data abstraction in rigorous literature reviews is to locate and record all data relevant to the topic of interest from the full text of included publications, making them available for subsequent analysis. Conventionally, a data abstraction form—consisting of numerous distinct conceptually defined fields to which corresponding information from the source publication is recorded—is developed and employed. There are several challenges, however, to the processes of developing the abstraction form and abstracting the data itself when conducting methods overviews, which we address here. Some of these problems and their solutions may be familiar to those who have conducted qualitative literature syntheses, which are similarly conceptual.

Iteratively defining conceptual information to abstract

In the overview on sampling [ 18 ], while we surveyed multiple sources beforehand to develop a list of concepts relevant for abstraction (e.g., purposeful sampling strategies, saturation, sample size), there was no way for us to anticipate some concepts prior to encountering them in the review process. Indeed, in many cases, reviewers are unable to determine the complete set of methods-related concepts that will be the focus of the final review a priori without having systematically reviewed the publications to be included. Thus, defining what information to abstract beforehand may not be feasible.

Principle #5:

Considering the potential impracticality of defining a complete set of relevant methods-related concepts from a body of literature one has not yet systematically read, selecting and defining fields for data abstraction must often be undertaken iteratively. Thus, concepts to be abstracted can be expected to grow and change as data abstraction proceeds.

Strategy #5:

Reviewers can develop an initial form or set of concepts for abstraction purposes according to standard methods (e.g., incorporating expert feedback, pilot testing) and remain attentive to the need to iteratively revise it as concepts are added or modified during the review. Reviewers should document revisions and return to re-abstract data from previously abstracted publications as the new data requirements are determined.

In the sampling overview [ 18 ], we developed and maintained the abstraction form in Microsoft Word. We derived the initial set of abstraction fields from our own knowledge of relevant sampling-related concepts, consultation with local experts, and reviewing a pilot sample of publications. Since the publications in this review included a large proportion of books, the abstraction process often began by flagging the broad sections within a publication containing topic-relevant information for detailed review to identify text to abstract. When reviewing flagged text, the reviewer occasionally encountered an unanticipated concept significant enough to warrant being added as a new field to the abstraction form. For example, a field was added to capture how authors described the timing of sampling decisions, whether before (a priori) or after (ongoing) starting data collection, or whether this was unclear. In these cases, we systematically documented the modification to the form and returned to previously abstracted publications to abstract any information that might be relevant to the new field.
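The bookkeeping implied by this strategy—adding a field mid-review and returning to publications abstracted before that field existed—can be sketched as follows. This is purely illustrative (the review itself maintained the form in Microsoft Word, not software), and the class and field names are hypothetical:

```python
class AbstractionForm:
    """Sketch of an iteratively revised data abstraction form.

    When a new concept field is added mid-review, publications that
    were abstracted earlier are flagged for re-abstraction.
    """

    def __init__(self, fields):
        self.fields = list(fields)
        self.records = {}               # publication -> {field: abstracted text}
        self.needs_reabstraction = set()

    def abstract(self, publication, data):
        """Record (or update) abstracted data for one publication."""
        self.records.setdefault(publication, {}).update(data)
        self.needs_reabstraction.discard(publication)

    def add_field(self, field):
        """Add a new concept field and flag earlier publications for revisit."""
        self.fields.append(field)
        self.needs_reabstraction.update(self.records)

# Initial fields drawn from prior knowledge and pilot abstraction
form = AbstractionForm(["purposeful sampling strategies", "saturation", "sample size"])
form.abstract("Glaser & Strauss 1967", {"saturation": "..."})
# An unanticipated concept encountered during review becomes a new field
form.add_field("timing of sampling decisions")   # a priori vs. ongoing vs. unclear
print(sorted(form.needs_reabstraction))          # prints ['Glaser & Strauss 1967']
```

The essential point the sketch captures is the documented return loop: every form revision triggers re-abstraction of previously processed publications against the new field.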

The logic of this strategy is analogous to the logic used in a form of research synthesis called best fit framework synthesis (BFFS) [ 23 – 25 ]. In that method, reviewers initially code evidence using an a priori framework they have selected. When evidence cannot be accommodated by the selected framework, reviewers then develop new themes or concepts from which they construct a new expanded framework. Both the strategy proposed and the BFFS approach to research synthesis are notable for their rigorous and transparent means to adapt a final set of concepts to the content under review.

Accounting for inconsistent terminology

An important complication affecting the abstraction process in methods overviews is that the language used by authors to describe methods-related concepts can easily vary across publications. For example, authors from different qualitative research traditions often use different terms for similar methods-related concepts. Furthermore, as we found in the sampling overview [ 18 ], there may be cases where no identifiable term, phrase, or label for a methods-related concept is used at all, and a description of it is given instead. This can make searching the text for relevant concepts based on keywords unreliable.

Principle #6:

Since accepted terms may not be used consistently to refer to methods concepts, it is necessary to rely on the definitions for concepts, rather than keywords, to identify relevant information in the publication to abstract.

Strategy #6:

An effective means to systematically identify relevant information is to develop, and iteratively adjust, written definitions for key concepts (corresponding to abstraction fields) that are consistent with the literature reviewed and as inclusive of it as possible. Reviewers then seek information that matches these definitions (rather than keywords) when scanning a publication for relevant data to abstract.

In the abstraction process for the sampling overview [ 18 ], we noted several concepts of interest to the review for which abstraction by keyword was particularly problematic due to inconsistent terminology across publications: sampling , purposeful sampling , sampling strategy , and saturation (for examples, see Additional file 1 , Matrices 3a, 3b, 4). We iteratively developed definitions for these concepts by abstracting text from publications that either provided an explicit definition or from which an implicit definition could be derived, recording this text in fields dedicated to the concept’s definition. Using a method of constant comparison, we used text from definition fields to inform and modify a centrally maintained definition of the corresponding concept to optimize its fit and inclusiveness with the literature reviewed. Table  1 shows, as an example, the final definition constructed in this way for one of the central concepts of the review, qualitative sampling .

We applied iteratively developed definitions when making decisions about what specific text to abstract for an existing field, which allowed us to abstract concept-relevant data even if no recognized keyword was used. For example, this was the case for the sampling-related concept, saturation , where the relevant text available for abstraction in one publication [ 26 ]—“to continue to collect data until nothing new was being observed or recorded, no matter how long that takes”—was not accompanied by any term or label whatsoever.

This comparative analytic strategy (and our approach to analysis more broadly as described in strategy #7, below) is analogous to the process of reciprocal translation —a technique first introduced for meta-ethnography by Noblit and Hare [ 27 ] that has since been recognized as a common element in a variety of qualitative metasynthesis approaches [ 28 ]. Reciprocal translation, taken broadly, involves making sense of a study’s findings in terms of the findings of the other studies included in the review. In practice, it has been operationalized in different ways. Melendez-Torres and colleagues developed a typology from their review of the metasynthesis literature, describing four overlapping categories of specific operations undertaken in reciprocal translation: visual representation, key paper integration, data reduction and thematic extraction, and line-by-line coding [ 28 ]. The approaches suggested in both strategies #6 and #7, with their emphasis on constant comparison, appear to fall within the line-by-line coding category.

Generating credible and verifiable analytic interpretations

The analysis in a systematic methods overview must support its more general objective, which we suggested above is often to offer clarity and enhance collective understanding regarding a chosen methods topic. In our experience, this involves describing and interpreting the relevant literature in qualitative terms. Furthermore, any interpretative analysis required may entail reaching different levels of abstraction, depending on the more specific objectives of the review. For example, in the overview on sampling [ 18 ], we aimed to produce a comparative analysis of how multiple sampling-related topics were treated differently within and among different qualitative research traditions. To promote credibility of the review, however, not only should one seek a qualitative analytic approach that facilitates reaching varying levels of abstraction but that approach must also ensure that abstract interpretations are supported and justified by the source data and not solely the product of the analyst’s speculative thinking.

Principle #7:

Considering the qualitative nature of the analysis required in systematic methods overviews, it is important to select an analytic method whose interpretations can be verified as being consistent with the literature selected, regardless of the level of abstraction reached.

Strategy #7:

We suggest employing the constant comparative method of analysis [ 29 ] because it supports developing and verifying analytic links to the source data throughout progressively interpretive or abstract levels. In applying this approach, we advise rigorously documenting how supportive quotes or references to the original texts are carried forward in the successive steps of analysis to allow for easy verification.

The analytic approach used in the methods overview on sampling [ 18 ] comprised four explicit steps, progressing in level of abstraction—data abstraction, matrices, narrative summaries, and final analytic conclusions (Fig.  2 ). While we have positioned data abstraction as the second stage of the generic review process (prior to Analysis), above, we also considered it as an initial step of analysis in the sampling overview for several reasons. First, it involved a process of constant comparisons and iterative decision-making about the fields to add or define during development and modification of the abstraction form, through which we established the range of concepts to be addressed in the review. At the same time, abstraction involved continuous analytic decisions about what textual quotes (ranging in size from short phrases to numerous paragraphs) to record in the fields thus created. This constant comparative process was analogous to open coding in which textual data from publications was compared to conceptual fields (equivalent to codes) or to other instances of data previously abstracted when constructing definitions to optimize their fit with the overall literature as described in strategy #6. Finally, in the data abstraction step, we also recorded our first interpretive thoughts in dedicated fields, providing initial material for the more abstract analytic steps.

Fig. 2 Summary of progressive steps of analysis used in the methods overview on sampling [ 18 ]

In the second step of the analysis, we constructed topic-specific matrices , or tables, by copying relevant quotes from abstraction forms into the appropriate cells of matrices (for the complete set of analytic matrices developed in the sampling review, see Additional file 1 (matrices 3 to 10)). Each matrix ranged from one to five pages; row headings, nested three-deep, identified the methodological tradition, author, and publication, respectively; and column headings identified the concepts, which corresponded to abstraction fields. Matrices thus allowed us to make further comparisons across methodological traditions, and between authors within a tradition. In the third step of analysis, we recorded our comparative observations as narrative summaries , in which we used illustrative quotes more sparingly. In the final step, we developed analytic conclusions based on the narrative summaries about the sampling-related concepts within each methodological tradition for which clarity, consistency, or comprehensiveness of the available guidance appeared to be lacking. Higher levels of analysis thus built logically from the lower levels, enabling us to easily verify analytic conclusions by tracing the support for claims by comparing the original text of publications reviewed.
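The step from abstraction forms to topic-specific matrices is essentially a regrouping of abstracted quotes: rows keyed by tradition, author, and publication, columns by concept. A hypothetical sketch of that regrouping (the record contents are placeholders, not actual abstracted text):

```python
from collections import defaultdict

# Abstracted records: (tradition, author, publication, concept, quote)
abstracted = [
    ("grounded theory", "Glaser & Strauss", "1967", "saturation", "..."),
    ("grounded theory", "Glaser & Strauss", "1967", "sample size", "..."),
    ("phenomenology", "Cohen et al.", "2000", "sample size", "..."),
]

def build_matrix(records, concept):
    """Collect all quotes on one concept, keyed by (tradition, author, publication).

    Each key corresponds to one nested row heading of a topic-specific matrix;
    the concept corresponds to a column heading."""
    matrix = defaultdict(list)
    for tradition, author, pub, field, quote in records:
        if field == concept:
            matrix[(tradition, author, pub)].append(quote)
    return dict(matrix)

saturation_matrix = build_matrix(abstracted, "saturation")
for row, quotes in saturation_matrix.items():
    print(row, "->", quotes)
```

Laid out this way, the matrices support the comparisons described above: reading down a column compares traditions and authors on one concept; reading across a row compares concepts within one publication.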

Integrative versus interpretive methods overviews

The analytic product of systematic methods overviews is comparable to qualitative evidence syntheses, since both involve describing and interpreting the relevant literature in qualitative terms. Most qualitative synthesis approaches strive to produce new conceptual understandings that vary in level of interpretation. Dixon-Woods and colleagues [ 30 ] elaborate on a useful distinction, originating from Noblit and Hare [ 27 ], between integrative and interpretive reviews. Integrative reviews focus on summarizing available primary data and involve using largely secure and well-defined concepts to do so; definitions are used from an early stage to specify categories for abstraction (or coding) of data, which in turn supports their aggregation; they do not seek as their primary focus to develop or specify new concepts, although they may achieve some theoretical or interpretive functions. For interpretive reviews, meanwhile, the main focus is to develop new concepts and theories that integrate them, with the implication that the concepts developed become fully defined towards the end of the analysis. These two forms are not completely distinct, and “every integrative synthesis will include elements of interpretation, and every interpretive synthesis will include elements of aggregation of data” [ 30 ].

The example methods overview on sampling [ 18 ] could be classified as predominantly integrative because its primary goal was to aggregate influential authors’ ideas on sampling-related concepts; there were also, however, elements of interpretive synthesis since it aimed to develop new ideas about where clarity in guidance on certain sampling-related topics is lacking, and definitions for some concepts were flexible and not fixed until late in the review. We suggest that most systematic methods overviews will be classifiable as predominantly integrative (aggregative). Nevertheless, more highly interpretive methods overviews are also quite possible—for example, when the review objective is to provide a highly critical analysis for the purpose of generating new methodological guidance. In such cases, reviewers may need to sample more deeply (see strategy #4), specifically by selecting empirical research reports (i.e., to go beyond dominant or influential ideas in the methods literature) that are likely to feature innovations or instructive lessons in employing a given method.

In this paper, we have outlined tentative guidance in the form of seven principles and strategies on how to conduct systematic methods overviews, a review type in which methods-relevant literature is systematically analyzed with the aim of offering clarity and enhancing collective understanding regarding a specific methods topic. Our proposals include strategies for delimiting the set of publications to consider, searching beyond standard bibliographic databases, searching without the availability of relevant metadata, selecting publications on purposeful conceptual grounds, defining concepts and other information to abstract iteratively, accounting for inconsistent terminology, and generating credible and verifiable analytic interpretations. We hope the suggestions proposed will be useful to others undertaking reviews on methods topics in future.

As far as we are aware, this is the first published source of concrete guidance for conducting this type of review. It is important to note that our primary objective was to initiate methodological discussion by stimulating reflection on what rigorous methods for this type of review should look like, leaving the development of more complete guidance to future work. While derived from the experience of reviewing a single qualitative methods topic, we believe the principles and strategies provided are generalizable to overviews of both qualitative and quantitative methods topics. However, it is expected that additional challenges and insights for conducting such reviews have yet to be defined. Thus, we propose that next steps for developing more definitive guidance should involve an attempt to collect and integrate other reviewers’ perspectives and experiences in conducting systematic methods overviews on a broad range of qualitative and quantitative methods topics. Formalized guidance and standards would improve the quality of future methods overviews, something we believe has important implications for advancing qualitative and quantitative methodology. When undertaken to a high standard, rigorous critical evaluations of the available methods guidance have significant potential to make implicit controversies explicit and improve the clarity and precision of our understandings of problematic qualitative or quantitative methods issues.

A review process central to most types of rigorous reviews of empirical studies, which we did not explicitly address in a separate review step above, is quality appraisal . The reason we have not treated this as a separate step stems from the different objectives of the primary publications included in overviews of the methods literature (i.e., providing methodological guidance) compared to the primary publications included in the other established review types (i.e., reporting findings from single empirical studies). This is not to say that appraising quality of the methods literature is not an important concern for systematic methods overviews. Rather, appraisal is much more integral to (and difficult to separate from) the analysis step, in which we advocate appraising clarity, consistency, and comprehensiveness—the quality appraisal criteria that we suggest are appropriate for the methods literature. As a second important difference regarding appraisal, we currently advocate appraising the aforementioned aspects at the level of the literature in aggregate rather than at the level of individual publications. One reason for this is that methods guidance from individual publications generally builds on previous literature, and thus we feel that ahistorical judgments about comprehensiveness of single publications lack relevance and utility. Additionally, while different methods authors may express themselves less clearly than others, their guidance can nonetheless be highly influential and useful, and should therefore not be downgraded or ignored based on considerations of clarity—which raises questions about the alternative uses that quality appraisals of individual publications might have. 
Finally, legitimate variability in the perspectives that methods authors wish to emphasize, and the levels of generality at which they write about methods, makes critiquing individual publications based on the criterion of clarity a complex and potentially problematic endeavor that is beyond the scope of this paper to address. By appraising the current state of the literature at a holistic level, reviewers stand to identify important gaps in understanding that represent valuable opportunities for further methodological development.

To summarize, the principles and strategies provided here may be useful to those seeking to undertake their own systematic methods overview. Additional work is needed, however, to establish guidance that is comprehensive by comparing the experiences from conducting a variety of methods overviews on a range of methods topics. Efforts that further advance standards for systematic methods overviews have the potential to promote high-quality critical evaluations that produce conceptually clear and unified understandings of problematic methods topics, thereby accelerating the advance of research methodology.

Hutton JL, Ashcroft R. What does “systematic” mean for reviews of methods? In: Black N, Brazier J, Fitzpatrick R, Reeves B, editors. Health services research methods: a guide to best practice. London: BMJ Publishing Group; 1998. p. 249–54.

Higgins JPT, Green S, editors. Cochrane handbook for systematic reviews of interventions. Version 5.1.0. The Cochrane Collaboration; 2011.

Centre for Reviews and Dissemination. Systematic reviews: CRD’s guidance for undertaking reviews in health care. York: Centre for Reviews and Dissemination; 2009.

Liberati A, Altman DG, Tetzlaff J, Mulrow C, Gotzsche PC, Ioannidis JPA, Clarke M, Devereaux PJ, Kleijnen J, Moher D. The PRISMA statement for reporting systematic reviews and meta-analyses of studies that evaluate healthcare interventions: explanation and elaboration. BMJ. 2009;339:b2700.

Barnett-Page E, Thomas J. Methods for the synthesis of qualitative research: a critical review. BMC Med Res Methodol. 2009;9(1):59.

Kastner M, Tricco AC, Soobiah C, Lillie E, Perrier L, Horsley T, Welch V, Cogo E, Antony J, Straus SE. What is the most appropriate knowledge synthesis method to conduct a review? Protocol for a scoping review. BMC Med Res Methodol. 2012;12(1):1–1.

Booth A, Noyes J, Flemming K, Gerhardus A. Guidance on choosing qualitative evidence synthesis methods for use in health technology assessments of complex interventions. In: Integrate-HTA. 2016.

Booth A, Sutton A, Papaioannou D. Systematic approaches to successful literature review. 2nd ed. London: Sage; 2016.

Hannes K, Lockwood C. Synthesizing qualitative research: choosing the right approach. Chichester: Wiley-Blackwell; 2012.

Suri H. Towards methodologically inclusive research syntheses: expanding possibilities. New York: Routledge; 2014.

Campbell M, Egan M, Lorenc T, Bond L, Popham F, Fenton C, Benzeval M. Considering methodological options for reviews of theory: illustrated by a review of theories linking income and health. Syst Rev. 2014;3(1):1–11.

Cohen DJ, Crabtree BF. Evaluative criteria for qualitative research in health care: controversies and recommendations. Ann Fam Med. 2008;6(4):331–9.

Tong A, Sainsbury P, Craig J. Consolidated criteria for reporting qualitative research (COREQ): a 32-item checklist for interviews and focus groups. Int J Qual Health Care. 2007;19(6):349–57.

Moher D, Schulz KF, Simera I, Altman DG. Guidance for developers of health research reporting guidelines. PLoS Med. 2010;7(2):e1000217.

Moher D, Tetzlaff J, Tricco AC, Sampson M, Altman DG. Epidemiology and reporting characteristics of systematic reviews. PLoS Med. 2007;4(3):e78.

Chan AW, Altman DG. Epidemiology and reporting of randomised trials published in PubMed journals. Lancet. 2005;365(9465):1159–62.

Alshurafa M, Briel M, Akl EA, Haines T, Moayyedi P, Gentles SJ, Rios L, Tran C, Bhatnagar N, Lamontagne F, et al. Inconsistent definitions for intention-to-treat in relation to missing outcome data: systematic review of the methods literature. PLoS One. 2012;7(11):e49163.

Article   CAS   PubMed   PubMed Central   Google Scholar  

Gentles SJ, Charles C, Ploeg J, McKibbon KA. Sampling in qualitative research: insights from an overview of the methods literature. Qual Rep. 2015;20(11):1772–89.

Harzing A-W, Alakangas S. Google Scholar, Scopus and the Web of Science: a longitudinal and cross-disciplinary comparison. Scientometrics. 2016;106(2):787–804.

Harzing A-WK, van der Wal R. Google Scholar as a new source for citation analysis. Ethics Sci Environ Polit. 2008;8(1):61–73.

Kousha K, Thelwall M. Google Scholar citations and Google Web/URL citations: a multi‐discipline exploratory analysis. J Assoc Inf Sci Technol. 2007;58(7):1055–65.

Hirsch JE. An index to quantify an individual’s scientific research output. Proc Natl Acad Sci U S A. 2005;102(46):16569–72.

Booth A, Carroll C. How to build up the actionable knowledge base: the role of ‘best fit’ framework synthesis for studies of improvement in healthcare. BMJ Quality Safety. 2015;24(11):700–8.

Carroll C, Booth A, Leaviss J, Rick J. “Best fit” framework synthesis: refining the method. BMC Med Res Methodol. 2013;13(1):37.

Carroll C, Booth A, Cooper K. A worked example of “best fit” framework synthesis: a systematic review of views concerning the taking of some potential chemopreventive agents. BMC Med Res Methodol. 2011;11(1):29.

Cohen MZ, Kahn DL, Steeves DL. Hermeneutic phenomenological research: a practical guide for nurse researchers. Thousand Oaks: Sage; 2000.

Noblit GW, Hare RD. Meta-ethnography: synthesizing qualitative studies. Newbury Park: Sage; 1988.

Book   Google Scholar  

Melendez-Torres GJ, Grant S, Bonell C. A systematic review and critical appraisal of qualitative metasynthetic practice in public health to develop a taxonomy of operations of reciprocal translation. Res Synthesis Methods. 2015;6(4):357–71.

Article   CAS   Google Scholar  

Glaser BG, Strauss A. The discovery of grounded theory. Chicago: Aldine; 1967.

Dixon-Woods M, Agarwal S, Young B, Jones D, Sutton A. Integrative approaches to qualitative and quantitative evidence. In: UK National Health Service. 2004. p. 1–44.

Download references

Acknowledgements

Not applicable.

Funding

There was no funding for this work.

Availability of data and materials

The systematic methods overview used as a worked example in this article (Gentles SJ, Charles C, Ploeg J, McKibbon KA: Sampling in qualitative research: insights from an overview of the methods literature. The Qual Rep 2015, 20(11):1772-1789) is available from http://nsuworks.nova.edu/tqr/vol20/iss11/5 .

Authors’ contributions

SJG wrote the first draft of this article, with CC contributing to drafting. All authors contributed to revising the manuscript. All authors except CC (deceased) approved the final draft. SJG, CC, KAB, and JP were involved in developing methods for the systematic methods overview on sampling.

Competing interests

The authors declare that they have no competing interests.

Authors and affiliations

Department of Clinical Epidemiology and Biostatistics, McMaster University, Hamilton, Ontario, Canada

Stephen J. Gentles, Cathy Charles & K. Ann McKibbon

Faculty of Social Work, University of Calgary, Alberta, Canada

David B. Nicholas

School of Nursing, McMaster University, Hamilton, Ontario, Canada

Jenny Ploeg

CanChild Centre for Childhood Disability Research, McMaster University, 1400 Main Street West, IAHS 408, Hamilton, ON, L8S 1C7, Canada

Stephen J. Gentles


Corresponding author

Correspondence to Stephen J. Gentles.

Additional information

Cathy Charles is deceased.

Additional file

Additional file 1: Analysis_matrices. (DOC 330 kb)

Rights and permissions

Open Access This article is distributed under the terms of the Creative Commons Attribution 4.0 International License ( http://creativecommons.org/licenses/by/4.0/ ), which permits unrestricted use, distribution, and reproduction in any medium, provided you give appropriate credit to the original author(s) and the source, provide a link to the Creative Commons license, and indicate if changes were made. The Creative Commons Public Domain Dedication waiver ( http://creativecommons.org/publicdomain/zero/1.0/ ) applies to the data made available in this article, unless otherwise stated.


About this article

Cite this article

Gentles, S.J., Charles, C., Nicholas, D.B. et al. Reviewing the research methods literature: principles and strategies illustrated by a systematic overview of sampling in qualitative research. Syst Rev 5, 172 (2016). https://doi.org/10.1186/s13643-016-0343-0

Received: 06 June 2016

Accepted: 14 September 2016

Published: 11 October 2016

DOI: https://doi.org/10.1186/s13643-016-0343-0


Keywords

  • Systematic review
  • Literature selection
  • Research methods
  • Research methodology
  • Overview of methods
  • Systematic methods overview
  • Review methods

Systematic Reviews

ISSN: 2046-4053


How to Make a Literature Review in Research (RRL Example)


What is an RRL in a research paper?

A review of related literature (RRL) is an objective, concise, critical summary of the published research relevant to the topic being investigated in an article. In an RRL, you discuss knowledge and findings from existing literature relevant to your study topic. If there are conflicts or gaps in the existing literature, you can also discuss these in your review, as well as how you will address these missing elements or resolve these issues in your study.

To complete an RRL, you first need to collect relevant literature; this can include online and offline sources. Save all of your applicable resources as you will need to include them in your paper. When looking through these sources, take notes and identify concepts of each source to describe in the review of the literature.

A good RRL does NOT:

  • Simply reference and list all of the material you have cited in your paper.
  • Present material that is not directly relevant to your study; this will distract and frustrate readers and make them lose sight of your study’s purpose.
  • Open with “A number of scholars have studied the relationship between X and Y” and simply list who has studied the topic and what each scholar concluded; this does not strengthen your paper.

A good RRL DOES:

  • Present a brief typology that orders articles and books into groups to help readers focus on unresolved debates, inconsistencies, tensions, and new questions about a research topic.
  • Summarize the most relevant and important aspects of the scientific literature related to your area of research
  • Synthesize what has been done in this area of research and by whom, highlight what previous research indicates about a topic, and identify potential gaps and areas of disagreement in the field
  • Give the reader an understanding of the background of the field and show which studies are important—and highlight errors in previous studies

How long is a review of the literature for a research paper?

The length of a review of the literature depends on its purpose and target readership and can vary significantly in scope and depth. In a dissertation, thesis, or stand-alone review of literature, it is usually a full chapter of the text (at least 20 pages), whereas in a standard research article or school assignment, the literature review may be only a few paragraphs in the Introduction section.

Building Your Literature Review Bookshelf

One way to conceive of a literature review is to think about writing it as you would build a bookshelf. You don’t need to cut each piece by yourself from scratch. Rather, you can take the pieces that other researchers have cut out and put them together to build a framework on which to hang your own “books”—that is, your own study methods, results, and conclusions.


What Makes a Good Literature Review?

The contents of a literature review (RRL) are determined by many factors, including its precise purpose in the article, the degree of consensus with a given theory or tension between competing theories, the length of the article, the number of previous studies existing in the given field, etc. The following are some of the most important elements that a literature review provides.

Historical background for your research

Analyze what has been written about your field of research to highlight what is new and significant in your study—or how the analysis itself contributes to the understanding of this field, even in a small way. Providing a historical background also demonstrates to other researchers and journal editors your competency in discussing theoretical concepts. You should also make sure to understand how to paraphrase scientific literature to avoid plagiarism in your work.

The current context of your research

Discuss central (or peripheral) questions, issues, and debates in the field. Because a field is constantly being updated by new work, you can show where your research fits into this context and explain developments and trends in research.

A discussion of relevant theories and concepts

Theories and concepts should provide the foundation for your research. For example, if you are researching the relationship between ecological environments and human populations, provide models and theories that focus on specific aspects of this connection to contextualize your study. If your study asks a question concerning sustainability, mention a theory or model that underpins this concept. If it concerns invasive species, choose material that is focused in this direction.

Definitions of relevant terminology

In the natural sciences, the meaning of terms is relatively straightforward and consistent. But if you present a term that is obscure or context-specific, you should define the meaning of the term in the Introduction section (if you are introducing a study) or in the summary of the literature being reviewed.

Description of related relevant research

Include a description of related research that shows how your work expands or challenges earlier studies or fills in gaps in previous work. You can use your literature review as evidence of what works, what doesn’t, and what is missing in the field.

Supporting evidence for a practical problem or issue

Demonstrate the importance of the practical problem or issue your research addresses. Referencing related research establishes your area of research as reputable and shows you are building upon previous work that other researchers have deemed significant.

Types of Literature Reviews

Literature reviews can differ in structure, length, amount, and breadth of content included. They can range from selective (a very narrow area of research or only a single work) to comprehensive (a larger amount or range of works). They can also be part of a larger work or stand on their own.


  • A course assignment is an example of a selective, stand-alone work. It focuses on a small segment of the literature on a topic and makes up an entire work on its own.
  • The literature review in a dissertation or thesis is both comprehensive and helps make up a larger work.
  • A majority of journal articles start with a selective literature review to provide context for the research reported in the study; such a literature review is usually included in the Introduction section (but it can also follow the presentation of the results in the Discussion section).
  • Some literature reviews are both comprehensive and stand as a separate work—in this case, the entire article analyzes the literature on a given topic.

Literature Reviews Found in Academic Journals

The two types of literature reviews commonly found in journals are those introducing research articles (studies and surveys) and stand-alone literature analyses. They can differ in their scope, length, and specific purpose.

Literature reviews introducing research articles

The literature review found at the beginning of a journal article is used to introduce research related to the specific study and is found in the Introduction section, usually near the end. It is shorter than a stand-alone review because it must be limited to very specific studies and theories that are directly relevant to the current study. Its purpose is to set research precedence and provide support for the study’s theory, methods, results, and/or conclusions. Not all research articles contain an explicit review of the literature, but most do, whether it is a discrete section or indistinguishable from the rest of the Introduction.

How to structure a literature review for an article

When writing a literature review as part of an introduction to a study, simply follow the structure of the Introduction and move from the general to the specific—presenting the broadest background information about a topic first and then moving to specific studies that support your rationale, finally leading to your hypothesis statement. Such a literature review is often indistinguishable from the Introduction itself—the literature is INTRODUCING the background and defining the gaps your study aims to fill.

The stand-alone literature review

The literature review published as a stand-alone article presents and analyzes as many of the important publications in an area of study as possible to provide background information and context for a current area of research or a study. Stand-alone reviews are an excellent resource for researchers when they are first searching for the most relevant information on an area of study.

Such literature reviews are generally a bit broader in scope and can extend further back in time. This means that sometimes a scientific literature review can be highly theoretical, in addition to focusing on specific methods and outcomes of previous studies. In addition, all sections of such a “review article” refer to existing literature rather than describing the results of the authors’ own study.

In addition, this type of literature review is usually much longer than the literature review introducing a study. At the end of the review follows a conclusion that once again explicitly ties all of the cited works together to show how this analysis is itself a contribution to the literature. While not absolutely necessary, such articles often include the terms “Literature Review” or “Review of the Literature” in the title. Whether or not that is necessary or appropriate can also depend on the specific author instructions of the target journal. Have a look at this article for more input on how to compile a stand-alone review article that is insightful and helpful for other researchers in your field.


How to Write a Literature Review in 6 Steps

So how do authors turn a network of articles into a coherent review of relevant literature?

Writing a literature review is not usually a linear process—authors often go back and check the literature while reformulating their ideas or making adjustments to their study. Sometimes new findings are published before a study is completed and need to be incorporated into the current work. This also means you will not be writing the literature review at any one time, but constantly working on it before, during, and after your study is complete.

Here are some steps that will help you begin and follow through on your literature review.

Step 1: Choose a topic to write about—focus on and explore this topic.

Choose a topic that you are familiar with and highly interested in analyzing; a topic your intended readers and researchers will find interesting and useful; and a topic that is current, well-established in the field, and about which there has been sufficient research conducted for a review. This will help you find the “sweet spot” for what to focus on.

Step 2: Research and collect all the scholarly information on the topic that might be pertinent to your study.

This includes scholarly articles, books, conventions, conferences, dissertations, and theses—these and any other academic work related to your area of study is called “the literature.”

Step 3: Analyze the network of information that extends or responds to the major works in your area; select the material that is most useful.

Use thought maps and charts to identify intersections in the research and to outline important categories; select the material that will be most useful to your review.
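The mapping and charting in this step can also be mocked up programmatically. The sketch below is purely illustrative (the source records and concept tags are invented, not real citations): it groups collected sources by the concepts they address, so that intersections show up as concepts shared by multiple sources.

```python
# Group collected sources by the concepts they address, so that
# intersections (concepts shared by several sources) are easy to spot.
# The records below are invented examples, not real citations.
from collections import defaultdict

sources = [
    {"author": "Author A", "year": 2018, "concepts": {"sampling", "saturation"}},
    {"author": "Author B", "year": 2020, "concepts": {"saturation", "coding"}},
    {"author": "Author C", "year": 2021, "concepts": {"sampling"}},
]

by_concept = defaultdict(list)
for src in sources:
    for concept in src["concepts"]:
        by_concept[concept].append(f'{src["author"]} ({src["year"]})')

for concept, cited in sorted(by_concept.items()):
    print(f"{concept}: {', '.join(cited)}")
```

Concepts listing more than one source are candidate intersections for your thought map; a concept tied to a single source may be peripheral to the review.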

Step 4: Describe and summarize each article—provide the essential information of the article that pertains to your study.

Determine 2-3 important concepts (depending on the length of your article) that are discussed in the literature; take notes about all of the important aspects of this study relevant to the topic being reviewed.

For example, in a given study, perhaps some of the main concepts are X, Y, and Z. Note these concepts and then write a brief summary about how the article incorporates them. In reviews that introduce a study, these can be relatively short. In stand-alone reviews, there may be significantly more texts and more concepts.

Step 5: Demonstrate how these concepts in the literature relate to what you discovered in your study or how the literature connects the concepts or topics being discussed.

In a literature review intro for an article, this information might include a summary of the results or methods of previous studies that correspond to and/or confirm those sections in your own study. For a stand-alone literature review, this may mean highlighting the concepts in each article and showing how they strengthen a hypothesis or show a pattern.

Discuss unaddressed issues in previous studies. These studies that are missing something you address are important to include in your literature review. In addition, those works whose theories and conclusions directly support your findings will be valuable to review here.

Step 6: Identify relationships in the literature and develop and connect your own ideas to them.

This is essentially the same as step 5 but focused on the connections between the literature and the current study or guiding concepts or arguments of the paper, not only on the connections between the works themselves.

Your hypothesis, argument, or guiding concept is the “golden thread” that will ultimately tie the works together and provide readers with specific insights they didn’t have before reading your literature review. Make sure you know where to put the research question , hypothesis, or statement of the problem in your research paper so that you guide your readers logically and naturally from your introduction of earlier work and evidence to the conclusions you want them to draw from the bigger picture.

Your literature review will not only cover publications on your topics but will include your own ideas and contributions. By following these steps you will be telling the specific story that sets the background and shows the significance of your research and you can turn a network of related works into a focused review of the literature.

Literature Review (RRL) Examples

Because creating sample literature reviews from scratch would take too long and would not properly capture the nuances and detailed information needed for a good review, we have included links to different types of literature reviews below. You can find links to more literature reviews in these categories by visiting the TUS Library’s website.

Sample literature reviews as part of an article, dissertation, or thesis:

  • Critical Thinking and Transferability: A Review of the Literature (Gwendolyn Reece)
  • Building Customer Loyalty: A Customer Experience Based Approach in a Tourism Context (Martina Donnelly)

Sample stand-alone literature reviews

  • Literature Review on Attitudes towards Disability (National Disability Authority)
  • The Effects of Communication Styles on Marital Satisfaction (Hannah Yager)

Additional Literature Review Format Guidelines

In addition to the content guidelines above, authors also need to check which style guide to use (APA, Chicago, MLA, etc.) and what specific rules the target journal might have for how to structure such articles or how many studies to include; such information can usually be found on the journal’s “Guide for Authors” page. Additionally, you can use one of the four Wordvice citation generators (APA, MLA, Chicago, or Vancouver), choosing the citation style needed for your paper.
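The differences between citation styles are mechanical enough to illustrate in code. The formatter below is a hypothetical, heavily simplified sketch of APA-like and MLA-like journal references (real style guides have many more rules; the function name and the sample record are invented for illustration):

```python
# Minimal illustration of how one source record yields different
# reference strings under (simplified) APA- and MLA-like rules.
# Real style guides have many more rules; this is only a sketch.

def format_reference(ref: dict, style: str) -> str:
    if style == "APA":
        return (f'{ref["author"]} ({ref["year"]}). {ref["title"]}. '
                f'{ref["journal"]}, {ref["volume"]}, {ref["pages"]}.')
    if style == "MLA":
        # Note: the author string already ends with a period here.
        return (f'{ref["author"]} "{ref["title"]}." {ref["journal"]}, '
                f'vol. {ref["volume"]}, {ref["year"]}, pp. {ref["pages"]}.')
    raise ValueError(f"Unsupported style: {style}")

ref = {
    "author": "Smith, J.",
    "year": 2020,
    "title": "A study of X",
    "journal": "Journal of Examples",
    "volume": 12,
    "pages": "34-56",
}

print(format_reference(ref, "APA"))
# → Smith, J. (2020). A study of X. Journal of Examples, 12, 34-56.
print(format_reference(ref, "MLA"))
# → Smith, J. "A study of X." Journal of Examples, vol. 12, 2020, pp. 34-56.
```

This is why citation generators work from structured fields (author, year, title, journal) rather than finished strings: the same record can be re-rendered in whatever style the target journal requires.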

Wordvice Writing and Academic Editing Resources

Finally, after you have finished drafting your literature review, be sure to receive professional proofreading services, including paper editing for your academic work. A competent proofreader who understands academic writing conventions and the specific style guides used by academic journals will ensure that your paper is ready for publication in your target journal.

See our academic resources for further advice on references in your paper, how to write an abstract, how to write a research paper title, how to impress the editor of your target journal with a perfect cover letter, and dozens of other research writing and publication topics.

Purdue Online Writing Lab (Purdue OWL), College of Liberal Arts

Writing a Literature Review


Copyright ©1995-2018 by The Writing Lab & The OWL at Purdue and Purdue University. All rights reserved. This material may not be published, reproduced, broadcast, rewritten, or redistributed without permission. Use of this site constitutes acceptance of our terms and conditions of fair use.

A literature review is a document or section of a document that collects key sources on a topic and discusses those sources in conversation with each other (also called synthesis ). The lit review is an important genre in many disciplines, not just literature (i.e., the study of works of literature such as novels and plays). When we say “literature review” or refer to “the literature,” we are talking about the research ( scholarship ) in a given field. You will often see the terms “the research,” “the scholarship,” and “the literature” used mostly interchangeably.

Where, when, and why would I write a lit review?

There are a number of different situations where you might write a literature review, each with slightly different expectations; different disciplines, too, have field-specific expectations for what a literature review is and does. For instance, in the humanities, authors might include more overt argumentation and interpretation of source material in their literature reviews, whereas in the sciences, authors are more likely to report study designs and results in their literature reviews; these differences reflect these disciplines’ purposes and conventions in scholarship. You should always look at examples from your own discipline and talk to professors or mentors in your field to be sure you understand your discipline’s conventions, for literature reviews as well as for any other genre.

A literature review can be a part of a research paper or scholarly article, usually falling after the introduction and before the research methods sections. In these cases, the lit review just needs to cover scholarship that is important to the issue you are writing about; sometimes it will also cover key sources that informed your research methodology.

Lit reviews can also be standalone pieces, either as assignments in a class or as publications. In a class, a lit review may be assigned to help students familiarize themselves with a topic and with scholarship in their field, get an idea of the other researchers working on the topic they’re interested in, find gaps in existing research in order to propose new projects, and/or develop a theoretical framework and methodology for later research. As a publication, a lit review usually is meant to help make other scholars’ lives easier by collecting and summarizing, synthesizing, and analyzing existing research on a topic. This can be especially helpful for students or scholars getting into a new research area, or for directing an entire community of scholars toward questions that have not yet been answered.

What are the parts of a lit review?

Most lit reviews use a basic introduction-body-conclusion structure; if your lit review is part of a larger paper, the introduction and conclusion pieces may be just a few sentences while you focus most of your attention on the body. If your lit review is a standalone piece, the introduction and conclusion take up more space and give you a place to discuss your goals, research methods, and conclusions separately from where you discuss the literature itself.

Introduction:

  • An introductory paragraph that explains what your working topic and thesis are
  • A forecast of key topics or texts that will appear in the review
  • Potentially, a description of how you found sources and how you analyzed them for inclusion and discussion in the review (more often found in published, standalone literature reviews than in lit review sections in an article or research paper)
Body:

  • Summarize and synthesize: Give an overview of the main points of each source and combine them into a coherent whole
  • Analyze and interpret: Don’t just paraphrase other researchers; add your own interpretations where possible, discussing the significance of findings in relation to the literature as a whole
  • Critically evaluate: Mention the strengths and weaknesses of your sources
  • Write in well-structured paragraphs: Use transition words and topic sentences to draw connections, comparisons, and contrasts.

Conclusion:

  • Summarize the key findings you have taken from the literature and emphasize their significance
  • Connect it back to your primary research question

How should I organize my lit review?

Lit reviews can take many different organizational patterns depending on what you are trying to accomplish with the review. Here are some examples:

  • Chronological : The simplest approach is to trace the development of the topic over time, which helps familiarize the audience with the topic (for instance if you are introducing something that is not commonly known in your field). If you choose this strategy, be careful to avoid simply listing and summarizing sources in order. Try to analyze the patterns, turning points, and key debates that have shaped the direction of the field. Give your interpretation of how and why certain developments occurred (as mentioned previously, this may not be appropriate in your discipline — check with a teacher or mentor if you’re unsure).
  • Thematic : If you have found some recurring central themes that you will continue working with throughout your piece, you can organize your literature review into subsections that address different aspects of the topic. For example, if you are reviewing literature about women and religion, key themes can include the role of women in churches and the religious attitude towards women.
  • Methodological : If you draw your sources from different disciplines or fields that use a variety of research methods, you can compare the results and conclusions that emerge from different approaches. For example:
  • Qualitative versus quantitative research
  • Empirical versus theoretical scholarship
  • Divide the research by sociological, historical, or cultural sources
  • Theoretical : In many humanities articles, the literature review is the foundation for the theoretical framework. You can use it to discuss various theories, models, and definitions of key concepts. You can argue for the relevance of a specific theoretical approach or combine various theoretical concepts to create a framework for your research.

What are some strategies or tips I can use while writing my lit review?

Any lit review is only as good as the research it discusses; make sure your sources are well-chosen and your research is thorough. Don’t be afraid to do more research if you discover a new thread as you’re writing. More info on the research process is available in our "Conducting Research" resources .

As you’re doing your research, create an annotated bibliography (see our page on this type of document). Much of the information used in an annotated bibliography can also be used in a literature review, so you’ll not only be partially drafting your lit review as you research, but also developing your sense of the larger conversation going on among scholars, professionals, and any other stakeholders in your topic.

Usually you will need to synthesize research rather than just summarizing it. This means drawing connections between sources to create a picture of the scholarly conversation on a topic over time. Many student writers struggle to synthesize because they feel they don’t have anything to add to the scholars they are citing; here are some strategies to help you:

  • It often helps to remember that the point of these kinds of syntheses is to show your readers how you understand your research, to help them read the rest of your paper.
  • Writing teachers often say synthesis is like hosting a dinner party: imagine all your sources are together in a room, discussing your topic. What are they saying to each other?
  • Look at the in-text citations in each paragraph. Are you citing just one source for each paragraph? This usually indicates summary only. When you have multiple sources cited in a paragraph, you are more likely to be synthesizing them (not always, but often).
  • Read more about synthesis here.
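The citation-count check in the list above can even be roughed out as a script. This is an illustrative sketch only: the function name is invented, and the regex matches just simple “(Author, 2020)”-style author–date citations, so it would miss many real citation formats.

```python
import re

# Rough heuristic: flag paragraphs that cite at most one source, which
# often indicates summary rather than synthesis. Matches only simple
# (Author, 2020)-style citations; an illustrative sketch, not a tool.
CITATION = re.compile(r"\([A-Z][A-Za-z]+(?: et al\.)?,? \d{4}\)")

def flag_summary_paragraphs(text: str) -> list[int]:
    flagged = []
    for i, para in enumerate(text.split("\n\n"), start=1):
        if len(set(CITATION.findall(para))) <= 1:
            flagged.append(i)
    return flagged

draft = (
    "One study examined X (Smith, 2019).\n\n"
    "Several studies link X and Y (Smith, 2019) (Jones, 2021)."
)
print(flag_summary_paragraphs(draft))  # → [1]: paragraph 1 cites only one source
```

A flagged paragraph is not automatically bad, but it is a prompt to ask whether that paragraph is summarizing one source where it could be putting several in conversation.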

The most interesting literature reviews are often written as arguments (again, as mentioned at the beginning of the page, this is discipline-specific and doesn’t work for all situations). Often, the literature review is where you can establish your research as filling a particular gap or as relevant in a particular way. You have some chance to do this in your introduction in an article, but the literature review section gives a more extended opportunity to establish the conversation in the way you would like your readers to see it. You can choose the intellectual lineage you would like to be part of and whose definitions matter most to your thinking (mostly humanities-specific, but this goes for sciences as well). In addressing these points, you argue for your place in the conversation, which tends to make the lit review more compelling than a simple reporting of other sources.


Q: How do I do a review of related literature (RRL)?

How do I do the synthesis? Also, where can I get samples of RRLs?

Asked on 04 Jan, 2020

A review of related literature (RRL) is a detailed review of existing literature related to the topic of a thesis or dissertation. In an RRL, you talk about knowledge and findings from existing literature relevant to your topic. If you find gaps or conflicts in existing literature, you can also discuss these in your review, and if applicable, how you plan to address these gaps or resolve these conflicts through your study.

To undertake an RRL, therefore, you first need to identify relevant literature. You can do this through various sources, online and offline. Ensure you are saving all applicable resources because you will need to mention them in your paper. When going through the resources, make notes and identify key concepts of each resource to describe in the review.

Before starting the review, determine how you want to organize the review, that is, whether you wish to discuss the resources by themes, dates, extent of relevance, and so on.

When writing the review, begin by providing the background and purpose of the review. Then, begin discussing each of the identified resources according to the way you decided to organize them. For each, you can mention the title, author, publication, and date before describing the key concept and points. You may decide to list sections and sub-sections as in this sample or keep it more free-flowing as in this sample. [Note: In case any of these links don’t open, you may need to register yourself on the respective site(s).]

Finally, in the synthesis, you explain how the various concepts of each resource link with each other. You may decide to do this through a table or matrix, as illustrated here.

Related reading:

  • How to write the literature review of your research paper
  • Tips for effective literature searching and keeping up with new publications
  • Make your Google searches more precise: A few tips for researchers


Answered by Editage Insights on 21 Jan, 2020



  • CBE Life Sci Educ
  • v.21(3); Fall 2022

Literature Reviews, Theoretical Frameworks, and Conceptual Frameworks: An Introduction for New Biology Education Researchers

Julie A. Luft

† Department of Mathematics, Social Studies, and Science Education, Mary Frances Early College of Education, University of Georgia, Athens, GA 30602-7124

Sophia Jeong

‡ Department of Teaching & Learning, College of Education & Human Ecology, Ohio State University, Columbus, OH 43210

Robert Idsardi

§ Department of Biology, Eastern Washington University, Cheney, WA 99004

Grant Gardner

∥ Department of Biology, Middle Tennessee State University, Murfreesboro, TN 37132

To frame their work, biology education researchers need to consider the role of literature reviews, theoretical frameworks, and conceptual frameworks as critical elements of the research and writing process. However, these elements can be confusing for scholars new to education research. This Research Methods article is designed to provide an overview of each of these elements and delineate the purpose of each in the educational research process. We describe what biology education researchers should consider as they conduct literature reviews, identify theoretical frameworks, and construct conceptual frameworks. Clarifying these different components of educational research studies can be helpful to new biology education researchers and the biology education research community at large in situating their work in the broader scholarly literature.

INTRODUCTION

Discipline-based education research (DBER) involves the purposeful and situated study of teaching and learning in specific disciplinary areas ( Singer et al. , 2012 ). Studies in DBER are guided by research questions that reflect disciplines’ priorities and worldviews. Researchers can use quantitative data, qualitative data, or both to answer these research questions through a variety of methodological traditions. Across all methodologies, there are different methods associated with planning and conducting educational research studies that include the use of surveys, interviews, observations, artifacts, or instruments. Ensuring the coherence of these elements to the discipline’s perspective also involves situating the work in the broader scholarly literature. The tools for doing this include literature reviews, theoretical frameworks, and conceptual frameworks. However, the purpose and function of each of these elements is often confusing to new education researchers. The goal of this article is to introduce new biology education researchers to these three elements important in DBER scholarship and the broader educational literature.

The first element we discuss is a review of research (literature reviews), which highlights the need for a specific research question, study problem, or topic of investigation. Literature reviews situate the relevance of the study within a topic and a field. The process may seem familiar to science researchers entering DBER fields, but new researchers may still struggle in conducting the review. Booth et al. (2016b) highlight some of the challenges novice education researchers face when conducting a review of literature. They point out that novice researchers struggle in deciding how to focus the review, determining the scope of articles needed in the review, and knowing how to be critical of the articles in the review. Overcoming these challenges (and others) can help novice researchers construct a sound literature review that can inform the design of the study and help ensure the work makes a contribution to the field.

The second and third highlighted elements are theoretical and conceptual frameworks. These guide biology education research (BER) studies, and may be less familiar to science researchers. These elements are important in shaping the construction of new knowledge. Theoretical frameworks offer a way to explain and interpret the studied phenomenon, while conceptual frameworks clarify assumptions about the studied phenomenon. Despite the importance of these constructs in educational research, biology education researchers have noted the limited use of theoretical or conceptual frameworks in published work ( DeHaan, 2011 ; Dirks, 2011 ; Lo et al. , 2019 ). In reviewing articles published in CBE—Life Sciences Education ( LSE ) between 2015 and 2019, we found that fewer than 25% of the research articles had a theoretical or conceptual framework (see the Supplemental Information), and at times there was an inconsistent use of theoretical and conceptual frameworks. Clearly, these frameworks are challenging for published biology education researchers, which suggests the importance of providing some initial guidance to new biology education researchers.

Fortunately, educational researchers have increased their explicit use of these frameworks over time, and this is influencing educational research in science, technology, engineering, and mathematics (STEM) fields. For instance, a quick search for theoretical or conceptual frameworks in the abstracts of articles in Educational Research Complete (a common database for educational research) in STEM fields demonstrates a dramatic change over the last 20 years: from only 778 articles published between 2000 and 2010 to 5703 articles published between 2010 and 2020, a more than sevenfold increase. Greater recognition of the importance of these frameworks is contributing to DBER authors being more explicit about such frameworks in their studies.

Collectively, literature reviews, theoretical frameworks, and conceptual frameworks work to guide methodological decisions and the elucidation of important findings. Each offers a different perspective on the problem of study and is an essential element in all forms of educational research. As new researchers seek to learn about these elements, they will find different resources, a variety of perspectives, and many suggestions about the construction and use of these elements. The wide range of available information can overwhelm the new researcher who just wants to learn the distinction between these elements or how to craft them adequately.

Our goal in writing this paper is not to offer specific advice about how to write these sections in scholarly work. Instead, we wanted to introduce these elements to those who are new to BER and who are interested in better distinguishing one from the other. In this paper, we share the purpose of each element in BER scholarship, along with important points on its construction. We also provide references for additional resources that may be beneficial to better understanding each element. Table 1 summarizes the key distinctions among these elements.

Comparison of literature reviews, theoretical frameworks, and conceptual reviews

This article is written for the new biology education researcher who is just learning about these different elements or for scientists looking to become more involved in BER. It is a result of our own work as science education and biology education researchers, whether as graduate students and postdoctoral scholars or newly hired and established faculty members. This is the article we wish had been available as we started to learn about these elements or discussed them with new educational researchers in biology.

LITERATURE REVIEWS

Purpose of a Literature Review

A literature review is foundational to any research study in education or science. In education, a well-conceptualized and well-executed review provides a summary of the research that has already been done on a specific topic and identifies questions that remain to be answered, thus illustrating the current research project’s potential contribution to the field and the reasoning behind the methodological approach selected for the study ( Maxwell, 2012 ). BER is an evolving disciplinary area that is redefining areas of conceptual emphasis as well as orientations toward teaching and learning (e.g., Labov et al. , 2010 ; American Association for the Advancement of Science, 2011 ; Nehm, 2019 ). As a result, building comprehensive, critical, purposeful, and concise literature reviews can be a challenge for new biology education researchers.

Building Literature Reviews

There are different ways to approach and construct a literature review. Booth et al. (2016a) provide an overview that includes, for example, scoping reviews, which are focused only on notable studies and use a basic method of analysis, and integrative reviews, which are the result of exhaustive literature searches across different genres. Underlying each of these different review processes is attention to the Search process, Appraisal of articles, Synthesis of the literature, and Analysis: SALSA ( Booth et al. , 2016a ). This useful acronym can help the researcher focus on the process while building a specific type of review.

However, new educational researchers often have questions about literature reviews that are foundational to SALSA or other approaches. Common questions concern determining which literature pertains to the topic of study or the role of the literature review in the design of the study. This section addresses such questions broadly while providing general guidance for writing a narrative literature review that evaluates the most pertinent studies.

The literature review process should begin before the research is conducted. As Boote and Beile (2005, p. 3) suggested, researchers should be “scholars before researchers.” They point out that having a good working knowledge of the proposed topic helps illuminate avenues of study. Some subject areas have a deep body of work to read and reflect upon, providing a strong foundation for developing the research question(s). For instance, the teaching and learning of evolution is an area of long-standing interest in the BER community, generating many studies (e.g., Perry et al. , 2008 ; Barnes and Brownell, 2016 ) and reviews of research (e.g., Sickel and Friedrichsen, 2013 ; Ziadie and Andrews, 2018 ). Emerging areas of BER include the affective domain, issues of transfer, and metacognition ( Singer et al. , 2012 ). Many studies in these areas are transdisciplinary and not always specific to biology education (e.g., Rodrigo-Peiris et al. , 2018 ; Kolpikova et al. , 2019 ). These newer areas may require reading outside BER; fortunately, summaries of some of these topics can be found in the Current Insights section of the LSE website.

In focusing on a specific problem within a broader research strand, a new researcher will likely need to examine research outside BER. Depending upon the area of study, the expanded reading list might involve a mix of BER, DBER, and educational research studies. Determining the scope of the reading is not always straightforward. A simple way to focus one’s reading is to create a “summary phrase” or “research nugget,” which is a very brief descriptive statement about the study. It should focus on the essence of the study, for example, “first-year nonmajor students’ understanding of evolution,” “metacognitive prompts to enhance learning during biochemistry,” or “instructors’ inquiry-based instructional practices after professional development programming.” This type of phrase should help a new researcher identify two or more areas to review that pertain to the study. Focusing on recent research in the last 5 years is a good first step. Additional studies can be identified by reading relevant works referenced in those articles. It is also important to read seminal studies that are more than 5 years old. Reading a range of studies should give the researcher the necessary command of the subject in order to suggest a research question.

Given that the research question(s) arise from the literature review, the review should also substantiate the selected methodological approach. The review and research question(s) guide the researcher in determining how to collect and analyze data. Often the methodological approach used in a study is selected to contribute knowledge that expands upon what has been published previously about the topic (see Institute of Education Sciences and National Science Foundation, 2013 ). An emerging topic of study may need an exploratory approach that allows for a description of the phenomenon and development of a potential theory. This may, but does not necessarily, require a methodological approach that uses interviews, observations, surveys, or other instruments. An extensively studied topic may call for the additional understanding of specific factors or variables; this type of study would be well suited to a verification or a causal research design. These could entail a methodological approach that uses valid and reliable instruments, observations, or interviews to determine an effect in the studied event. In either of these examples, the researcher(s) may use a qualitative, quantitative, or mixed methods methodological approach.

Even with a good research question, there is still more reading to be done. The complexity and focus of the research question dictates the depth and breadth of the literature to be examined. Questions that connect multiple topics can require broad literature reviews. For instance, a study that explores the impact of a biology faculty learning community on the inquiry instruction of faculty could have the following review areas: learning communities among biology faculty, inquiry instruction among biology faculty, and inquiry instruction among biology faculty as a result of professional learning. Biology education researchers need to consider whether their literature review requires studies from different disciplines within or outside DBER. For the example given, it would be fruitful to look at research focused on learning communities with faculty in STEM fields or in general education fields that result in instructional change. It is important not to be too narrow or too broad when reading. When the conclusions of articles start to sound similar or no new insights are gained, the researcher likely has a good foundation for a literature review. This level of reading should allow the researcher to demonstrate a mastery in understanding the researched topic, explain the suitability of the proposed research approach, and point to the need for the refined research question(s).

The literature review should include the researcher’s evaluation and critique of the selected studies. A researcher may have a large collection of studies, but not all of the studies will follow standards important in the reporting of empirical work in the social sciences. The American Educational Research Association ( Duran et al. , 2006 ), for example, offers a general discussion about standards for such work: an adequate review of research informing the study, the existence of sound and appropriate data collection and analysis methods, and appropriate conclusions that do not overstep or underexplore the analyzed data. The Institute of Education Sciences and National Science Foundation (2013) also offer Common Guidelines for Education Research and Development that can be used to evaluate collected studies.

Because not all journals adhere to such standards, it is important that a researcher review each study to determine the quality of published research, per the guidelines suggested earlier. In some instances, the research may be fatally flawed. Examples of such flaws include data that do not pertain to the question, a lack of discussion about the data collection, poorly constructed instruments, or an inadequate analysis. These types of errors result in studies that are incomplete, error-laden, or inaccurate and should be excluded from the review. Most studies have limitations, and the author(s) often make them explicit. For instance, there may be an instructor effect, recognized bias in the analysis, or issues with the sample population. Limitations are usually addressed by the research team in some way to ensure a sound and acceptable research process. Occasionally, the limitations associated with the study can be significant and not addressed adequately, which leaves a consequential decision in the hands of the researcher. Providing critiques of studies in the literature review process gives the reader confidence that the researcher has carefully examined relevant work in preparation for the study and, ultimately, the manuscript.

A solid literature review clearly anchors the proposed study in the field and connects the research question(s), the methodological approach, and the discussion. Reviewing extant research leads to research questions that will contribute to what is known in the field. By summarizing what is known, the literature review points to what needs to be known, which in turn guides decisions about methodology. Finally, notable findings of the new study are discussed in reference to those described in the literature review.

Within published BER studies, literature reviews can be placed in different locations in an article. When included in the introductory section of the study, the first few paragraphs of the manuscript set the stage, with the literature review following the opening paragraphs. Cooper et al. (2019) illustrate this approach in their study of course-based undergraduate research experiences (CUREs). An introduction discussing the potential of CURES is followed by an analysis of the existing literature relevant to the design of CUREs that allows for novel student discoveries. Within this review, the authors point out contradictory findings among research on novel student discoveries. This clarifies the need for their study, which is described and highlighted through specific research aims.

A literature review can also make up a separate section in a paper. For example, the introduction to Todd et al. (2019) illustrates the need for their research topic by highlighting the potential of learning progressions (LPs) and suggesting that LPs may help mitigate learning loss in genetics. At the end of the introduction, the authors state their specific research questions. The review of literature following this opening section comprises two subsections. One focuses on learning loss in general and examines a variety of studies and meta-analyses from the disciplines of medical education, mathematics, and reading. The second section focuses specifically on LPs in genetics and highlights student learning in the midst of LPs. These separate reviews provide insights into the stated research question.

Suggestions and Advice

A well-conceptualized, comprehensive, and critical literature review reveals the understanding of the topic that the researcher brings to the study. Literature reviews should not be so big that there is no clear area of focus; nor should they be so narrow that no real research question arises. The task for a researcher is to craft an efficient literature review that offers a critical analysis of published work, articulates the need for the study, guides the methodological approach to the topic of study, and provides an adequate foundation for the discussion of the findings.

In our own writing of literature reviews, there are often many drafts. An early draft may seem well suited to the study because the need for and approach to the study are well described. However, as the results of the study are analyzed and findings begin to emerge, the existing literature review may be inadequate and need revision. The need for an expanded discussion about the research area can result in the inclusion of new studies that support the explanation of a potential finding. The literature review may also prove to be too broad. Refocusing on a specific area allows for more contemplation of a finding.

It should be noted that there are different types of literature reviews, and many books and articles have been written about the different ways to embark on these types of reviews. Among these different resources, the following may be helpful in considering how to refine the review process for scholarly journals:

  • Booth, A., Sutton, A., & Papaioannou, D. (2016a). Systematic approaches to a successful literature review (2nd ed.). Los Angeles, CA: Sage. This book addresses different types of literature reviews and offers important suggestions pertaining to defining the scope of the literature review and assessing extant studies.
  • Booth, W. C., Colomb, G. G., Williams, J. M., Bizup, J., & Fitzgerald, W. T. (2016b). The craft of research (4th ed.). Chicago: University of Chicago Press. This book can help the novice consider how to make the case for an area of study. While this book is not specifically about literature reviews, it offers suggestions about making the case for your study.
  • Galvan, J. L., & Galvan, M. C. (2017). Writing literature reviews: A guide for students of the social and behavioral sciences (7th ed.). Routledge. This book offers guidance on writing different types of literature reviews. For the novice researcher, there are useful suggestions for creating coherent literature reviews.

THEORETICAL FRAMEWORKS

Purpose of Theoretical Frameworks

As new education researchers may be less familiar with theoretical frameworks than with literature reviews, this discussion begins with an analogy. Envision a biologist, chemist, and physicist examining together the dramatic effect of a fog tsunami over the ocean. A biologist gazing at this phenomenon may be concerned with the effect of fog on various species. A chemist may be interested in the chemical composition of the fog as water vapor condenses around bits of salt. A physicist may be focused on the refraction of light to make fog appear to be “sitting” above the ocean. While observing the same “objective event,” the scientists are operating under different theoretical frameworks that provide a particular perspective or “lens” for the interpretation of the phenomenon. Each of these scientists brings specialized knowledge, experiences, and values to this phenomenon, and these influence the interpretation of the phenomenon. The scientists’ theoretical frameworks influence how they design and carry out their studies and interpret their data.

Within an educational study, a theoretical framework helps to explain a phenomenon through a particular lens and challenges and extends existing knowledge within the limitations of that lens. Theoretical frameworks are explicitly stated by an educational researcher in the paper’s framework, theory, or relevant literature section. The framework shapes the types of questions asked, guides the method by which data are collected and analyzed, and informs the discussion of the results of the study. It also reveals the researcher’s subjectivities, for example, values, social experience, and viewpoint ( Allen, 2017 ). It is essential that a novice researcher learn to explicitly state a theoretical framework, because all research questions are being asked from the researcher’s implicit or explicit assumptions of a phenomenon of interest ( Schwandt, 2000 ).

Selecting Theoretical Frameworks

Theoretical frameworks are one of the most contemplated elements in our work in educational research. In this section, we share three important considerations for new scholars selecting a theoretical framework.

The first step in identifying a theoretical framework involves reflecting on the phenomenon within the study and the assumptions aligned with the phenomenon. The phenomenon involves the studied event. There are many possibilities, for example, student learning, instructional approach, or group organization. A researcher holds assumptions about how the phenomenon will be affected, influenced, changed, or portrayed. It is ultimately the researcher’s assumption(s) about the phenomenon that aligns with a theoretical framework. An example can help illustrate how a researcher’s reflection on the phenomenon and acknowledgment of assumptions can result in the identification of a theoretical framework.

In our example, a biology education researcher may be interested in exploring how students’ learning of difficult biological concepts can be supported by the interactions of group members. The phenomenon of interest is the interactions among the peers, and the researcher assumes that more knowledgeable students are important in supporting the learning of the group. As a result, the researcher may draw on Vygotsky’s (1978) sociocultural theory of learning and development that is focused on the phenomenon of student learning in a social setting. This theory posits the critical nature of interactions among students and between students and teachers in the process of building knowledge. A researcher drawing upon this framework holds the assumption that learning is a dynamic social process involving questions and explanations among students in the classroom and that more knowledgeable peers play an important part in the process of building conceptual knowledge.

It is important to state at this point that there are many different theoretical frameworks. Some frameworks focus on learning and knowing, while other theoretical frameworks focus on equity, empowerment, or discourse. Some frameworks are well articulated, and others are still being refined. For a new researcher, it can be challenging to find a theoretical framework. One of the best ways to look for theoretical frameworks is through published works that highlight different frameworks.

When a theoretical framework is selected, it should clearly connect to all parts of the study. The framework should augment the study by adding a perspective that provides greater insights into the phenomenon. It should clearly align with the studies described in the literature review. For instance, a framework focused on learning would correspond to research that reported different learning outcomes for similar studies. The methods for data collection and analysis should also correspond to the framework. For instance, a study about instructional interventions could use a theoretical framework concerned with learning and could collect data about the effect of the intervention on what is learned. When the data are analyzed, the theoretical framework should provide added meaning to the findings, and the findings should align with the theoretical framework.

A study by Jensen and Lawson (2011) provides an example of how a theoretical framework connects different parts of the study. They compared undergraduate biology students in heterogeneous and homogeneous groups over the course of a semester. Jensen and Lawson (2011) assumed that learning involved collaboration and more knowledgeable peers, which made Vygotsky’s (1978) theory a good fit for their study. They predicted that students in heterogeneous groups would experience greater improvement in their reasoning abilities and science achievements with much of the learning guided by the more knowledgeable peers.

In the enactment of the study, they collected data about the instruction in traditional and inquiry-oriented classes, while the students worked in homogeneous or heterogeneous groups. To determine the effect of working in groups, the authors also measured students’ reasoning abilities and achievement. Each data-collection and analysis decision connected to understanding the influence of collaborative work.

Their findings highlighted aspects of Vygotsky’s (1978) theory of learning. One finding, for instance, was that inquiry instruction, as a whole, resulted in reasoning and achievement gains. This links to Vygotsky (1978), because inquiry instruction involves interactions among group members. A more nuanced finding was that group composition had a conditional effect. Heterogeneous groups performed better with more traditional and didactic instruction, regardless of the reasoning ability of the group members. Homogeneous groups worked better during interaction-rich activities for students with low reasoning ability. The authors attributed the variation to the different types of helping behaviors of students. High-performing students provided the answers, while students with low reasoning ability had to work collectively through the material. In terms of Vygotsky (1978), this finding provided new insights into the learning context in which productive interactions can occur for students.

Another consideration in the selection and use of a theoretical framework pertains to its orientation to the study. This can result in the theoretical framework prioritizing individuals, institutions, and/or policies (Anfara and Mertz, 2014). Frameworks that connect to individuals, for instance, could contribute to understanding their actions, learning, or knowledge. Institutional frameworks, on the other hand, offer insights into how institutions, organizations, or groups can influence individuals or materials. Policy theories provide ways to understand how national or local policies can dictate an emphasis on outcomes or instructional design. These different types of frameworks highlight different aspects in an educational setting, which influences the design of the study and the collection of data. In addition, these different frameworks offer a way to make sense of the data. Aligning the data collection and analysis with the framework ensures that a study is coherent and can contribute to the field.

New understandings emerge when different theoretical frameworks are used. For instance, Ebert-May et al. (2015) prioritized the individual level within conceptual change theory (see Posner et al., 1982). In this theory, an individual’s knowledge changes when it no longer fits the phenomenon. Ebert-May et al. (2015) designed a professional development program challenging biology postdoctoral scholars’ existing conceptions of teaching. The authors reported that the biology postdoctoral scholars’ teaching practices became more student-centered as they were challenged to explain their instructional decision making. According to the theory, the biology postdoctoral scholars’ dissatisfaction in their descriptions of teaching and learning initiated change in their knowledge and instruction. These results reveal how conceptual change theory can explain the learning of participants and guide the design of professional development programming.

The communities of practice (CoP) theoretical framework (Lave, 1988; Wenger, 1998) prioritizes the institutional level, suggesting that learning occurs when individuals learn from and contribute to the communities in which they reside. Grounded in the assumption of community learning, the literature on CoP suggests that, as individuals interact regularly with the other members of their group, they learn about the rules, roles, and goals of the community (Allee, 2000). A study conducted by Gehrke and Kezar (2017) used the CoP framework to understand organizational change by examining the involvement of individual faculty engaged in a cross-institutional CoP focused on changing the instructional practice of faculty at each institution. In the CoP, faculty members were involved in enhancing instructional materials within their department, which aligned with an overarching goal of instituting instruction that embraced active learning. Not surprisingly, Gehrke and Kezar (2017) revealed that faculty who perceived the community culture as important in their work cultivated institutional change. Furthermore, they found that institutional change was sustained when key leaders served as mentors and provided support for faculty, and as faculty themselves developed into leaders. This study reveals the complexity of individual roles in a CoP in supporting institutional instructional change.

It is important to explicitly state the theoretical framework used in a study, but elucidating a theoretical framework can be challenging for a new educational researcher. The literature review can help to identify an applicable theoretical framework. Focal areas of the review or central terms often connect to assumptions and assertions associated with the framework that pertain to the phenomenon of interest. Another way to identify a theoretical framework is self-reflection by the researcher on personal beliefs and understandings about the nature of knowledge the researcher brings to the study (Lysaght, 2011). In stating one’s beliefs and understandings related to the study (e.g., students construct their knowledge, instructional materials support learning), an orientation becomes evident that will suggest a particular theoretical framework. Theoretical frameworks are not arbitrary, but purposefully selected.

With experience, a researcher may find expanded roles for theoretical frameworks. Researchers may revise an existing framework that has limited explanatory power, or they may decide there is a need to develop a new theoretical framework. These frameworks can emerge from a current study or the need to explain a phenomenon in a new way. Researchers may also find that multiple theoretical frameworks are necessary to frame and explore a problem, as different frameworks can provide different insights into a problem.

Finally, it is important to recognize that choosing “x” theoretical framework does not necessarily mean a researcher chooses “y” methodology and so on, nor is there a clear-cut, linear process in selecting a theoretical framework for one’s study. In part, the nonlinear process of identifying a theoretical framework is what makes understanding and using theoretical frameworks challenging. For the novice scholar, contemplating and understanding theoretical frameworks is essential. Fortunately, there are articles and books that can help:

  • Creswell, J. W. (2018). Research design: Qualitative, quantitative, and mixed methods approaches (5th ed.). Los Angeles, CA: Sage. This book provides an overview of theoretical frameworks in general educational research.
  • Ding, L. (2019). Theoretical perspectives of quantitative physics education research. Physical Review Physics Education Research, 15(2), 020101-1–020101-13. This paper illustrates how a DBER field can use theoretical frameworks.
  • Nehm, R. (2019). Biology education research: Building integrative frameworks for teaching and learning about living systems. Disciplinary and Interdisciplinary Science Education Research, 1, ar15. https://doi.org/10.1186/s43031-019-0017-6. This paper articulates the need for studies in BER to explicitly state theoretical frameworks and provides examples of potential studies.
  • Patton, M. Q. (2015). Qualitative research & evaluation methods: Integrating theory and practice . Sage. This book also provides an overview of theoretical frameworks, but for both research and evaluation.

CONCEPTUAL FRAMEWORKS

Purpose of a Conceptual Framework

A conceptual framework is a description of the way a researcher understands the factors and/or variables that are involved in the study and their relationships to one another. The purpose of a conceptual framework is to articulate the concepts under study using relevant literature (Rocco and Plakhotnik, 2009) and to clarify the presumed relationships among those concepts (Rocco and Plakhotnik, 2009; Anfara and Mertz, 2014). Conceptual frameworks are different from theoretical frameworks in both their breadth and grounding in established findings. Whereas a theoretical framework articulates the lens through which a researcher views the work, the conceptual framework is often more mechanistic and malleable.

Conceptual frameworks are broader, encompassing both established theories (i.e., theoretical frameworks) and the researchers’ own emergent ideas. Emergent ideas, for example, may be rooted in informal and/or unpublished observations from experience. These emergent ideas would not be considered a “theory” if they are not yet tested, supported by systematically collected evidence, and peer reviewed. However, they do still play an important role in the way researchers approach their studies. The conceptual framework allows authors to clearly describe their emergent ideas so that connections among ideas in the study and the significance of the study are apparent to readers.

Constructing Conceptual Frameworks

Including a conceptual framework in a research study is important, but researchers often opt to include either a conceptual or a theoretical framework. Either may be adequate, but using both provides greater insight into the research approach. For instance, consider a research team that plans to test a novel component of an existing theory. In their study, they describe the existing theoretical framework that informs their work and then present their own conceptual framework. Within this conceptual framework, specific topics portray emergent ideas that are related to the theory. Describing both frameworks allows readers to better understand the researchers’ assumptions, orientations, and understanding of the concepts being investigated. For example, Connolly et al. (2018) included a conceptual framework that described how they applied a theoretical framework of social cognitive career theory (SCCT) to their study on teaching programs for doctoral students. In their conceptual framework, the authors described SCCT, explained how it applied to the investigation, and drew upon results from previous studies to justify the proposed connections between the theory and their emergent ideas.

In some cases, authors may be able to sufficiently describe their conceptualization of the phenomenon under study in an introduction alone, without a separate conceptual framework section. However, incomplete descriptions of how the researchers conceptualize the components of the study may limit the significance of the study by making the research less intelligible to readers. This is especially problematic when studying topics in which researchers use the same terms for different constructs or different terms for similar and overlapping constructs (e.g., inquiry, teacher beliefs, pedagogical content knowledge, or active learning). Authors must describe their conceptualization of a construct if the research is to be understandable and useful.

There are some key areas to consider regarding the inclusion of a conceptual framework in a study. To begin with, it is important to recognize that conceptual frameworks are constructed by the researchers conducting the study (Rocco and Plakhotnik, 2009; Maxwell, 2012). This is different from theoretical frameworks that are often taken from established literature. Researchers should bring together ideas from the literature, but they may be influenced by their own experiences as a student and/or instructor, the shared experiences of others, or thought experiments as they construct a description, model, or representation of their understanding of the phenomenon under study. This is an exercise in intellectual organization and clarity that often considers what is learned, known, and experienced. The conceptual framework makes these constructs explicitly visible to readers, who may have different understandings of the phenomenon based on their prior knowledge and experience. There is no single method to go about this intellectual work.

Reeves et al. (2016) is an example of an article that proposed a conceptual framework about graduate teaching assistant professional development evaluation and research. The authors used existing literature to create a novel framework that filled a gap in current research and practice related to the training of graduate teaching assistants. This conceptual framework can guide the systematic collection of data by other researchers because the framework describes the relationships among various factors that influence teaching and learning. The Reeves et al. (2016) conceptual framework may be modified as additional data are collected and analyzed by other researchers. This is not uncommon, as conceptual frameworks can serve as catalysts for concerted research efforts that systematically explore a phenomenon (e.g., Reynolds et al., 2012; Brownell and Kloser, 2015).

Sabel et al. (2017) used a conceptual framework in their exploration of how scaffolds, an external factor, interact with internal factors to support student learning. Their conceptual framework integrated principles from two theoretical frameworks, self-regulated learning and metacognition, to illustrate how the research team conceptualized students’ use of scaffolds in their learning (Figure 1). Sabel et al. (2017) created this model using their interpretations of these two frameworks in the context of their teaching.

Figure 1. Conceptual framework from Sabel et al. (2017).

A conceptual framework should describe the relationship among components of the investigation (Anfara and Mertz, 2014). These relationships should guide the researcher’s methods of approaching the study (Miles et al., 2014) and inform both the data to be collected and how those data should be analyzed. Explicitly describing the connections among the ideas allows the researcher to justify the importance of the study and the rigor of the research design. Just as importantly, these frameworks help readers understand why certain components of a system were not explored in the study. This is a challenge in education research, which is rooted in complex environments with many variables that are difficult to control.

For example, Sabel et al. (2017) stated: “Scaffolds, such as enhanced answer keys and reflection questions, can help students and instructors bridge the external and internal factors and support learning” (p. 3). They connected the scaffolds in the study to the three dimensions of metacognition and the eventual transformation of existing ideas into new or revised ideas. Their framework provides a rationale for focusing on how students use two different scaffolds, and not on other factors that may influence a student’s success (self-efficacy, use of active learning, exam format, etc.).

In constructing conceptual frameworks, researchers should address needed areas of study and/or contradictions discovered in literature reviews. By attending to these areas, researchers can strengthen their arguments for the importance of a study. For instance, conceptual frameworks can address how the current study will fill gaps in the research, resolve contradictions in existing literature, or suggest a new area of study. While a literature review describes what is known and not known about the phenomenon, the conceptual framework leverages these gaps in describing the current study (Maxwell, 2012). In the example of Sabel et al. (2017), the authors indicated there was a gap in the literature regarding how scaffolds engage students in metacognition to promote learning in large classes. Their study helps fill that gap by describing how scaffolds can support students in the three dimensions of metacognition: intelligibility, plausibility, and wide applicability. In another example, Lane (2016) integrated research from science identity, the ethic of care, the sense of belonging, and an expertise model of student success to form a conceptual framework that addressed the critiques of other frameworks. In a more recent example, Sbeglia et al. (2021) illustrated how a conceptual framework influences the methodological choices and inferences in studies by educational researchers.

Sometimes researchers draw upon the conceptual frameworks of other researchers. When a researcher’s conceptual framework closely aligns with an existing framework, the discussion may be brief. For example, Ghee et al. (2016) referred to portions of SCCT as their conceptual framework to explain the significance of their work on students’ self-efficacy and career interests. Because the authors’ conceptualization of this phenomenon aligned with a previously described framework, they briefly mentioned the conceptual framework and supplied additional citations that offer more detail for readers.

Within both the BER and the broader DBER communities, conceptual frameworks have been used to describe different constructs. For example, some researchers have used the term “conceptual framework” to describe students’ conceptual understandings of a biological phenomenon. This is distinct from a researcher’s conceptual framework of the educational phenomenon under investigation, which may also need to be explicitly described in the article. Other studies have presented a research logic model or flowchart of the research design as a conceptual framework. These constructions can be quite valuable in helping readers understand the data-collection and analysis process. However, a model depicting the study design does not serve the same role as a conceptual framework. Researchers need to avoid conflating these constructs by differentiating the researchers’ conceptual framework that guides the study from the research design, when applicable.

Explicitly describing conceptual frameworks is essential in depicting the focus of the study. We have found that being explicit in a conceptual framework means using accepted terminology, referencing prior work, and clearly noting connections between terms. This description can also highlight gaps in the literature or suggest potential contributions to the field of study. A well-elucidated conceptual framework can suggest additional studies that may be warranted. This can also spur other researchers to consider how they would approach the examination of a phenomenon and could result in a revised conceptual framework.

It can be challenging to create conceptual frameworks, but they are important. Below are two resources that could be helpful in constructing and presenting conceptual frameworks in educational research:

  • Maxwell, J. A. (2012). Qualitative research design: An interactive approach (3rd ed.). Los Angeles, CA: Sage. Chapter 3 in this book describes how to construct conceptual frameworks.
  • Ravitch, S. M., & Riggan, M. (2016). Reason & rigor: How conceptual frameworks guide research . Los Angeles, CA: Sage. This book explains how conceptual frameworks guide the research questions, data collection, data analyses, and interpretation of results.

CONCLUDING THOUGHTS

Literature reviews, theoretical frameworks, and conceptual frameworks are all important in DBER and BER. Robust literature reviews reinforce the importance of a study. Theoretical frameworks connect the study to the base of knowledge in educational theory and specify the researcher’s assumptions. Conceptual frameworks allow researchers to explicitly describe their conceptualization of the relationships among the components of the phenomenon under study. Table 1 provides a general overview of these components in order to assist biology education researchers in thinking about these elements.

It is important to emphasize that these different elements are intertwined. When these elements are aligned and complement one another, the study is coherent, and the study findings contribute to knowledge in the field. When literature reviews, theoretical frameworks, and conceptual frameworks are disconnected from one another, the study suffers. The point of the study is lost, suggested findings are unsupported, or important conclusions are invisible to the researcher. In addition, this misalignment may be costly in terms of time and money.

Conducting a literature review, selecting a theoretical framework, and building a conceptual framework are some of the most difficult elements of a research study. It takes time to understand the relevant research, identify a theoretical framework that provides important insights into the study, and formulate a conceptual framework that organizes the findings. In the research process, there is often a constant back and forth among these elements as the study evolves. With an ongoing refinement of the review of literature, clarification of the theoretical framework, and articulation of a conceptual framework, a sound study can emerge that makes a contribution to the field. This is the goal of BER and education research.

REFERENCES

  • Allee, V. (2000). Knowledge networks and communities of learning. OD Practitioner, 32(4), 4–13.
  • Allen, M. (2017). The Sage encyclopedia of communication research methods (Vols. 1–4). Los Angeles, CA: Sage. https://doi.org/10.4135/9781483381411
  • American Association for the Advancement of Science. (2011). Vision and change in undergraduate biology education: A call to action. Washington, DC.
  • Anfara, V. A., Mertz, N. T. (2014). Setting the stage. In Anfara, V. A., Mertz, N. T. (Eds.), Theoretical frameworks in qualitative research (pp. 1–22). Sage.
  • Barnes, M. E., Brownell, S. E. (2016). Practices and perspectives of college instructors on addressing religious beliefs when teaching evolution. CBE—Life Sciences Education, 15(2), ar18. https://doi.org/10.1187/cbe.15-11-0243
  • Boote, D. N., Beile, P. (2005). Scholars before researchers: On the centrality of the dissertation literature review in research preparation. Educational Researcher, 34(6), 3–15. https://doi.org/10.3102/0013189x034006003
  • Booth, A., Sutton, A., Papaioannou, D. (2016a). Systematic approaches to a successful literature review (2nd ed.). Los Angeles, CA: Sage.
  • Booth, W. C., Colomb, G. G., Williams, J. M., Bizup, J., Fitzgerald, W. T. (2016b). The craft of research (4th ed.). Chicago, IL: University of Chicago Press.
  • Brownell, S. E., Kloser, M. J. (2015). Toward a conceptual framework for measuring the effectiveness of course-based undergraduate research experiences in undergraduate biology. Studies in Higher Education, 40(3), 525–544. https://doi.org/10.1080/03075079.2015.1004234
  • Connolly, M. R., Lee, Y. G., Savoy, J. N. (2018). The effects of doctoral teaching development on early-career STEM scholars’ college teaching self-efficacy. CBE—Life Sciences Education, 17(1), ar14. https://doi.org/10.1187/cbe.17-02-0039
  • Cooper, K. M., Blattman, J. N., Hendrix, T., Brownell, S. E. (2019). The impact of broadly relevant novel discoveries on student project ownership in a traditional lab course turned CURE. CBE—Life Sciences Education, 18(4), ar57. https://doi.org/10.1187/cbe.19-06-0113
  • Creswell, J. W. (2018). Research design: Qualitative, quantitative, and mixed methods approaches (5th ed.). Los Angeles, CA: Sage.
  • DeHaan, R. L. (2011). Education research in the biological sciences: A nine decade review (Paper commissioned by the NAS/NRC Committee on the Status, Contributions, and Future Directions of Discipline-Based Education Research). Washington, DC: National Academies Press. Retrieved May 20, 2022, from www7.nationalacademies.org/bose/DBER_Meeting2_commissioned_papers_page.html
  • Ding, L. (2019). Theoretical perspectives of quantitative physics education research. Physical Review Physics Education Research, 15(2), 020101.
  • Dirks, C. (2011). The current status and future direction of biology education research. Paper presented at: Second Committee Meeting on the Status, Contributions, and Future Directions of Discipline-Based Education Research, 18–19 October (Washington, DC). Retrieved May 20, 2022, from http://sites.nationalacademies.org/DBASSE/BOSE/DBASSE_071087
  • Duran, R. P., Eisenhart, M. A., Erickson, F. D., Grant, C. A., Green, J. L., Hedges, L. V., Schneider, B. L. (2006). Standards for reporting on empirical social science research in AERA publications: American Educational Research Association. Educational Researcher, 35(6), 33–40.
  • Ebert-May, D., Derting, T. L., Henkel, T. P., Middlemis Maher, J., Momsen, J. L., Arnold, B., Passmore, H. A. (2015). Breaking the cycle: Future faculty begin teaching with learner-centered strategies after professional development. CBE—Life Sciences Education, 14(2), ar22. https://doi.org/10.1187/cbe.14-12-0222
  • Galvan, J. L., Galvan, M. C. (2017). Writing literature reviews: A guide for students of the social and behavioral sciences (7th ed.). New York, NY: Routledge. https://doi.org/10.4324/9781315229386
  • Gehrke, S., Kezar, A. (2017). The roles of STEM faculty communities of practice in institutional and departmental reform in higher education. American Educational Research Journal, 54(5), 803–833. https://doi.org/10.3102/0002831217706736
  • Ghee, M., Keels, M., Collins, D., Neal-Spence, C., Baker, E. (2016). Fine-tuning summer research programs to promote underrepresented students’ persistence in the STEM pathway. CBE—Life Sciences Education, 15(3), ar28. https://doi.org/10.1187/cbe.16-01-0046
  • Institute of Education Sciences & National Science Foundation. (2013). Common guidelines for education research and development. Retrieved May 20, 2022, from www.nsf.gov/pubs/2013/nsf13126/nsf13126.pdf
  • Jensen, J. L., Lawson, A. (2011). Effects of collaborative group composition and inquiry instruction on reasoning gains and achievement in undergraduate biology. CBE—Life Sciences Education, 10(1), 64–73.
  • Kolpikova, E. P., Chen, D. C., Doherty, J. H. (2019). Does the format of preclass reading quizzes matter? An evaluation of traditional and gamified, adaptive preclass reading quizzes. CBE—Life Sciences Education, 18(4), ar52. https://doi.org/10.1187/cbe.19-05-0098
  • Labov, J. B., Reid, A. H., Yamamoto, K. R. (2010). Integrated biology and undergraduate science education: A new biology education for the twenty-first century? CBE—Life Sciences Education, 9(1), 10–16. https://doi.org/10.1187/cbe.09-12-0092
  • Lane, T. B. (2016). Beyond academic and social integration: Understanding the impact of a STEM enrichment program on the retention and degree attainment of underrepresented students. CBE—Life Sciences Education, 15(3), ar39. https://doi.org/10.1187/cbe.16-01-0070
  • Lave, J. (1988). Cognition in practice: Mind, mathematics and culture in everyday life. New York, NY: Cambridge University Press.
  • Lo, S. M., Gardner, G. E., Reid, J., Napoleon-Fanis, V., Carroll, P., Smith, E., Sato, B. K. (2019). Prevailing questions and methodologies in biology education research: A longitudinal analysis of research in CBE—Life Sciences Education and at the Society for the Advancement of Biology Education Research. CBE—Life Sciences Education, 18(1), ar9. https://doi.org/10.1187/cbe.18-08-0164
  • Lysaght, Z. (2011). Epistemological and paradigmatic ecumenism in “Pasteur’s quadrant:” Tales from doctoral research. In Official Conference Proceedings of the Third Asian Conference on Education in Osaka, Japan. Retrieved May 20, 2022, from http://iafor.org/ace2011_offprint/ACE2011_offprint_0254.pdf
  • Maxwell, J. A. (2012). Qualitative research design: An interactive approach (3rd ed.). Los Angeles, CA: Sage.
  • Miles, M. B., Huberman, A. M., Saldaña, J. (2014). Qualitative data analysis (3rd ed.). Los Angeles, CA: Sage.
  • Nehm, R. (2019). Biology education research: Building integrative frameworks for teaching and learning about living systems. Disciplinary and Interdisciplinary Science Education Research, 1, ar15. https://doi.org/10.1186/s43031-019-0017-6
  • Patton, M. Q. (2015). Qualitative research & evaluation methods: Integrating theory and practice. Los Angeles, CA: Sage.
  • Perry, J., Meir, E., Herron, J. C., Maruca, S., Stal, D. (2008). Evaluating two approaches to helping college students understand evolutionary trees through diagramming tasks. CBE—Life Sciences Education, 7(2), 193–201. https://doi.org/10.1187/cbe.07-01-0007
  • Posner, G. J., Strike, K. A., Hewson, P. W., Gertzog, W. A. (1982). Accommodation of a scientific conception: Toward a theory of conceptual change. Science Education, 66(2), 211–227.
  • Ravitch, S. M., Riggan, M. (2016). Reason & rigor: How conceptual frameworks guide research. Los Angeles, CA: Sage.
  • Reeves, T. D., Marbach-Ad, G., Miller, K. R., Ridgway, J., Gardner, G. E., Schussler, E. E., Wischusen, E. W. (2016). A conceptual framework for graduate teaching assistant professional development evaluation and research. CBE—Life Sciences Education, 15(2), es2. https://doi.org/10.1187/cbe.15-10-0225
  • Reynolds, J. A., Thaiss, C., Katkin, W., Thompson, R. J., Jr. (2012). Writing-to-learn in undergraduate science education: A community-based, conceptually driven approach. CBE—Life Sciences Education, 11(1), 17–25. https://doi.org/10.1187/cbe.11-08-0064
  • Rocco, T. S., Plakhotnik, M. S. (2009). Literature reviews, conceptual frameworks, and theoretical frameworks: Terms, functions, and distinctions. Human Resource Development Review, 8(1), 120–130. https://doi.org/10.1177/1534484309332617
  • Rodrigo-Peiris, T., Xiang, L., Cassone, V. M. (2018). A low-intensity, hybrid design between a “traditional” and a “course-based” research experience yields positive outcomes for science undergraduate freshmen and shows potential for large-scale application. CBE—Life Sciences Education, 17(4), ar53. https://doi.org/10.1187/cbe.17-11-0248
  • Sabel, J. L., Dauer, J. T., Forbes, C. T. (2017). Introductory biology students’ use of enhanced answer keys and reflection questions to engage in metacognition and enhance understanding. CBE—Life Sciences Education, 16(3), ar40. https://doi.org/10.1187/cbe.16-10-0298
  • Sbeglia, G. C., Goodridge, J. A., Gordon, L. H., Nehm, R. H. (2021). Are faculty changing? How reform frameworks, sampling intensities, and instrument measures impact inferences about student-centered teaching practices. CBE—Life Sciences Education, 20(3), ar39. https://doi.org/10.1187/cbe.20-11-0259
  • Schwandt, T. A. (2000). Three epistemological stances for qualitative inquiry: Interpretivism, hermeneutics, and social constructionism. In Denzin, N. K., Lincoln, Y. S. (Eds.), Handbook of qualitative research (2nd ed., pp. 189–213). Los Angeles, CA: Sage.
  • Sickel, A. J., Friedrichsen, P. (2013). Examining the evolution education literature with a focus on teachers: Major findings, goals for teacher preparation, and directions for future research. Evolution: Education and Outreach, 6(1), 23. https://doi.org/10.1186/1936-6434-6-23
  • Singer, S. R., Nielsen, N. R., Schweingruber, H. A. (2012). Discipline-based education research: Understanding and improving learning in undergraduate science and engineering. Washington, DC: National Academies Press.
  • Todd, A., Romine, W. L., Correa-Menendez, J. (2019). Modeling the transition from a phenotypic to genotypic conceptualization of genetics in a university-level introductory biology context. Research in Science Education, 49(2), 569–589. https://doi.org/10.1007/s11165-017-9626-2
  • Vygotsky, L. S. (1978). Mind in society: The development of higher psychological processes. Cambridge, MA: Harvard University Press.
  • Wenger, E. (1998). Communities of practice: Learning as a social system. Systems Thinker, 9(5), 2–3.
  • Ziadie, M. A., Andrews, T. C. (2018). Moving evolution education forward: A systematic analysis of literature to identify gaps in collective knowledge for teaching. CBE—Life Sciences Education, 17(1), ar11. https://doi.org/10.1187/cbe.17-08-0190


Systematic Review | Definition, Example & Guide

Published on June 15, 2022 by Shaun Turney . Revised on November 20, 2023.

A systematic review is a type of review that uses repeatable methods to find, select, and synthesize all available evidence. It answers a clearly formulated research question and explicitly states the methods used to arrive at the answer.

For example, in their systematic review of probiotics for treating eczema, Boyle and colleagues answered the question "What is the effectiveness of probiotics in reducing eczema symptoms and improving quality of life in patients with eczema?"

In this context, a probiotic is a health product that contains live microorganisms and is taken by mouth. Eczema is a common skin condition that causes red, itchy skin.

Table of contents

  • What is a systematic review?
  • Systematic review vs. meta-analysis
  • Systematic review vs. literature review
  • Systematic review vs. scoping review
  • When to conduct a systematic review
  • Pros and cons of systematic reviews
  • Step-by-step example of a systematic review
  • Other interesting articles
  • Frequently asked questions about systematic reviews

A review is an overview of the research that’s already been completed on a topic.

What makes a systematic review different from other types of reviews is that the research methods are designed to reduce bias . The methods are repeatable, and the approach is formal and systematic:

  • Formulate a research question
  • Develop a protocol
  • Search for all relevant studies
  • Apply the selection criteria
  • Extract the data
  • Synthesize the data
  • Write and publish a report

Although multiple sets of guidelines exist, the Cochrane Handbook for Systematic Reviews of Interventions is among the most widely used. It provides detailed guidelines on how to complete each step of the systematic review process.

Systematic reviews are most commonly used in medical and public health research, but they can also be found in other disciplines.

Systematic reviews typically answer their research question by synthesizing all available evidence and evaluating the quality of the evidence. Synthesizing means bringing together different information to tell a single, cohesive story. The synthesis can be narrative ( qualitative ), quantitative , or both.


Systematic reviews often quantitatively synthesize the evidence using a meta-analysis. A meta-analysis is not a type of review but a statistical technique that combines the results of two or more studies, usually to estimate an effect size .

A literature review is a type of review that uses a less systematic and formal approach than a systematic review. Typically, an expert in a topic will qualitatively summarize and evaluate previous work, without using a formal, explicit method.

Although literature reviews are often less time-consuming and can be insightful or helpful, they have a higher risk of bias and are less transparent than systematic reviews.

Similar to a systematic review, a scoping review is a type of review that tries to minimize bias by using transparent and repeatable methods.

However, a scoping review isn’t a type of systematic review. The most important difference is the goal: rather than answering a specific question, a scoping review explores a topic. The researcher tries to identify the main concepts, theories, and evidence, as well as gaps in the current research.

Sometimes scoping reviews are an exploratory preparation step for a systematic review, and sometimes they are a standalone project.


A systematic review is a good choice of review if you want to answer a question about the effectiveness of an intervention , such as a medical treatment.

To conduct a systematic review, you’ll need the following:

  • A precise question , usually about the effectiveness of an intervention. The question needs to be about a topic that’s previously been studied by multiple researchers. If there’s no previous research, there’s nothing to review.
  • If you’re doing a systematic review on your own (e.g., for a research paper or thesis ), you should take appropriate measures to ensure the validity and reliability of your research.
  • Access to databases and journal archives. Often, your educational institution provides you with access.
  • Time. A professional systematic review is a time-consuming process: it will take the lead author about six months of full-time work. If you’re a student, you should narrow the scope of your systematic review and stick to a tight schedule.
  • Bibliographic, word-processing, spreadsheet, and statistical software . For example, you could use EndNote, Microsoft Word, Excel, and SPSS.

Systematic reviews have many pros .

  • They minimize research bias by considering all available evidence and evaluating each study for bias.
  • Their methods are transparent , so they can be scrutinized by others.
  • They’re thorough : they summarize all available evidence.
  • They can be replicated and updated by others.

Systematic reviews also have a few cons .

  • They’re time-consuming .
  • They’re narrow in scope : they only answer the precise research question.

The 7 steps for conducting a systematic review are explained with an example.

Step 1: Formulate a research question

Formulating the research question is probably the most important step of a systematic review. A clear research question will:

  • Allow you to more effectively communicate your research to other researchers and practitioners
  • Guide your decisions as you plan and conduct your systematic review

A good research question for a systematic review has four components, which you can remember with the acronym PICO :

  • Population(s) or problem(s)
  • Intervention(s)
  • Comparison(s)
  • Outcome(s)

You can rearrange these four components to write your research question:

  • What is the effectiveness of I versus C for O in P ?

Sometimes, you may want to include a fifth component, the type of study design . In this case, the acronym is PICOT .

  • Type of study design(s)

In the example review, the question components were:

  • The population of patients with eczema
  • The intervention of probiotics
  • In comparison to no treatment, placebo , or non-probiotic treatment
  • The outcome of changes in participant-, parent-, and doctor-rated symptoms of eczema and quality of life
  • Randomized controlled trials, a type of study design

Their research question was:

  • What is the effectiveness of probiotics versus no treatment, a placebo, or a non-probiotic treatment for reducing eczema symptoms and improving quality of life in patients with eczema?
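As a sketch, the PICO template can be expressed as a small helper function. This is purely illustrative: the function and its exact wording are hypothetical, not part of any formal PICO tooling.

```python
# Hypothetical helper: slot PICO components into the question template
# "What is the effectiveness of I versus C for O in P?".

def pico_question(population, intervention, comparison, outcome):
    """Return a PICO-style effectiveness question."""
    return (f"What is the effectiveness of {intervention} versus "
            f"{comparison} for {outcome} in {population}?")

print(pico_question(
    population="patients with eczema",
    intervention="probiotics",
    comparison="no treatment, a placebo, or a non-probiotic treatment",
    outcome="reducing eczema symptoms and improving quality of life",
))
```

Filling in the eczema example reproduces the research question above.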

Step 2: Develop a protocol

A protocol is a document that contains your research plan for the systematic review. This is an important step because having a plan allows you to work more efficiently and reduces bias.

Your protocol should include the following components:

  • Background information: Provide the context of the research question, including why it’s important.
  • Research objective(s): Rephrase your research question as an objective.
  • Selection criteria: State how you’ll decide which studies to include or exclude from your review.
  • Search strategy: Discuss your plan for finding studies.
  • Analysis: Explain what information you’ll collect from the studies and how you’ll synthesize the data.

If you’re a professional seeking to publish your review, it’s a good idea to bring together an advisory committee . This is a group of about six people who have experience in the topic you’re researching. They can help you make decisions about your protocol.

It’s highly recommended to register your protocol. Registering your protocol means submitting it to a database such as PROSPERO or ClinicalTrials.gov .

Step 3: Search for all relevant studies

Searching for relevant studies is the most time-consuming step of a systematic review.

To reduce bias, it’s important to search for relevant studies very thoroughly. Your strategy will depend on your field and your research question, but sources generally fall into these four categories:

  • Databases: Search multiple databases of peer-reviewed literature, such as PubMed or Scopus . Think carefully about how to phrase your search terms and include multiple synonyms of each word. Use Boolean operators if relevant.
  • Handsearching: In addition to searching the primary sources using databases, you’ll also need to search manually. One strategy is to scan relevant journals or conference proceedings. Another strategy is to scan the reference lists of relevant studies.
  • Gray literature: Gray literature includes documents produced by governments, universities, and other institutions that aren’t published by traditional publishers. Graduate student theses are an important type of gray literature, which you can search using the Networked Digital Library of Theses and Dissertations (NDLTD) . In medicine, clinical trial registries are another important type of gray literature.
  • Experts: Contact experts in the field to ask if they have unpublished studies that should be included in your review.
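To illustrate the database-search advice above, here is a minimal sketch (a hypothetical helper, not part of any database API) that ORs together the synonyms for each concept and ANDs the concepts into a Boolean search string:

```python
# Hypothetical sketch: combine synonym groups into a Boolean search string.
# Terms within a group are joined with OR; groups are joined with AND.

def build_search_string(concept_groups):
    """Join synonyms with OR inside parentheses, then AND the groups."""
    clauses = []
    for terms in concept_groups:
        clause = " OR ".join(f'"{t}"' if " " in t else t for t in terms)
        clauses.append(f"({clause})")
    return " AND ".join(clauses)

query = build_search_string([
    ["probiotic", "probiotics", "lactobacillus"],  # intervention synonyms
    ["eczema", "atopic dermatitis"],               # problem synonyms
])
print(query)
# (probiotic OR probiotics OR lactobacillus) AND (eczema OR "atopic dermatitis")
```

Multi-word terms are quoted so databases treat them as phrases; real databases add field tags and controlled vocabulary on top of this.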

At this stage of your review, you won’t read the articles yet. Simply save any potentially relevant citations using bibliographic software, such as Scribbr’s APA or MLA Generator .

In the example review, Boyle and colleagues searched the following sources:

  • Databases: EMBASE, PsycINFO, AMED, LILACS, and ISI Web of Science
  • Handsearch: Conference proceedings and reference lists of articles
  • Gray literature: The Cochrane Library, the metaRegister of Controlled Trials, and the Ongoing Skin Trials Register
  • Experts: Authors of unpublished registered trials, pharmaceutical companies, and manufacturers of probiotics

Step 4: Apply the selection criteria

Applying the selection criteria is a three-person job. Two of you will independently read the studies and decide which to include in your review based on the selection criteria you established in your protocol . The third person’s job is to break any ties.

To increase inter-rater reliability , ensure that everyone thoroughly understands the selection criteria before you begin.

If you’re writing a systematic review as a student for an assignment, you might not have a team. In this case, you’ll have to apply the selection criteria on your own; you can mention this as a limitation in your paper’s discussion.
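One common way to quantify inter-rater reliability between two screeners is Cohen's kappa. The sketch below (with made-up include/exclude decisions) computes it from scratch:

```python
# Illustrative sketch (not from the source): Cohen's kappa for two screeners'
# include/exclude decisions, correcting observed agreement for chance.

def cohens_kappa(rater_a, rater_b):
    assert len(rater_a) == len(rater_b)
    n = len(rater_a)
    observed = sum(x == y for x, y in zip(rater_a, rater_b)) / n
    labels = set(rater_a) | set(rater_b)
    expected = sum(
        (rater_a.count(label) / n) * (rater_b.count(label) / n)
        for label in labels
    )
    return (observed - expected) / (1 - expected)

a = ["include", "exclude", "include", "exclude", "include", "include"]
b = ["include", "exclude", "exclude", "exclude", "include", "include"]
print(round(cohens_kappa(a, b), 2))  # → 0.67
```

A kappa near 1 indicates strong agreement; values well below that suggest the selection criteria need to be clarified before screening continues.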

You should apply the selection criteria in two phases:

  • Based on the titles and abstracts : Decide whether each article potentially meets the selection criteria based on the information provided in the abstracts.
  • Based on the full texts: Download the articles that weren’t excluded during the first phase. If an article isn’t available online or through your library, you may need to contact the authors to ask for a copy. Read the articles and decide which articles meet the selection criteria.

It’s very important to keep a meticulous record of why you included or excluded each article. When the selection process is complete, you can summarize what you did using a PRISMA flow diagram .

In the example review, Boyle and colleagues retrieved the full texts of the studies remaining after the title and abstract screening. Boyle and Tang read through the articles to decide whether any more studies needed to be excluded based on the selection criteria.

When Boyle and Tang disagreed about whether a study should be excluded, they discussed it with Varigos until the three researchers came to an agreement.

Step 5: Extract the data

Extracting the data means collecting information from the selected studies in a systematic way. There are two types of information you need to collect from each study:

  • Information about the study’s methods and results . The exact information will depend on your research question, but it might include the year, study design , sample size, context, research findings , and conclusions. If any data are missing, you’ll need to contact the study’s authors.
  • Your judgment of the quality of the evidence, including risk of bias .

You should collect this information using forms. You can find sample forms in The Registry of Methods and Tools for Evidence-Informed Decision Making and the Grading of Recommendations, Assessment, Development and Evaluations Working Group .
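As an illustrative sketch (the field names and the study ID are hypothetical, not drawn from any official template), a structured extraction form might be represented like this:

```python
# Hypothetical data-extraction form as a dataclass: one record per study,
# covering methods/results fields plus a quality judgment.
from dataclasses import dataclass

@dataclass
class ExtractionForm:
    study_id: str
    year: int
    design: str        # e.g. "randomized controlled trial"
    sample_size: int
    findings: str
    risk_of_bias: str  # e.g. "low", "some concerns", "high"
    notes: str = ""

record = ExtractionForm(
    study_id="eczema-trial-01",   # hypothetical identifier
    year=2006,
    design="randomized controlled trial",
    sample_size=120,
    findings="no significant reduction in eczema symptoms",
    risk_of_bias="low",
)
```

Using one fixed structure per study keeps both extractors' records directly comparable, which makes the later synthesis step much easier.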

Extracting the data is also a three-person job. Two people should do this step independently, and the third person will resolve any disagreements.

They also collected data about possible sources of bias, such as how the study participants were randomized into the control and treatment groups.

Step 6: Synthesize the data

Synthesizing the data means bringing together the information you collected into a single, cohesive story. There are two main approaches to synthesizing the data:

  • Narrative ( qualitative ): Summarize the information in words. You’ll need to discuss the studies and assess their overall quality.
  • Quantitative : Use statistical methods to summarize and compare data from different studies. The most common quantitative approach is a meta-analysis , which allows you to combine results from multiple studies into a summary result.

Generally, you should use both approaches together whenever possible. If you don’t have enough data, or the data from different studies aren’t comparable, then you can take just a narrative approach. However, you should justify why a quantitative approach wasn’t possible.
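To make the quantitative approach concrete, the sketch below implements the simplest meta-analytic model, an inverse-variance fixed-effect combination of per-study effect sizes. The numbers are assumed for illustration, not taken from Boyle and colleagues' data.

```python
import math

# Illustrative fixed-effect meta-analysis: each study is weighted by the
# inverse of its variance, so more precise studies count for more.

def fixed_effect(effects, std_errors):
    """Inverse-variance weighted fixed-effect pooling."""
    weights = [1 / se ** 2 for se in std_errors]
    pooled = sum(w * e for w, e in zip(weights, effects)) / sum(weights)
    pooled_se = math.sqrt(1 / sum(weights))
    return pooled, pooled_se

effects = [-0.20, -0.05, 0.10]   # per-study effect sizes (assumed)
std_errors = [0.10, 0.15, 0.20]  # per-study standard errors (assumed)
pooled, se = fixed_effect(effects, std_errors)
print(f"pooled effect {pooled:.3f}, 95% CI ± {1.96 * se:.3f}")
```

Real meta-analyses usually also fit a random-effects model and report heterogeneity statistics, which this minimal sketch omits.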

Boyle and colleagues also divided the studies into subgroups, such as studies about babies, children, and adults, and analyzed the effect sizes within each group.

Step 7: Write and publish a report

The purpose of writing a systematic review article is to share the answer to your research question and explain how you arrived at this answer.

Your article should include the following sections:

  • Abstract : A summary of the review
  • Introduction : Including the rationale and objectives
  • Methods : Including the selection criteria, search method, data extraction method, and synthesis method
  • Results : Including results of the search and selection process, study characteristics, risk of bias in the studies, and synthesis results
  • Discussion : Including interpretation of the results and limitations of the review
  • Conclusion : The answer to your research question and implications for practice, policy, or research

To verify that your report includes everything it needs, you can use the PRISMA checklist .

Once your report is written, you can publish it in a systematic review database, such as the Cochrane Database of Systematic Reviews , and/or in a peer-reviewed journal.

In their report, Boyle and colleagues concluded that probiotics cannot be recommended for reducing eczema symptoms or improving quality of life in patients with eczema.

Note: Generative AI tools like ChatGPT can be useful at various stages of the writing and research process and can help you to write your systematic review. However, we strongly advise against trying to pass AI-generated text off as your own work.

If you want to know more about statistics , methodology , or research bias , make sure to check out some of our other articles with explanations and examples.

  • Student’s  t -distribution
  • Normal distribution
  • Null and Alternative Hypotheses
  • Chi square tests
  • Confidence interval
  • Quartiles & Quantiles
  • Cluster sampling
  • Stratified sampling
  • Data cleansing
  • Reproducibility vs Replicability
  • Peer review
  • Prospective cohort study

Research bias

  • Implicit bias
  • Cognitive bias
  • Placebo effect
  • Hawthorne effect
  • Hindsight bias
  • Affect heuristic
  • Social desirability bias

A literature review is a survey of scholarly sources (such as books, journal articles, and theses) related to a specific topic or research question .

It is often written as part of a thesis, dissertation , or research paper , in order to situate your work in relation to existing knowledge.

A literature review is a survey of credible sources on a topic, often used in dissertations , theses, and research papers . Literature reviews give an overview of knowledge on a subject, helping you identify relevant theories and methods, as well as gaps in existing research. Literature reviews are set up similarly to other  academic texts , with an introduction , a main body, and a conclusion .

An  annotated bibliography is a list of  source references that has a short description (called an annotation ) for each of the sources. It is often assigned as part of the research process for a  paper .  

A systematic review is secondary research because it uses existing research. You don’t collect new data yourself.

Cite this Scribbr article

Turney, S. (2023, November 20). Systematic Review | Definition, Example & Guide. Scribbr. Retrieved February 15, 2024, from https://www.scribbr.com/methodology/systematic-review/


World Conference on Qualitative Research

WCQR 2022: Computer Supported Qualitative Research, pp. 194–210

How to Operate Literature Review Through Qualitative and Quantitative Analysis Integration?

  • Eduardo Amadeu Dutra Moresi (ORCID: orcid.org/0000-0001-6058-3883)
  • Isabel Pinho (ORCID: orcid.org/0000-0003-1714-8979)
  • António Pedro Costa (ORCID: orcid.org/0000-0002-4644-5879)

Conference paper. First Online: 05 May 2022.

Part of the Lecture Notes in Networks and Systems book series (LNNS, volume 466)

Usually, a literature review takes time and becomes a demanding step in any research project. The proposal presented in this article intends to structure this work in an organised and transparent way for all project participants and to support the structured elaboration of its report. Integrating qualitative and quantitative analysis provides opportunities to carry out a solid, practical, and in-depth literature review. The purpose of this article is to present a guide that explores the potential of integrating qualitative and quantitative analysis to develop a solid and replicable literature review. The paper proposes an integrative approach comprising six steps: 1) research design; 2) data collection for bibliometric analysis; 3) search string refinement; 4) bibliometric analysis; 5) qualitative analysis; and 6) report and dissemination of research results. These guidelines can facilitate the bibliographic analysis process and the selection of a sample of relevant articles. Once the sample of publications is defined, it is possible to conduct a deep analysis through content analysis. Software tools, such as R Bibliometrix, VOSviewer, Gephi, yEd and webQDA, can be used for practical work during all collection, analysis, and reporting processes. From a large amount of data, selecting a sample of relevant literature is facilitated by interpreting bibliometric results. The specification of the methodology allows the replication and updating of the literature review in an interactive, systematic, and collaborative way, giving a more transparent and organised approach to improving the literature review.
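As a minimal sketch of the kind of computation behind the bibliometric step (the records here are invented; real pipelines would use the tools named above, such as Bibliometrix or VOSviewer), keyword co-occurrence counting looks like this:

```python
from collections import Counter
from itertools import combinations

# Illustrative keyword co-occurrence counting, the raw material of the
# science-mapping networks visualised in tools like VOSviewer or Gephi.
# Each record's keyword list yields one co-occurrence per keyword pair.

records = [
    ["literature review", "bibliometrics", "content analysis"],
    ["bibliometrics", "science mapping"],
    ["literature review", "bibliometrics"],
]

edges = Counter()
for keywords in records:
    for pair in combinations(sorted(set(keywords)), 2):
        edges[pair] += 1

for (a, b), weight in edges.most_common(3):
    print(f"{a} -- {b}: {weight}")
```

The weighted pairs form the edge list of a co-word network; clustering that network is what reveals the thematic structure of a research field.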

  • Quantitative analysis
  • Qualitative analysis
  • Bibliometric analysis
  • Science mapping



Author information

Authors and affiliations

Catholic University of Brasília, Brasília, DF, 71966-700, Brazil

Eduardo Amadeu Dutra Moresi

University of Aveiro, 3810-193, Aveiro, Portugal

Isabel Pinho & António Pedro Costa


Corresponding author

Correspondence to Eduardo Amadeu Dutra Moresi .

Editor information

Editors and affiliations

Department of Education and Psychology, University of Aveiro, Aveiro, Portugal

António Pedro Costa

António Moreira

Department of Didactics, Organization and Research Methods, University of Salamanca, Salamanca, Spain

Maria Cruz Sánchez‑Gómez

Adventist University of Africa, Nairobi, Kenya

Safary Wa-Mbaleka


Copyright information

© 2022 The Author(s), under exclusive license to Springer Nature Switzerland AG

About this paper

Cite this paper

Moresi, E.A.D., Pinho, I., Costa, A.P. (2022). How to Operate Literature Review Through Qualitative and Quantitative Analysis Integration?. In: Costa, A.P., Moreira, A., Sánchez‑Gómez, M.C., Wa-Mbaleka, S. (eds) Computer Supported Qualitative Research. WCQR 2022. Lecture Notes in Networks and Systems, vol 466. Springer, Cham. https://doi.org/10.1007/978-3-031-04680-3_13

Download citation

DOI : https://doi.org/10.1007/978-3-031-04680-3_13

Published : 05 May 2022

Publisher Name : Springer, Cham

Print ISBN : 978-3-031-04679-7

Online ISBN : 978-3-031-04680-3

eBook Packages: Intelligent Technologies and Robotics (R0)



Duke University Libraries

Literature Reviews

  • Types of reviews
  • Getting started

Types of reviews and examples

Choosing a review type

  • 1. Define your research question
  • 2. Plan your search
  • 3. Search the literature
  • 4. Organize your results
  • 5. Synthesize your findings
  • 6. Write the review
  • Thompson Writing Studio
  • Need to write a systematic review?



Overview of types of literature reviews


  • Literature (narrative)
  • Rapid
  • Umbrella
  • Scoping / Evidence map
  • Systematic
  • Meta-analysis
Literature (narrative) review

Characteristics:

  • Provides examination of recent or current literature on a wide range of subjects
  • Varying levels of completeness / comprehensiveness, non-standardized methodology
  • May or may not include comprehensive searching, quality assessment or critical appraisal

Mitchell, L. E., & Zajchowski, C. A. (2022). The history of air quality in Utah: A narrative review.  Sustainability ,  14 (15), 9653.  https://doi.org/10.3390/su14159653

Rapid review

  • Assessment of what is already known about an issue
  • Similar to a systematic review but within a time-constrained setting
  • Typically employs methodological shortcuts, increasing risk of introducing bias, includes basic level of quality assessment
  • Best suited for issues needing quick decisions and solutions (i.e., policy recommendations)

Learn more about the method:

Khangura, S., Konnyu, K., Cushman, R., Grimshaw, J., & Moher, D. (2012). Evidence summaries: the evolution of a rapid review approach.  Systematic reviews, 1 (1), 1-9.  https://doi.org/10.1186/2046-4053-1-10

Virginia Commonwealth University Libraries. (2021). Rapid Review Protocol .

Quarmby, S., Santos, G., & Mathias, M. (2019). Air quality strategies and technologies: A rapid review of the international evidence.  Sustainability, 11 (10), 2757.  https://doi.org/10.3390/su11102757

Umbrella review

  • Compiles evidence from multiple reviews into one document
  • Often defines a broader question than is typical of a traditional systematic review.

Choi, G. J., & Kang, H. (2022). The umbrella review: a useful strategy in the rain of evidence.  The Korean Journal of Pain ,  35 (2), 127–128.  https://doi.org/10.3344/kjp.2022.35.2.127

Aromataris, E., Fernandez, R., Godfrey, C. M., Holly, C., Khalil, H., & Tungpunkom, P. (2015). Summarizing systematic reviews: Methodological development, conduct and reporting of an umbrella review approach. International Journal of Evidence-Based Healthcare , 13(3), 132–140. https://doi.org/10.1097/XEB.0000000000000055

Rojas-Rueda, D., Morales-Zamora, E., Alsufyani, W. A., Herbst, C. H., Al Balawi, S. M., Alsukait, R., & Alomran, M. (2021). Environmental risk factors and health: An umbrella review of meta-analyses.  International Journal of Environmental Research and Public Health ,  18 (2), 704.  https://doi.org/10.3390/ijerph18020704

Scoping review / Evidence map

  • Main purpose is to map out and categorize existing literature and identify gaps in the literature
  • Search comprehensiveness determined by time/scope constraints, could take longer than a systematic review
  • No formal quality assessment or critical appraisal

Learn more about the methods:

Arksey, H., & O'Malley, L. (2005) Scoping studies: towards a methodological framework.  International Journal of Social Research Methodology ,  8 (1), 19-32.  https://doi.org/10.1080/1364557032000119616

Levac, D., Colquhoun, H., & O’Brien, K. K. (2010). Scoping studies: Advancing the methodology. Implementation Science: IS, 5, 69. https://doi.org/10.1186/1748-5908-5-69

Miake-Lye, I. M., Hempel, S., Shanman, R., & Shekelle, P. G. (2016). What is an evidence map? A systematic review of published evidence maps and their definitions, methods, and products.  Systematic reviews, 5 (1), 1-21.  https://doi.org/10.1186/s13643-016-0204-x

Example:

Rahman, A., Sarkar, A., Yadav, O. P., Achari, G., & Slobodnik, J. (2021). Potential human health risks due to environmental exposure to nano-and microplastics and knowledge gaps: A scoping review.  Science of the Total Environment, 757 , 143872.  https://doi.org/10.1016/j.scitotenv.2020.143872

Systematic review

  • Seeks to systematically search for, appraise, and synthesize research evidence
  • Adheres to strict guidelines, protocols, and frameworks
  • Time-intensive; often takes months to a year or more to complete
  • The most commonly referenced type of evidence synthesis; sometimes mistakenly used as a blanket term for other review types

Gascon, M., Triguero-Mas, M., Martínez, D., Dadvand, P., Forns, J., Plasència, A., & Nieuwenhuijsen, M. J. (2015). Mental health benefits of long-term exposure to residential green and blue spaces: a systematic review.  International Journal of Environmental Research and Public Health ,  12 (4), 4354–4379.  https://doi.org/10.3390/ijerph120404354

Meta-analysis

  • Statistical technique for combining the results of quantitative studies to provide a more precise estimate of an overall effect
  • Aims for exhaustive, comprehensive searching
  • Quality assessment may determine inclusion/exclusion criteria
  • May be conducted independently or as part of a systematic review

Berman, N. G., & Parker, R. A. (2002). Meta-analysis: Neither quick nor easy. BMC Medical Research Methodology , 2(1), 10. https://doi.org/10.1186/1471-2288-2-10

Hites R. A. (2004). Polybrominated diphenyl ethers in the environment and in people: a meta-analysis of concentrations.  Environmental Science & Technology ,  38 (4), 945–956.  https://doi.org/10.1021/es035082g
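The pooling idea above can be sketched in a few lines; this is a generic fixed-effect, inverse-variance illustration with invented numbers, not an analysis from any study cited here:

```python
# Hypothetical illustration of the core meta-analysis computation:
# inverse-variance, fixed-effect pooling of study effect sizes.
def pooled_effect(effects, standard_errors):
    """Combine study effects into a single weighted estimate.

    Each study is weighted by the inverse of its variance, so more
    precise studies contribute more to the pooled result.
    """
    weights = [1.0 / se ** 2 for se in standard_errors]
    pooled = sum(w * e for w, e in zip(weights, effects)) / sum(weights)
    pooled_se = (1.0 / sum(weights)) ** 0.5
    return pooled, pooled_se

# Three made-up studies: effect sizes with their standard errors.
effects = [0.30, 0.10, 0.25]
ses = [0.10, 0.05, 0.20]
est, se = pooled_effect(effects, ses)
print(round(est, 3), round(se, 3))
```

Note that the pooled standard error comes out smaller than any single study's, which is the sense in which the combined estimate is "more precise."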

Flowchart of review types

  • Review Decision Tree - Cornell University For more information, check out Cornell's review methodology decision tree.
  • LitR-Ex.com - Eight literature review methodologies Learn more about 8 different review types (incl. Systematic Reviews and Scoping Reviews) with practical tips about strengths and weaknesses of different methods.
  • Last Updated: Feb 15, 2024 1:45 PM
  • URL: https://guides.library.duke.edu/lit-reviews


  • Systematic review
  • Open access
  • Published: 12 February 2024

Exploring the role of professional identity in the implementation of clinical decision support systems—a narrative review

  • Sophia Ackerhans   ORCID: orcid.org/0009-0005-9269-6854 1 ,
  • Thomas Huynh 1 ,
  • Carsten Kaiser 1 &
  • Carsten Schultz 1  

Implementation Science volume  19 , Article number:  11 ( 2024 )


Clinical decision support systems (CDSSs) have the potential to improve quality of care, patient safety, and efficiency because of their ability to perform medical tasks in a more data-driven, evidence-based, and semi-autonomous way. However, CDSSs may also affect the professional identity of health professionals. Some professionals might experience these systems as a threat to their professional identity, as CDSSs could partially substitute clinical competencies, autonomy, or control over the care process. Other professionals may experience an empowerment of their role in the medical system. The purpose of this study is to uncover the role of professional identity in CDSS implementation and to identify core human, technological, and organizational factors that may determine the effect of CDSSs on professional identity.

We conducted a systematic literature review and included peer-reviewed empirical studies from two electronic databases (PubMed, Web of Science) that reported on key factors to CDSS implementation and were published between 2010 and 2023. Our explorative, inductive thematic analysis assessed the antecedents of professional identity-related mechanisms from the perspective of different health care professionals (i.e., physicians, residents, nurse practitioners, pharmacists).

One hundred thirty-one qualitative, quantitative, or mixed-method studies from over 60 journals were included in this review. The thematic analysis found three dimensions of professional identity-related mechanisms that influence CDSS implementation success: perceived threat or enhancement of professional control and autonomy, perceived threat or enhancement of professional skills and expertise, and perceived loss or gain of control over patient relationships. At the technological level, the most common issues were the system’s ability to fit into existing clinical workflows and organizational structures, and its ability to meet user needs. At the organizational level, time pressure and tension, as well as internal communication and involvement of end users were most frequently reported. At the human level, individual attitudes and emotional responses, as well as familiarity with the system, most often influenced the CDSS implementation. Our results show that professional identity-related mechanisms are driven by these factors and influence CDSS implementation success. The perception of the change of professional identity is influenced by the user’s professional status and expertise and improves over the course of implementation.

This review highlights the need for health care managers to evaluate perceived professional identity threats to health care professionals across all implementation phases when introducing a CDSS and to consider their varying manifestations among different health care professionals. Moreover, it highlights the importance of innovation and change management approaches, such as involving health professionals in the design and implementation process to mitigate threat perceptions. We provide future areas of research for the evaluation of the professional identity construct within health care.


Contributions to the literature

We provide a comprehensive literature review and narrative synthesis of the role of professional identity in CDSS implementation among diverse health care professionals and identify human, technological, and organizational determinants that influence professional identity and implementation.

The review shows that a perceived threat to professional identity plays a significant role in explaining failures of CDSS implementation. As such, our study highlights the need to recognize significant challenges related to professional identity in the implementation of CDSS and similar technologies. A better understanding and awareness of individual barriers to CDSS implementation among health professionals can promote the diffusion of such data-driven tools in health care.

This narrative synthesis maps, interconnects, and reinterprets existing empirical research and provides a foundation for further research to explore the complex interrelationships and influences of perceived professional identity-related mechanisms among health care professionals in the context of CDSS implementations.

Health care organizations increasingly implement clinical decision support systems (CDSSs) due to rising treatment costs and health care professional staff shortages [ 1 , 2 ]. CDSSs provide passive and active referential information, computer-based order sets, reminders, alerts, and patient-specific data to health care professionals at the point of care by matching patient characteristics to a computerized knowledge base [ 1 , 3 , 4 ]. These systems complement existing electronic health record (EHR) systems [ 5 ] and support various functional areas of medical care, such as preventative health, diagnosis, therapy, and medication [ 6 , 7 ]. Research has shown that CDSSs can improve patient safety and quality of care [ 8 , 9 , 10 ] by preventing medication errors and enhancing decision-making quality [ 11 ]. However, despite their potential benefits, their successful implementation into the clinical workflow remains low [ 1 , 12 ]. To facilitate CDSS acceptance and minimize user resistance, it is crucial to understand the factors affecting implementation success and identify the sources of resistance among the users [ 1 , 13 , 14 ].

In the health care innovation management and implementation science literature, a range of theoretical approaches have been used to examine the implementation and diffusion of health care information technologies. Technology acceptance theories focus on key determinants of individual technology adoption, such as ease of use , perceived usefulness or performance expectancy of the technology itself [ 15 , 16 , 17 ]. Organizational theories emphasize the importance of moving beyond an exclusive focus on the acceptance of technology by individuals. Instead, they advocate for examining behaviors and decisions with a focus on organizational structures and processes, cultural and professional norms, and social and political factors such as policies, laws, and regulations [ 18 , 19 ]. Other studies analyze the implementation of new technologies in health care from a behavioral theory perspective [ 20 ] and propose frameworks to explain how and why resistances emerge among users, which may have cognitive, affective, social, or environmental origins [ 13 , 21 , 22 ]. For example, the Theoretical Domains Framework has been applied to the behavior of health care professionals and serves as the basis for studies identifying influences on the implementation of new medical technologies, processes, or guidelines [ 21 , 23 ]. Other, more holistic, implementation frameworks, such as the Nonadoption, Abandonment, Scale-up, Spread and Sustainability framework , identify determinants as part of a complex system to facilitate CDSS implementation efforts across health care settings [ 13 ].

However, these theoretical approaches do not sufficiently take into account the unique organizational and social system in hospitals, which is characterized by strong hierarchies and the socialization of physicians into isolated structures and processes, making CDSS implementation particularly difficult [ 5 , 24 , 25 ]. Health care professionals are considered to have an entrenched professional identity characterized by the acquisition of a high level of expertise and knowledge over a long period of time, as well as by their decision-making authority and autonomy in clinical interventions. Defined roles and structures of different professional groups in medical organizations help to manage the multitude of tasks under high time pressure [ 26 ]. In addition, health care professionals bear a high degree of responsibility in terms of ensuring medical quality and patient well-being [ 27 ]. Changing their professional identity is particularly difficult as they work in organizational contexts with high levels of inertia and long-lived core values based on established practices and routines [ 27 ]. This resilience of health care professionals’ identity makes it particularly difficult to implement new technologies into everyday medical practice [ 28 ].

By integrating existing evidence into an individual physician’s decision-making processes, CDSSs carry the disruptive potential to undermine existing, highly formalized clinical knowledge and expertise and professional decision-making autonomy [ 5 , 24 , 29 , 30 ]. Research has shown that health professionals may perceive new technologies, such as CDSSs, as a threat to their professional identity and draw potential consequences for themselves and their professional community, such as the change of established organizational hierarchies, loss of control, power, status, and prestige [ 31 , 32 , 33 ]. Nevertheless, other studies have shown that health professionals view CDSSs as tools that increase their autonomy over clinical decisions and improve their relationship with patients [ 34 , 35 ]. In addition, these consequences may vary widely by country, professional status, and medical setting. As a result, the use and efficacy of CDSSs differ around the world [ 24 ]. We therefore suggest that a better understanding of the identity-undermining or identity-enhancing consequences of CDSSs is needed. Despite growing academic interest, there is surprisingly scant research on the role of perceived identity threats and enhancements across different professional hierarchies during CDSS implementation and how they relate to other human, technological, and organizational influencing factors [ 5 , 36 , 37 ].

Therefore, the purpose of this narrative review is to analyze the state of knowledge on the individual, technological, and organizational circumstances that lead various health professionals to perceive CDSSs as a threat or enhancement of their professional identity. In doing so, this study takes an exploratory approach and determines human , organizational , and technological factors for the successful implementation of CDSSs. Our study extends the current knowledge of CDSS implementation by deconstructing professional identity-related mechanisms and identifying the antecedents of these perceived threats and enhancements. It addresses calls for research to explore identity theory and social evaluations in the context of new system implementation [ 5 , 38 , 39 ] by aiming to answer the following research questions: What are the human, technological, and organizational factors that lead different health care professionals to perceive a CDSS as a threat or an enhancement of their professional identity? And, how do perceptions of threat and enhancement of professional identity influence CDSS implementation?

This study is designed to guide medical practice, health IT providers, and health policy in their understanding of the mechanisms that lead to conflicts between health professionals’ identity and CDSS implementation. It is intended to identify practices that may support the implementation and long-term use of CDSSs. By narratively merging insights and underlying concepts from existing literature on innovation management, implementation science, and identity theory with the findings of the empirical studies included in this review, we aim to provide a comprehensive framework that can effectively guide further research on the implementation of CDSSs.

Understanding professional identity

Following recent literature, professional identity refers to an individual’s self-perception and experiences as a member of a profession and plays a central role in how professionals interpret and act in their work situations [ 25 , 37 , 40 , 41 , 42 ]. It is closely tied to a sense of belonging to a professional group and the identification with the roles and responsibilities associated with that occupation. Professionals typically adhere to a set of ethical principles and values that are integral to their professional identity and guide their behavior and decision-making. They are expected to have specialized knowledge and expertise in their field. In return, they are granted a high degree of self-efficacy, autonomy, and ability to act in carrying out these tasks [ 25 , 43 ]. In addition, professionals make active use of their identities in order to define and change situations. Self-continuity and self-esteem encourage these professionals to align their standards of identification with the perceptions of others and themselves [ 44 ]. Many professions have formal organizations or associations that promote and regulate their shared professional identity [ 45 ]. Membership in these associations, adherence to their standards and to a shared culture within their field, including common rituals, practices, and traditions, may reinforce their professional identity [ 33 , 36 , 45 ].

Studies in the field of health care innovation management and implementation science reported a number of professional identity conflicts that shape individual behavioral responses to change and innovation [ 5 , 24 , 33 , 36 , 45 , 46 ]. The first set of conflicts relates to individual factors and expectations, such as their personality traits, cognitive style, demographics, and education. For example, user perception of a new technology can be influenced by professional self-efficacy, which can be described as perceived feeling of competence, control and ability to perform [ 47 ]. Studies have shown that innovations with a negative impact on individual’s sense of efficacy tend to be perceived as threatening, resulting in a lower likelihood of successful implementation. Users who do not believe in their ability to use the new system felt uncomfortable and unconfident in the workplace and were more likely to resist the new system [ 48 , 49 ].

The second set of studies relates professional identity to sense-making, which involves the active process of acquiring knowledge and comprehending change based on existing professional identities as frames of reference [ 50 ]. For example, Jensen and Aanestad [ 51 ] showed that health care professionals endorsed the implementation of an EHR system only if it was perceived to be congruent with their own role and the physician’s practice, rather than focusing on functional improvements that the system could have provided. Bernardi and Exworthy [ 52 ] found that health care professionals with hybrid roles, bearing both clinical and managerial responsibilities, use their social position to convince health care professionals to adopt medical technologies only when they address the concerns of health care professionals.

The final set of studies addresses struggles related to a disruption of structures and processes that lead to the reorganization of the health professions [ 53 , 54 ] and the introduction of new professional logics [ 55 ]. These can result in threat perceptions from the perspective of health professionals regarding their competence, autonomy, and control over clinical decisions and outcomes. Accordingly, the perception of new systems not only influences their use or non-use, but implies a dynamic interaction with the professional identity of the users [ 56 ]. CDSSs may be perceived as deskilling or as a skill enhancement by reducing or empowering the responsibilities of users and thereby as compromising or enhancing the professional role, autonomy, and status.

Taking the classical theoretical frameworks for the evaluation of health information systems [ 57 ] and this understanding of professional identity as a starting point, our narrative review identifies, reinterprets, and interconnects the key factors to CDSS implementation related to threats or enhancement of health professionals’ identity in different health care settings.

We conducted a comprehensive search of the Web of Science and PubMed databases to identify peer-reviewed studies on CDSS implementations published between January 2010 and September 2023. An initial review of the literature, including previous related literature reviews, yielded the key terms to be used in designing the search strings [ 1 , 49 ]. We searched for English articles whose titles, abstracts, or keywords contained at least one of the search terms, such as “clinical decision support system,” “computer physician order entry,” “electronic prescribing,” or “expert system.” To ensure that the identified studies relate to CDSS implementation, usage, or adoption from the perspective of health care organizations and health care professionals, we included, for example, the words “hospital,” “clinic,” “medical,” and “health.” The final search strings are provided in Table S 1 (Additional file 1). We obtained a total of 6212 articles. From this initial list, we removed 1461 duplicates, 6 non-retrievable studies, and 1 non-English article. This left us with a total of 4744 articles for the screening of the titles, abstracts, and full texts. Three authors independently reviewed these articles to identify empirical papers which met the following inclusion criteria: (a) evaluated a CDSS as a study object, (b) examined facilitating factors or barriers impacting either CDSS adoption, use, or implementation, (c) were examined from the perspective of health care professionals or medical facilities, and (d) represented an empirical study. We identified 220 studies that met our inclusion criteria. The three authors independently assessed the methodological quality of these 220 selected studies using the Mixed Methods Appraisal Tool (MMAT), version 2018 [ 58 ]. The MMAT can be used for the qualitative evaluation of five different study designs: qualitative, quantitative randomized controlled, quantitative non-randomized, quantitative descriptive, and mixed methods studies.
It is a qualitative scale that evaluates the aim of a study, its adequacy to the research question, the methodology used, the study design, participant recruitment, data collection, data analysis, presentation of findings, and the discussion and conclusion sections of the article [ 59 ]. One hundred thirty-one studies were included in the review after excluding studies based on the MMAT criteria, primarily due to a lack of a defined research question or a mismatch between the research question and the data collected [ 58 ]. Any disagreement about the inclusion of a publication between the authors was resolved through internal discussion. Figure  1 summarizes our complete screening process.

Fig. 1 Overview of article screening process
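The Boolean query construction described in the methods above can be sketched programmatically. This is a hypothetical illustration only; the term lists below are the examples quoted in the text, and the authors' exact search strings appear in their Table S1:

```python
# Illustrative sketch: assemble a Boolean database query from two term
# groups (system terms OR'd together, AND'd with setting terms).
system_terms = [
    "clinical decision support system",
    "computer physician order entry",
    "electronic prescribing",
    "expert system",
]
setting_terms = ["hospital", "clinic", "medical", "health"]

def build_query(terms_a, terms_b):
    """Join each term list with OR, then require both groups with AND."""
    group_a = " OR ".join(f'"{t}"' for t in terms_a)
    group_b = " OR ".join(f'"{t}"' for t in terms_b)
    return f"({group_a}) AND ({group_b})"

query = build_query(system_terms, setting_terms)
print(query)
```

Generating the string from the term lists keeps the query reproducible and easy to adapt when terms are added or databases differ in syntax.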

The studies included in the review were then subject to a qualitative content analysis procedure [ 60 , 61 ] using MAXQDA, version 2020. For data analysis, we initially followed the principle of “open coding” [ 62 ]. We divided the studies equally among the three authors, and through an initial, first-order exploratory analysis, we identified numerous codes, which were labeled with key terms from the studies. Based on a preliminary literature review, we then developed a reference guide with the main categories of classic theoretical frameworks for health information systems implementation (human, technology, organization) [ 57 ] and further characteristics of the study. Second-order categories were obtained through axial coding [ 62 ], which reduced the number of initial codes but also revealed concepts that could not be mapped to these three categories (i.e., perceived threat to professional autonomy and control). This allowed us to identify concepts related to professional identity. Subsequently, a subset of 10% of the studies was randomly selected and coded by a second coder independently of the first coder [ 63 ]. Then, an inter-coder reliability analysis was performed between the samples of coder 1 and coder 2. For this purpose, Cohen’s kappa, a measure of agreement between two independent categorical samples, was calculated. Cohen’s kappa showed that there was a high agreement in coding ( k  = 0.8) [ 64 ]. We coded for the following aspects: human, organizational, technological, professional identity factor conceptualizations, dependent variables, study type and type of data, time-frame, clinician type sample, description of the CDSS, implementation phase [ 65 ], target area of medical care [ 7 ], and applied medical specialty. Tables 2 , 3 , 4 , 5 , 6  and 7 and Table S 2 provide detailed data as per the key coding categories.
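The inter-coder reliability check described above (Cohen's kappa, k = 0.8) can be computed from two coders' category labels. A minimal sketch follows; the coder labels are invented for illustration and are not the authors' data:

```python
from collections import Counter

def cohens_kappa(labels_a, labels_b):
    """Cohen's kappa: agreement between two coders, corrected for chance."""
    n = len(labels_a)
    observed = sum(a == b for a, b in zip(labels_a, labels_b)) / n
    freq_a, freq_b = Counter(labels_a), Counter(labels_b)
    # Chance agreement: probability that both coders independently pick
    # the same category, summed over all categories.
    expected = sum(freq_a[c] * freq_b.get(c, 0) for c in freq_a) / n ** 2
    return (observed - expected) / (1 - expected)

# Made-up codings of five passages into the review's three dimensions.
coder1 = ["human", "tech", "org", "tech", "human"]
coder2 = ["human", "tech", "org", "human", "human"]
print(round(cohens_kappa(coder1, coder2), 2))
```

Because kappa subtracts the agreement expected by chance, it is a stricter measure than raw percent agreement, which is why it is the conventional statistic for this kind of coding check.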

Descriptive analysis

A total of 131 studies were included in our review. In line with recent reviews of CDSS implementation research [ 6 , 14 , 57 ], the reviewed articles are distributed widely across journals (Table  1 ).

The examined articles were drawn from 69 journals, 55 of which provided only one article. The BMC Medical Informatics and Decision Making and International Journal of Medical Informatics published nearly a third of the included studies, with 67 articles overall in medical informatics journals. There are additional clusters in medical specialty-related (33), health services, public health, or health care management-related (12), and implementation science-related (2) journals. The journals’ 5-year impact factor measured in 2022 ranged between 2.9 and 9.7. Of our included articles, 67 were published between 2010 and 2016, while 64 were published between 2017 and 2023.

The review includes a mixture of qualitative ( n  = 61), quantitative ( n  = 40), and mixed methods ( n  = 30) studies. Unless otherwise noted, studies indicated as qualitative studies in Table S 2 involved interviews and quantitative studies involved surveys. Interviews with individual health care professionals were the most common data collection method used ( n  = 38), followed by surveys ( n  = 58), and focus group interviews ( n  = 25). Most of the interviews were conducted with physicians ( n  = 60) and nursing professionals ( n  = 23). The studies were performed at various sites and specialties, with primary care settings ( n  = 35), emergency ( n  = 11), and pediatric ( n  = 6) departments being represented most frequently. Forty-five articles researched physicians exclusively and 10 covered nurse practitioners as respondents in their sample. Four studies surveyed pharmacists, one study surveyed medical residents as a single target group, and 20 articles included clinical leaders in addition to clinicians in their sample. Twenty-eight studies were longitudinal, although studying system implementation at one point in time will insufficiently explain the expected impact of the novel system on, e.g., the organizational performance outcomes over time [ 67 ]. The studies collected data in 29 different countries, with the most common being the USA ( n  = 41), the UK ( n  = 18), and the Netherlands ( n  = 11).

Included studies were additionally coded according to the implementation phase in which the study was conducted (i.e., exploration, adoption/preparation, implementation, sustainment phase) [ 65 ]. In 43 of the included studies, the analysis was conducted during the exploration phase, i.e., during a clinical trial or an exploration of the functionality and applicability of a CDSS. Nineteen studies were conducted in the active implementation phase, 15 studies in an implementation adoption or preparation phase, and 46 studies in a sustainment phase (i.e., implementation completed and long-term system use). The remaining studies involved an investigation across multiple implementation phases.

Following Berner’s study [ 7 ], we classified the examined CDSSs of the included studies according to specific target areas of care. As such, in 93 articles, CDSSs for planning or implementing treatment were studied. Thirty-seven studies examined CDSSs whose goal was prevention or preventive care screening. In 31 studies, the functional focus of the CDSSs was to provide specific suggestions for potential diagnoses that match a patient’s symptoms. Seventeen CDSSs of the included studies focused on follow-up management , 15 studies examined CDSSs for hospital and provider efficiency care plans , and 12 focused on cost reduction and improved patient convenience (i.e., through duplicate testing alerts). Most CDSSs supported medication-related decisions and processes, such as prescribing, administration, and monitoring for effectiveness and adverse effects ( n  = 30). An overview of the characteristics of the included studies can be found in Table S 2 .

In the 131 included studies, we identified 1219 factors, which we categorized into human, technological, organizational, and professional identity threat and enhancement-related factors to implementation (Table  2 ). The total number of factors is reported in Table  2 for each of our framework’s dimensions and for each of our inferred factor sub-categories. The following section delves into the elements of our framework (Fig.  1 ), starting with the most commonly identified factors. Finally, the CDSS implementation outcomes are described.

Technological factors

A total of 532 technological factors were identified in 125 of the included studies. In 21 studies, technological factors were related to study participants’ perceptions of professional identity threat, while in 9 studies these factors were related to perceived professional identity enhancements (Table  3 ). At the technological level, perceptions of threat to professional identity were associated with factors related to the nature of the clinical purpose of the CDSS and to system quality, such as compatibility of the CDSS with current clinical workflows [ 68 , 69 , 70 ], customization flexibility, intuitive navigation [ 71 , 72 , 126 ], and the scientific evidence and transparency underlying the decision outcome [ 73 , 74 , 191 ]. Exemplary quotes were chosen for their clarity and representativeness of the overall themes.

The reviewed studies focused primarily on medication-oriented CDSSs. The relevance, accuracy, and transparency of the recommendations and of their underlying scientific evidence were found to be crucial for acceptance and use. “Irrelevant, inaccurate, excessive, and misleading alerts” were associated with alert fatigue and a lack of trust [ 72 , 75 , 76 , 127 , 144 ]. Some senior physicians preferred the provision of evidence-based guidelines that would reinforce their knowledge, while others advised junior physicians to override the CDSS recommendations in favor of their own instructions. Residents, however, tended to follow CDSS recommendations and used them to bolster their confidence in clinical decisions [ 69 , 77 , 128 ]. Physicians had diverse perceptions of the scientific evidence supporting the CDSS recommendations. Some regarded it as abstract or useless information that was not applicable to clinical decision making in practice. These physicians preferred a more conventional approach of learning from the “eminences” of their discipline while pragmatically engaging in the “art and craft” of medicine, and perceived CDSSs as increasingly undermining clinical work and expertise among health professionals [ 24 ]. In some studies examining AI (artificial intelligence)-based CDSSs, the explainability and transparency of the CDSS recommendations played a major role in maintaining control over the therapeutic process [ 78 , 129 ].

Many studies indicated that the introduction of a CDSS was perceived as a disruptive change to established clinical workflows and practices [ 12 , 79 , 80 , 81 , 167 ]. The fit of the CDSS with standardized clinical workflows was seen as critical to its implementation. Senior clinicians preferred their own workflows and protocols for complex patient cases [ 82 ]. Geriatricians, for example, considered CDSS recommendations inappropriate for their clinical workflows because geriatric patients are typically multi-morbid and require individualized care [ 77 ]. Intuitiveness and interactivity of the CDSS were found to reduce the perceived threat to professional identity [ 5 ], and customization and adjustment of alerts based on specialty and individual preferences were perceived to increase competence [ 10 , 127 , 130 ]. Physicians considered that successful implementation of a CDSS depends on its integration into existing clinical processes and routine activities and requires collaboration as well as knowledge sharing among experienced professionals [ 24 ].

Organizational factors

A total of 287 organizational factors were identified in 104 of the included studies. In 17 studies, organizational factors were related to study participants’ perceptions of professional identity threat, while in 7 studies these factors were related to perceived professional identity enhancements (Table  4 ). In the included studies, organizational factors influencing professionals’ perceived threat to their identity were studied from multiple perspectives, such as internal collaboration and communication [ 145 , 178 ], (top) managers’ leadership and support [ 79 , 83 ], innovation culture and psychological safety [ 24 ], organizational silos and hierarchical boundaries [ 69 , 70 ], and the relevance of social norms and endorsement by professional peers [ 161 ].

The empirical studies showed that innovation culture plays a critical role in driving change in health care organizations. Resistance to the implementation of CDSSs may stem from a lack of organizational support as well as from physicians’ desire to maintain the status quo in health care delivery [ 24 , 70 , 75 ]. Several key factors influenced implementation in this regard, including appropriate timing of the implementation project, user involvement, and dissemination of understandable information through appropriate communication channels [ 70 ]. Some studies showed that an innovation culture characterized by interdependence and cooperation promotes social interaction (i.e., a psychologically safe environment), which in turn facilitates problem-solving and learning related to CDSS use [ 193 , 194 ]. For example, nursing practitioners recognized the potential of CDSSs for collaboration in complex cases, which had a positive impact on team and organizational culture development [ 24 ].

Supportive leadership (e.g., by department leaders) was found to be critical to successful CDSS implementation. This includes providing the necessary resources, such as time and space for training, technical support, and user involvement in the implementation process, all of which were negatively associated with perceived loss of control and autonomy [ 11 , 69 , 79 , 83 , 84 , 145 , 174 ]. Involving not only senior physicians but also nursing and paramedical leaders increased the legitimacy of CDSSs throughout the professional hierarchy and helped to overcome the negative effect of low status on psychological safety by flattening hierarchical distances [ 24 , 70 , 72 ]. In contrast, imposing a CDSS on users led to resistance. Some physicians and nurses felt that the use of the CDSS was not under their voluntary control (i.e., “we have no choice”, “it’s not an option to not use it”) because these systems had become “as essential as … carrying a pen and a stethoscope,” with physicians feeling that they now “are reliant on the CDSS” [ 10 ]. In other cases, top-down decisions led to the resolution of initial resistance toward the CDSS [ 167 ]. Overall, committed leadership that involved users and transcended professional silos and hierarchies was critical to successful CDSS implementation. In this context, an established hierarchy and a culture of physician autonomy impeded communication, collaboration, and learning across professional and disciplinary boundaries [ 54 , 195 , 196 ]. A well-designed CDSS minimized professional boundaries by, for example, empowering nurses and paramedics to make independent treatment decisions [ 8 , 180 ]. CDSSs thus provided structured means for nonmedical professionals to receive support in their clinical decision-making that was otherwise reserved for professionals with higher authority [ 34 ].
Since CDSSs allow widespread access to scientific evidence, they often led to nursing practitioners’ control or oversight of medical decisions, putting junior physicians in an inferior position, and thus providing an occasion to renegotiate professional boundaries and to dispute the distribution of power [ 24 , 77 ].

In addition, the provision of sufficient training and technical support was essential to ensure that physicians and nursing practitioners felt confident in using the CDSS and to increase their satisfaction with the system [ 77 , 85 ]. Embedding new CDSSs into routine practice required communication and collaboration between professionals with clinical expertise and those with IT expertise [ 86 , 145 , 178 ]. Involving physicians and nursing practitioners in decision-making processes increased their willingness to change their long-standing practice patterns and embrace the newly introduced CDSS [ 5 , 10 ]. Facilitating CDSS uptake therefore required legitimizing the system’s designers and the data sources used [ 24 ]. Similarly, the success or failure of CDSS implementation depended on the ability of the new system to align with existing clinical processes and routine activities. Successful adoption was often at risk when the implementation was too far removed from the reality of clinical practice because those responsible for designing the CDSS poorly understood the rationale for designing the system in a particular way [ 145 ].

In addition, some studies indicated that resistance was overcome by communicating the benefits of the CDSS through contextual activities and by providing opportunities to experience the system firsthand. Sharing positive implementation experiences and fostering discussions among actual and potential users could bridge the gap between perceptions and actual use [ 145 , 146 ]. In this regard, endorsement from “respected” and “passionate” internal change promoters, such as expert peers, was seen as key to overcoming user resistance [ 82 ]. Confirmation from clinical experts that the new system improves efficiency and quality of care was essential for general system acceptance [ 154 ]. Thus, social influence played an important role, especially in the initial phase of system use, and decreased as users gained experience with the CDSS [ 182 ].

Human factors

A total of 197 human factors were identified in 99 of the included studies. In 17 studies, human factors were related to study participants’ perceptions of professional identity threat, while in 6 studies these factors were related to perceived professional identity enhancements. Table 5 summarizes the key findings from the included articles, which relate to three factors: individual attitudes and emotional responses, experience and familiarization with the CDSS, and trust in the CDSS and its underlying source.

The empirical studies report that physicians often failed to fully utilize the features of CDSSs, such as protocols, reminders, and charting templates, because they lacked experience and familiarization with the CDSS [ 3 , 79 , 87 , 127 ]. In addition to insufficient training and time constraints, limited IT skills were reported as the main reasons [ 83 , 87 , 147 , 185 ]. As a result, users interacted with the CDSS in unintended ways, leading to data entry errors and potential security concerns [ 88 ]. According to Mozaffar et al. [ 131 ], this includes physicians’ tendency to enter incorrect data or select the wrong medication due to misleading data presentations in the system. Inadequate IT skills and a lack of user training also contributed to limited understanding of the full functionality of CDSSs. Physicians interviewed in one study identified a lack of knowledge about basic features of a CDSS, including alerts, feedback, and customization options, as a major implementation barrier [ 127 ]. Some studies reported that the lack of system customization to meet users’ personal preferences and the lack of system training weakened their confidence in the system and compromised their clinical decision-making autonomy [ 10 , 83 , 89 , 90 , 127 , 183 ].

Some studies indicated trust issues among physicians and nursing practitioners regarding the credibility of the decision-making outcome [ 132 , 154 ], the accuracy of the algorithm behind the CDSS recommendations [ 146 ], and the timeliness of the medical guidelines in the CDSS [ 127 ]. Senior physicians appreciated medication-related alerts but felt that their own decision-making autonomy regarding drug selection and dosing was compromised by the CDSS [ 74 ]. However, they tended to use the CDSS as a teaching tool for their junior colleagues, advising them to consult it when in doubt [ 77 , 128 ]. In some cases, this led to junior physicians accepting CDSS suggestions, such as computer-generated dosages, without independent verification [ 128 , 144 , 154 ].

Several studies indicated that the introduction of a CDSS elicited different individual attitudes and emotional responses. More tenured health care professionals were “frightened” when confronted with a new CDSS; others perceived the CDSS as a “necessary evil” or an “unwelcome disruption” [ 81 ], leading to skepticism, despair, and anxiety [ 3 , 145 , 167 ]. Younger physicians, on the other hand, tended to be “thrilled” and embraced the technology’s benefits [ 84 , 147 , 167 ]. Motivation, enthusiasm, and a “can do” attitude toward learning and skill development positively influenced engagement with the CDSS [ 11 , 83 , 84 , 145 , 184 ].

The role of professional identity threat and enhancement perceptions in CDSS implementation

Overall, we found 90 factors in 65 included studies related to perceptions of professional identity threat among the study participants. Forty-four factors in 34 included studies were associated with perceived professional identity enhancements. We identified three key dimensions of professional identity threat and enhancement perceptions among health care professionals impacting CDSS implementation along different implementation phases [ 197 ]. Table 6 contains exemplary quotes illustrating the findings.

A number of physicians perceived CDSSs as a fundamental threat to professional control and autonomy, leading to a potential deterioration of professional clinical judgment [ 30 , 69 , 77 , 154 , 155 ]. Most nurse practitioners, on the other hand, experienced a shift in decision-making power, providing an occasion to renegotiate professional boundaries in favor of health care professionals with lower levels of expertise [ 24 ]. Thus, nurses associated the implementation of a CDSS with enhanced professional control and autonomy in the performance of tasks [ 34 , 155 , 169 ]. Pharmacists often advocated for medication-related CDSSs, which in turn increased physician dependency and resistance to new tasks [ 12 , 84 , 178 ]. The latter was a consequence of physicians’ increasing reliance on pharmacists for complex drug therapies, as physicians had to relinquish some decision-making authority to pharmacists through the restructuring of decision-making processes [ 74 ].

Senior physicians frequently expressed concerns about overreliance on CDSSs and a potential erosion of expertise, which they believed posed patient safety risks [ 10 , 24 , 75 , 89 , 155 ]. They complained that overreliance on CDSS recommendations interfered with their cognitive processes. For example, in medication-related CDSSs, clinical data such as treatment duration, units of measure, or usual doses are often based on pharmacy defaults that may not be appropriate for certain patients. According to these physicians, their junior colleagues might not double-check recommended medication doses and treatment activities, leading to increased patient safety risk [ 131 ]. In another study, general practitioners expressed concerns about the deskilling of future physicians through CDSSs. Some CDSSs required a high level of clinical expertise, skill, and knowledge regarding the correct entry of clinical information (e.g., symptoms) for proper support in clinical decisions. Many physicians feared that the use of CDSSs would erode this knowledge and thus allow the CDSS recommendations to lead to incorrect decisions [ 30 ]. This potential loss of skills and expertise was seen as particularly problematic where decision support for medications and e-prescriptions varied from facility to facility: physicians working at multiple institutions who relied on the medication treatment support of the CDSS used at one institution reported difficulties making correct clinical decisions at the other [ 154 ]. From the reviewed articles, it appeared that senior physicians perceived CDSSs as an intrusion into their professional role and objected to their expertise and time being misused for “data entry work” [ 10 ]. They enjoyed the freedom to decide what to prescribe, when to prescribe it, and whether or not to receive more information about it [ 77 ] and were determined not to “surrender” and “be made to use [the CDSS]” [ 82 ].

In line with physicians’ increasing dependence on pharmacists when using CDSSs for medication treatment, pharmacists used the CDSS to demonstrate their professional skills and to further develop their professional role [ 178 ]. Nurse practitioners were empowered by CDSS guidance to systematically update medications and measurements during their hectic daily clinic routine [ 24 , 91 ], to independently manage more complicated scenarios [ 8 ], and to facilitate their decision-making [ 92 ]. Some physicians stated that CDSS recommendations prompted them to reflect on medication choices more critically than usual and led to more conscious decisions [ 133 ]. Increased professional identity enhancement in terms of skills and expertise was thus often associated with technological factors such as enhanced patient safety, improved efficiency, and quality of care [ 9 ].

Furthermore, physicians strongly associated their professional identity with their central role in the quality of patient care, based on a high level of empathy and trust between physician and patient [ 45 , 195 ]. Their perceived threat to professional identity led to a sense of loss of clinical professionalism and of control over patient relationships [ 162 , 170 ]. CDSS usage was perceived as unprofessional or as disruptive to the power dynamic between physicians and their patients [ 89 , 93 , 171 ]. As a result, they indicated that established personal patient relationships were affected by imposed CDSS use [ 81 ]. Other physicians saw CDSSs as having the potential to enhance patient relationships by providing them with more control over the system and treatment time, facilitating information and knowledge sharing with patients, and building trust between patients and physicians [ 35 , 94 ].

Mapping the perceptions of threat to and enhancement of professional identity among physicians and other health care professionals identified in each study to the implementation phases allowed us to examine how identity perceptions evolved over the course of CDSS implementations. Table 7 assigns the identity perceptions among physicians and other health care professionals to the different implementation phases. The findings illustrate that threat perceptions predominated before and at the beginning of implementation. With steady training, use, and familiarization with the CDSS, the perceived threat to professional identity decreased slightly in the sustainment phase compared with the pre-implementation phase, while perceptions of professional identity enhancement increased. During the exploration phase, physicians in particular perceived the CDSS as undermining their professional identity, and this perception remained relatively constant through the sustainment phase. Other health care professionals, such as nurse practitioners and pharmacists, often changed their perspective over the course of the implementation phases and came to perceive the CDSS as supporting their control, autonomy, and skill enhancement at work.

CDSS implementation outcomes

In total, we identified 93 benefits related to CDSS implementation in the reviewed studies (Table  2 ). The most commonly evaluated benefits were improvements in work efficiency and effectiveness through the use of CDSSs, improvements in patient safety, and improvements in the quality of care. Prevention of prescription and treatment errors was also frequently mentioned. The included studies measured CDSS implementation in various ways, which we classified into seven groups (Table  8 ). Most studies measured or evaluated self-reported interest in, intention or willingness to use, or adoption of the system, followed by self-reported attitude toward CDSSs and both self-reported and objective measures of implementation success. Objective actual use was measured in only 10 studies, self-reported use in seven studies, and self-reported satisfaction and performance of the system in five studies. Both self-reported and objective measures of usefulness and usability were used in one study.

Although we included 40 quantitative studies in our review, only a few of these empirically measured the direct effect of professional identity threat, or of related organizational consequences, on the implementation, adoption, or use of CDSSs. Two studies empirically demonstrated a direct, significant negative relationship between perceived threats to professional autonomy and the intention to use a CDSS [ 5 , 48 ]. Another four studies found empirical evidence of an indirect negative association between threats to professional identity and actual CDSS use. Physicians disagreed with the CDSS recommendations because they perceived insufficient control and autonomy over clinical decision making [ 79 , 88 ] and lacked confidence in the quality of the CDSS and its scientific evidence [ 154 ].

Main findings

The purpose of this narrative review was to identify, reinterpret, and interconnect existing empirical evidence in order to highlight the individual, technological, and organizational factors that contribute to perceptions of professional identity threat and enhancement among clinicians, and their implications for CDSS implementation in health care organizations. Using evidence from 131 reviewed empirical studies, we developed a framework for the engagement of health care professionals by deconstructing the antecedents of professional identity threats and enhancements (Fig. 2 ). Our proposed framework highlights the role of cognitive perceptions and response mechanisms, arising from the professional identity struggles or reinforcements of different individual health care professionals, in the implementation of CDSSs. Our work therefore contributes to the growing literature on perceived identity deterioration with insights into how knowledge-intensive organizations may cope with these threats [ 37 , 45 , 46 ]. We categorized clinicians’ professional identity perceptions into three dimensions: (1) perceived threat to and enhancement of professional control and autonomy, (2) perceived threat to and enhancement of professional skills and expertise, and (3) perceived loss and gain of control over patient relationships. These dimensions influenced CDSS implementation depending on the end user’s change in status and expertise over the course of the different implementation phases. While senior physicians tended to perceive CDSSs as undermining their professional identity across all implementation stages, nurse practitioners, pharmacists, and junior physicians increasingly perceived CDSSs as enhancing their control, autonomy, and clinical expertise. Some physicians, on the other hand, were positive about the support provided by the CDSS in terms of better control of the physician–patient relationship.
In most studies, professional identity incongruence was associated with technological factors, particularly the lack of adaptation of the system to existing clinical workflows and organizational structures (i.e., process routines) and the failure of CDSS functionalities to meet users’ needs. The presence or absence of system usability and intuitive workflow design was also frequently identified as an antecedent of professional identity loss. The other dimensions (i.e., human and organizational factors) were encountered less often in relation to professional identity mechanisms among health care professionals. Only six studies found empirical evidence of an indirect or direct negative relationship between health professionals’ perceived threats to professional identity and the outcomes of CDSS implementation, whereas no study explicitly analyzed the relationship between dimensions of professional identity enhancement and the outcomes of CDSS adoption and implementation.

Fig. 2 A framework for the role of professional identity in CDSS implementation

Interpretations, implications, and applicability to implementation strategies

The results indicate that health care professionals may perceive CDSSs as valuable tools for their daily clinical decision-making that can improve their competence, autonomy, and control over the relationship with the patient and the course of treatment. These benefits are realized when the system is optimally integrated into the clinical workflow, meets users’ needs, and delivers high-quality results. Involving users in design processes, usability testing, and pre-implementation training and monitoring can increase user confidence and trust in the system early in the implementation and lead to greater adoption of the CDSS [ 146 ]. To address trust issues in the underlying algorithm of the CDSS, direct and open communication, transparency about decision-making values, and validation of the CDSS’s clinical evidence are crucial [ 154 ]. CDSS reminders and alerts should be designed to be unobtrusive in order to minimize the perceived loss of autonomy over clinical decisions [ 77 ].

In contrast, the implementation of a CDSS often led to substantial changes in professional identity and was therefore often associated with fear and anxiety. A sense of loss of autonomy and control was linked to lower adoption rates and thus to implementation failure. Cognitive styles, which may be expressed in users’ emotional reactions toward the CDSS, reinforced reluctance to implement and use the system [ 145 , 167 ]. This underscores the importance of finding expert peers and professionals who are motivated and positive toward CDSS adoption and use, and who can communicate and promote the professional appropriateness and benefits of the CDSS to their colleagues [ 82 , 83 , 184 ]. This promotes a focus on the improvements and benefits of the CDSS while maintaining the integrity, perceived autonomy, control, and expertise of physicians and nurses.

Accordingly, the included studies show that health professionals respond to the professional identity threat triggered by CDSS implementation by actively maintaining, claiming, or completely changing their identity [ 39 ], which is consistent with previous studies elaborating on the self-verification of professionals [ 44 ]. For example, physicians delegated routine tasks to other actors to maintain control over the delivery of services and thereby enhance their professional status [ 201 ]. Pharmacists used the introduction of CDSSs for drug treatment to demonstrate their skills to physicians and to further develop their professional role [ 178 ]. Maintaining authority over the clinical workflow without the need for additional relational work with lower-status professionals emerged in our findings as one of the main factors in health care professionals’ CDSS acceptance [ 10 , 12 , 84 , 178 ]. Physicians influenced change processes, such as the implementation of a CDSS, in ways that preserved the status quo of physicians’ responsibilities and practices. They often stated their objective of avoiding increased dependence on lower-status professionals, such as nurses or pharmacists, who were gaining control by using the new CDSS. In addition, CDSS users frequently criticized the systems’ lack of fit with clinical work processes and their inability to replace clinical expertise and knowledge [ 12 , 34 , 77 , 82 ]. The loss of control over the patient–physician relationship also represented a key component of identity undermining through the introduction of CDSSs. Many physicians expressed that their trust-building interaction with patients was eroded by the functionalities of the CDSS [ 81 , 170 ]. The observation that the use of CDSSs saves time in patient therapy and treatment, freeing up time for patients, was rarely expressed [ 12 , 147 ].
This underscores the need to cope with physicians’ strong identification with their professional role, their tendency to preserve the status quo, and their self-defense against technological change during the implementation of CDSSs.

Furthermore, the reviewed studies emphasized the importance of both inter- and intra-professional involvement, collaboration, and communication in health care organizations during CDSS implementation, suggesting that these mechanisms influence the extent and quality of cooperative behavior, psychologically safe environments, and the role adaptation of different professional groups [ 26 , 54 , 55 , 202 ]. Among the studies we reviewed, managerial support and collaboration influenced coordination during CDSS implementation [ 82 , 83 , 174 ], for example by providing usability testing and time for efforts to change the understanding of why and how health care professionals should modify their routine practices [ 74 , 95 ].

Overall, the review shows that consideration of perceived professional identity mechanisms among health care professionals plays an important role when implementing new CDSSs in health care organizations. Perceived threats to and enhancements of professional identity should also be considered and regularly assessed in long-term implementation strategies. These strategies often include methods or techniques to improve the adoption, implementation, and sustainability of a clinical program or practice [ 203 ] and may span from planning (i.e., conducting a local needs assessment, developing a formal implementation plan) to educating (i.e., conducting educational meetings, distributing educational materials) to restructuring professional roles to managing quality (i.e., providing clinical supervision, audit, and feedback) [ 204 , 205 ]. To ensure successful implementation, health care professionals at all hierarchical levels should be involved in the planning and decision-making processes related to CDSS implementation. Continuous feedback loops between health care professionals, IT staff, and implementation managers can help identify unforeseen threats to professional identity and necessary adjustments to the implementation plan. The review found that perceived identity threats particularly need to be addressed among highly specialized physicians to account for their knowledge-intensive skills, expertise, and clinical workflows [ 24 , 96 ]. In addition, the purpose of the CDSS implementation, and information about how it aligns with organizational strategic goals and individual professional development, should be clearly and continuously communicated at all stages of implementation.

Our review also confirms that health care professionals’ perceptions of the effectiveness of CDSSs reinforce the impact of organizational readiness for the ongoing and required transformation of health care [ 17 ]. Comprehensive assessments of the system’s suitability for established or changing clinical workflows and of the technical quality of the CDSS should be prioritized at the beginning of the implementation. Training programs should be developed to help professionals adapt to the new medical systems and to allay fears of a loss of competence or relevance. To mitigate threats to professional identity in the long term, it is necessary to foster an organizational culture of adaptability, learning, and psychological safety, in which it is acceptable to make mistakes and learn from them. In addition, ongoing leadership support and professional development opportunities are critical to ensure that health care professionals continue to adapt their roles and keep pace with technological developments [ 79 , 84 ].

Limitations

A literature review of a large sample of empirical studies has many advantages [ 206 ]. However, some limitations arise from the study design. First, the included studies were mainly conducted in the USA or the UK (see Table S 2 ). The dominance of these two countries may introduce bias, as different cultures may have different implications for CDSS implementation and threat perceptions among health care professionals. Caution is therefore needed in generalizing the findings on the impact of human, technological, and organizational factors on professional identity perceptions across different cultures. More studies are needed to provide a nuanced understanding of professional identity mechanisms among health care professionals across a broader range of cultures and countries.

Second, we used broad search terms to retrieve a large number of articles and to infer professional identity mechanisms from implementation and adoption factors reported in the included studies, even when health care professionals did not explicitly frame these factors as threats to or enhancements of professional identity. This can also be considered a methodological strength, as the review combines findings from qualitative, quantitative, and mixed methods studies across a large and diverse field of research on CDSS implementation. However, non-English-language articles and articles that did not pass the MMAT assessment may have been overlooked, even though they could have provided valuable information on further barriers and facilitators (e.g., threats to professional identity in different cultures); this may affect the rigor of this study.

Third, most of the studies reviewed captured CDSSs for use in primary care settings. CDSSs in highly specialized specialties or those that frequently treat multi-morbid patients, such as cardiology and geriatrics, require features that allow for detailed workflow customization. In such specialties, even more attention needs to be paid to balancing provider autonomy and workflow standardization [ 97 ]. As such, future research should provide the missing evidence in such complex settings.

Fourth, we were only able to identify a limited number of studies that empirically analyzed the causal relationships included in our framework. There is a lack of studies that use longitudinal research designs, quantitative data, or experimental study designs. Therefore, the identified effects of technological, organizational, and human factors on professional identity, and consequently on implementation success, need to be interpreted with caution. Future research should test whether the determinants and effects of professional identity mechanisms among health care professionals can be observed in real-world settings.

Professional identity threat is a key cognitive state that impedes CDSS implementation among various health care professionals and across all implementation phases [31, 45]. Health care managers need to engage in supportive leadership behaviors, communicate the benefits of CDSSs, and leverage supportive organizational practices to mitigate the perception and effects of professional identity threat. An innovation culture needs to support the use of CDSSs, and top management commitment should reduce uncertainty about why a new CDSS is needed [24]. Leaders should therefore raise awareness of relevant CDSS functionalities and communicate the terms and conditions of use. It is crucial to involve clinicians in updating existing CDSS features and developing new ones so that CDSSs can be quickly updated to reflect rapid developments in clinical guidelines [195]. One way to achieve this is to engage proactive, respected, and passionate individuals who can train colleagues to use the CDSS and promote its potential benefits [70, 82].

The framework presented in this study provides a relevant foundation for further research on the complex relationships between human, technological, and organizational implementation factors and professional identity among different health care professionals. The findings also guide health care management experts and IT system developers in designing new CDSSs and implementation strategies that account for the ingrained norms and cognitions of health care professionals. As suggested above, more research is needed to determine whether some barriers or facilitators are universal across all types of CDSSs or whether there are domain-dependent patterns. In this context, research that explicitly focuses on AI-based CDSSs becomes increasingly important as these systems gain relevance in medical practice. In fact, five of the studies included in our research, conducted over the last 3 years, examined factors related to the adoption and implementation of AI-based CDSSs [73, 74, 96, 205, 206]. AI-based CDSSs can extend decision support toward full automation and can discover new relationships and make predictions based on learned patterns [97]. However, with their opaque and automated decision-making processes, AI-based systems may increasingly challenge professional identity as they disrupt traditional practices and hierarchies within health care organizations, threatening professional expertise and autonomy [156]. This may further hinder the implementation and sustainable use of these systems compared with non-AI-based systems. Future research could examine overlaps in the barriers and facilitators between conventional and AI-based CDSSs that are relevant to professional identity threat perceptions among health care professionals, and assess the reasons behind any differences. In addition, translating the findings to different medical contexts may provide valuable insights and could eventually lead to guidelines for the development of CDSSs for different specialties.

Some factors were found less frequently in our analysis: in particular, communication of the benefits of a CDSS to users, the importance of trust across hierarchies and among staff involved in implementation, and government-level factors related to the environment. While the former factors are important for psychological safety and acceptance of the CDSS, environment-level factors appear to play only a minor role in the perception of professional identity. Future research is needed, however, to determine whether all of these factors play an important role in CDSS implementation. Furthermore, future research could explore the role of middle managers and team managers in health care organizations, rather than senior management alone, in managing professional identity threats when leading change. Our narrative review found that clinical middle managers may have a special role in legitimizing CDSSs [156]. In addition, a future research opportunity arises from perceived role and identity enhancement through new technologies and its consequences for social evaluation in hierarchical health care organizations [35, 132, 155].

Overall, the findings of this review are particularly relevant for managers of CDSS implementation projects. Thoughtful management of the professional identity threat factors identified in this review can help overcome barriers and facilitate the implementation of CDSSs. By addressing these practical implications and research gaps, future studies can contribute to a deeper understanding of professional identity threat and provide evidence for effective CDSS implementation strategies, and thus for higher quality and efficiency in an increasingly overburdened health care system.

Availability of data and materials

The datasets used and/or analyzed during the current study are available from the corresponding author on reasonable request.

Abbreviations

AI: Artificial intelligence

CDSS: Clinical decision support system

EHR: Electronic health record

MMAT: Mixed Methods Appraisal Tool

Sutton RT, Pincock D, Baumgart DC, Sadowski DC, Fedorak RN, Kroeker KI. An overview of clinical decision support systems: benefits, risks, and strategies for success. NPJ Digit Med. 2020;3:1–10.

Antoniadi AM, Du Y, Guendouz Y, Wei L, Mazo C, Becker BA, et al. Current challenges and future opportunities for XAI in machine learning-based clinical decision support systems: A systematic review. Appl Sci. 2021;11:5088.

Ash JS, Sittig DF, Wright A, Mcmullen C, Shapiro M, Bunce A, et al. Clinical decision support in small community practice settings: A case study. J Am Med Inform Assoc. 2011;18:879–82.

Prakash AV, Das S. Medical practitioner’s adoption of intelligent clinical diagnostic decision support systems: A mixed-methods study. Inf Manag. 2021;58:103524.

Esmaeilzadeh P, Sambasivan M, Kumar N, Nezakati H. Adoption of clinical decision support systems in a developing country: Antecedents and outcomes of physician’s threat to perceived professional autonomy. Int J Med Inform. 2015;84:548–60.

Westerbeek L, Ploegmakers KJ, de Bruijn GJ, Linn AJ, van Weert JCM, Daams JG, et al. Barriers and facilitators influencing medication-related CDSS acceptance according to clinicians: A systematic review. Int J Med Inform. 2021;152:104506.

Berner ES. Clinical decision support systems: state of the art. AHRQ Publication No. 09-0069-EF. Rockville: Agency for Healthcare Research and Quality; 2009.

Usmanova G, Gresh A, Cohen MA, Kim Y, Srivastava A, Joshi CS, et al. Acceptability and barriers to use of the ASMAN provider-facing electronic platform for Peripartum Care in Public Facilities in Madhya Pradesh and Rajasthan, India: a qualitative study using the technology acceptance Model-3. Int J Environ Res Public Health. 2020;17:8333.

Singh K, Johnson L, Devarajan R, Shivashankar R, Sharma P, Kondal D, et al. Acceptability of a decision-support electronic health record system and its impact on diabetes care goals in South Asia: a mixed-methods evaluation of the CARRS trial. Diabet Med. 2018;35:1644–54.

Holden RJ. Physicians’ beliefs about using EMR and CPOE: In pursuit of a contextualized understanding of health it use behavior. Int J Med Inform. 2010;79:71–80.

Devine EB, Williams EC, Martin DP, Sittig DF, Tarczy-Hornoch P, Payne TH, et al. Prescriber and staff perceptions of an electronic prescribing system in primary care: A qualitative assessment. BMC Med Inform Decis Mak. 2010;10:72.

Shibl R, Lawley M, Debuse J. Factors influencing decision support system acceptance. Decis Support Syst. 2013;54:953–61.

Abell B, Naicker S, Rodwell D, Donovan T, Tariq A, Baysari M, et al. Identifying barriers and facilitators to successful implementation of computerized clinical decision support systems in hospitals: a NASSS framework-informed scoping review. Implement Sci. 2023;18:32.

Kilsdonk E, Peute LW, Jaspers MWM. Factors influencing implementation success of guideline-based clinical decision support systems: A systematic review and gaps analysis. Int J Med Inform. 2017;98:56–64.

Davis FD. Perceived usefulness, perceived ease of use, and user acceptance of information technology. MIS Q. 1989:319–40.

Venkatesh V, Davis FD. Theoretical extension of the Technology Acceptance Model: Four longitudinal field studies. Manage Sci. 2000;46:186–204.

Söling S, Demirer I, Köberlein-Neu J, Hower KI, Müller BS, Pfaff H, et al. Complex implementation mechanisms in primary care: do physicians’ beliefs about the effectiveness of innovation play a mediating role? Applying a realist inquiry and structural equation modeling approach in a formative evaluation study. BMC Prim Care. 2023;24:1–14.

Birken SA, Bunger AC, Powell BJ, Turner K, Clary AS, Klaman SL, et al. Organizational theory for dissemination and implementation research. Implement Sci. 2017;12:1–15.

Vance Wilson E, Lankton NK. Modeling patients’ acceptance of provider-delivered E-health. J Am Med Inform Assoc. 2004;11:241–8.

Bandura A. Self-efficacy: Toward a unifying theory of behavioral change. Psychol Rev. 1977;84:191–215.

Atkins L, Francis J, Islam R, O’Connor D, Patey A, Ivers N, et al. A guide to using the Theoretical Domains Framework of behaviour change to investigate implementation problems. Implement Sci. 2017;12:1–18.

Eccles M, Grimshaw J, Walker A, Johnston M, Pitts N. Changing the behavior of healthcare professionals: The use of theory in promoting the uptake of research findings. J Clin Epidemiol. 2005;58:107–12.

Cane J, O’Connor D, Michie S. Validation of the theoretical domains framework for use in behaviour change and implementation research. Implement Sci. 2012;7:1–17.

Liberati EG, Ruggiero F, Galuppo L, Gorli M, González-Lorenzo M, Maraldi M, et al. What hinders the uptake of computerized decision support systems in hospitals? A qualitative study and framework for implementation. Implement Sci. 2017;12:1–13.

Pratt MG, Rockmann KW, Kaufmann JB. Constructing professional identity: The role of work and identity learning cycles in the customization of identity among medical residents. Acad Manag J. 2006;49:235–62.

Beane M, Orlikowski WJ. What difference does a robot make? The material enactment of distributed coordination. Organ Sci. 2015;26:1553–73.

Reay T, Goodrick E, Waldorff SB, Casebeer A. Getting leopards to change their spots: Co-creating a new professional role identity. Acad Manag J. 2017;60(3):1043–70.

Hu PJH, Chau PYK, Liu Sheng OR. Adoption of telemedicine technology by health care organizations: An exploratory study. J Organ Comput Electron Commer. 2002;12:197–221.

Alohali M, Carton F, O’Connor Y. Investigating the antecedents of perceived threats and user resistance to health information technology: a case study of a public hospital. J Decis Syst. 2020;29:27–52.

McParland CR, Cooper MA, Johnston B. Differential Diagnosis Decision Support Systems in Primary and Out-of-Hours Care: A Qualitative Analysis of the Needs of Key Stakeholders in Scotland. J Prim Care Community Health. 2019;10:57–61.

Lapointe L, Rivard S. A multilevel model of resistance to information technology implementation. MIS Q. 2005;29:461–91.

Craig K, Thatcher JB, Grover V. The IT identity threat: A conceptual definition and operational measure. J Manag Inf Syst. 2019;36:259–88.

Jussupow E, Spohrer K, Heinzl A. Identity Threats as a Reason for Resistance to Artificial Intelligence: Survey Study With Medical Students and Professionals. JMIR Form Res. 2022;6(3):e28750.

Jeffery AD, Novak LL, Kennedy B, Dietrich MS, Mion LC. Participatory design of probability-based decision support tools for in-hospital nurses. J Am Med Inform Assoc. 2017;24:1102–10.

Richardson JE, Ash JS. A clinical decision support needs assessment of community-based physicians. J Am Med Inform Assoc. 2011;18:28–35.

Walter Z, Lopez MS. Physician acceptance of information technologies: Role of perceived threat to professional autonomy. Decis Support Syst. 2008;46:206–15.

Jussupow E, Spohrer K, Heinzl A, Link C. I am; We are - Conceptualizing Professional Identity Threats from Information Technology. 2019.

Karunakaran A. Status-Authority Asymmetry between Professions: The Case of 911 Dispatchers and Police Officers. Adm Sci Q. 2022;67:423–68.

Tripsas M. Technology, identity, and inertia through the lens of “The Digital Photography Company.” Organ Sci. 2009;20:441–60.

Chreim S, Williams BE, Hinings CR. Interlevel influences on the reconstruction of professional role identity. Acad Manag J. 2007;50:1515–39.

Ibarra H. Provisional selves: Experimenting with image and identity in professional adaptation. Adm Sci Q. 1999;44:764–91.

Jussupow E, Spohrer K, Dibbern J, Heinzl A. AI changes who we are-Doesn’t IT? Intelligent decision support and physicians’ professional identity. In: Proceedings of the Twenty-Sixth European Conference on Information Systems, Portsmouth, UK, 2018. pp. 1–11.

Freidson E. The Reorganization of the Medical Profession. Med Care Res Rev. 1985;42:11–35.

Burke PJ, Stets JE. Trust and Commitment through Self-Verification. Soc Psychol Q. 1999;62:347–66.

Mishra AN, Anderson C, Angst CM, Agarwal R. Electronic Health Records Assimilation and Physician Identity Evolution: An Identity Theory Perspective. Inf Syst Res. 2012;23:738–60.

Mirbabaie M, Brünker F, Möllmann Frick NRJ, Stieglitz S. The rise of artificial intelligence – understanding the AI identity threat at the workplace. Electron Mark. 2022;32:73–99.

Klaus T, Blanton JE. User resistance determinants and the psychological contract in enterprise system implementations. Eur J Inf Syst. 2010;19:625–36.

Sambasivan M, Esmaeilzadeh P, Kumar N, Nezakati H. Intention to adopt clinical decision support systems in a developing country: Effect of Physician’s perceived professional autonomy, involvement and belief: A cross-sectional study. BMC Med Inform Decis Mak. 2012;12:1–8.

Elsbach KD. Relating physical environment to self-categorizations: Identity threat and affirmation in a non-territorial office space. Adm Sci Q. 2003;48(4):622–54.

Carter M, Grover V. Me, my self, and I(T). MIS Q. 2015;39:931–58.

Jensen TB, Aanestad M. Hospitality and hostility in hospitals: A case study of an EPR adoption among surgeons. Eur J Inf Syst. 2007;16:672–80.

Bernardi R, Exworthy M. Clinical managers’ identity at the crossroad of multiple institutional logics in it innovation: The case study of a health care organization in England. Inf Syst J. 2020;30:566–95.

Doolin B. Power and resistance in the implementation of a medical management information system. Inf Syst J. 2004;14:343–62.

Barrett M, Oborn E, Orlikowski WJ, Yates J. Reconfiguring Boundary Relations: Robotic Innovations in Pharmacy Work. Organ Sci. 2012;23:1448–66.

Kellogg KC. Subordinate Activation Tactics: Semi-professionals and Micro-level Institutional Change in Professional Organizations. Adm Sci Q. 2019;64:928–75.

Pratt MG. The good, the bad, and the ambivalent: Managing identification among Amway distributors. Adm Sci Q. 2000;45:456–93.

Yusof MM, Kuljis J, Papazafeiropoulou A, Stergioulas LK. An evaluation framework for Health Information Systems: human, organization and technology-fit factors (HOT-fit). Int J Med Inform. 2008;77:386–98.

Hong QN, Pluye P, Fàbregues S, Bartlett G, Boardman F, Cargo M, et al. The Mixed Methods Appraisal Tool (MMAT) version 2018 for information professionals and researchers. Educ Inf. 2018;34:285–91.

Abdellatif A, Bouaud J, Lafuente-Lafuente C, Belmin J, Séroussi B. Computerized Decision Support Systems for Nursing Homes: A Scoping Review. J Am Med Dir Assoc. 2021;22:984–94.

Gioia DA, Chittipeddi K. Sensemaking and sensegiving in strategic change initiation. Strateg Manag J. 1991;12:433–48.

Gioia DA, Corley KG, Hamilton AL. Seeking Qualitative Rigor in Inductive Research: Notes on the Gioia Methodology. Organ Res Methods. 2012;16:15–31.

Corbin JM, Strauss AL. Basics of qualitative research. Techniques and procedures for developing grounded theory. Los Angeles: Sage; 2015.

O’Connor C, Joffe H. Intercoder Reliability in Qualitative Research: Debates and Practical Guidelines. Int J Qual Methods. 2020;19:1–13.

Banerjee M, Capozzoli M, McSweeney L, Sinha D. Beyond kappa: A review of interrater agreement measures. Can J Stat. 1999;27:3–23.

Aarons GA, Hurlburt M, Horwitz SM. Advancing a conceptual model of evidence-based practice implementation in public service sectors. Adm Policy Ment Health Ment Health Serv Res. 2011;38:4–23.

Clarivate. 2022 Journal Impact Factor, Journal Citation Reports. 2023.

Damanpour F, Walker RM, Avellaneda CN. Combinative effects of innovation types and organizational performance: A longitudinal study of service organizations. J Manag Stud. 2009;46:650–75.

Ploegmakers KJ, Medlock S, Linn AJ, Lin Y, Seppälä LJ, Petrovic M, et al. Barriers and facilitators in using a Clinical Decision Support System for fall risk management for older people: a European survey. Eur Geriatr Med. 2022;13:395–405.

Laka M, Milazzo A, Merlin T. Factors that impact the adoption of clinical decision support systems (CDSS) for antibiotic management. Int J Environ Res Public Health. 2021;18:1–14.

Masterson Creber RM, Dayan PS, Kuppermann N, Ballard DW, Tzimenatos L, Alessandrini E, et al. Applying the RE-AIM Framework for the Evaluation of a Clinical Decision Support Tool for Pediatric Head Trauma: A Mixed-Methods Study. Appl Clin Inform. 2018;9:693–703.

de Watteville A, Pielmeier U, Graf S, Siegenthaler N, Plockyn B, Andreassen S, et al. Usability study of a new tool for nutritional and glycemic management in adult intensive care: Glucosafe 2. J Clin Monit Comput. 2021;35:525–35.

Feldstein AC, Schneider JL, Unitan R, Perrin NA, Smith DH, Nichols GA, et al. Health care worker perspectives inform optimization of patient panel-support tools: A qualitative study. Popul Health Manag. 2013;16:107–19.

Jansen-Kosterink S, van Velsen L, Cabrita M. Clinician acceptance of complex clinical decision support systems for treatment allocation of patients with chronic low back pain. BMC Med Inform Decis Mak. 2021;21:137.

Russ AL, Zillich AJ, McManus MS, Doebbeling BN, Saleem JJ. Prescribers’ interactions with medication alerts at the point of prescribing: A multi-method, in situ investigation of the human-computer interaction. Int J Med Inform. 2012;81:232–43.

Cresswell K, Callaghan M, Mozaffar H, Sheikh A. NHS Scotland’s Decision Support Platform: A formative qualitative evaluation. BMJ Health Care Inform. 2019;26:1–9.

Harry ML, Truitt AR, Saman DM, Henzler-Buckingham HA, Allen CI, Walton KM, et al. Barriers and facilitators to implementing cancer prevention clinical decision support in primary care: a qualitative study. BMC Health Serv Res. 2019;19:534.

Catho G, Centemero NS, Catho H, Ranzani A, Balmelli C, Landelle C, et al. Factors determining the adherence to antimicrobial guidelines and the adoption of computerised decision support systems by physicians: A qualitative study in three European hospitals. Int J Med Inform. 2020;141:104233.

Liu X, Barreto EF, Dong Y, Liu C, Gao X, Tootooni MS, et al. Discrepancy between perceptions and acceptance of clinical decision support Systems: implementation of artificial intelligence for vancomycin dosing. BMC Med Inform Decis Mak. 2023;23:1–9.

Zaidi STR, Marriott JL. Barriers and facilitators to adoption of a web-based antibiotic decision support system. South Med Rev. 2012;5:42–9.

Singh D, Spiers S, Beasley BW. Characteristics of CPOE systems and obstacles to implementation that physicians believe will affect adoption. South Med J. 2011;104:418–21.

Agarwal R, Angst CM, DesRoches CM, Fischer MA. Technological viewpoints (frames) about electronic prescribing in physician practices. J Am Med Inform Assoc. 2010;17:425–31.

Hains IM, Ward RL, Pearson SA. Implementing a web-based oncology protocol system in Australia: Evaluation of the first 3 years of operation. Intern Med J. 2012;42:57–64.

Fossum M, Ehnfors M, Fruhling A, Ehrenberg A. An evaluation of the usability of a computerized decision support system for nursing homes. Appl Clin Inform. 2011;2:420–36.

Sukums F, Mensah N, Mpembeni R, Massawe S, Duysburgh E, Williams A, et al. Promising adoption of an electronic clinical decision support system for antenatal and intrapartum care in rural primary healthcare facilities in sub-Saharan Africa: The QUALMAT experience. Int J Med Inform. 2015;84:647–57.

Cracknell AV. Healthcare professionals’ attitudes of implementing a chemotherapy electronic prescribing system: A mixed methods study. J Oncol Pharm Pract. 2020;26:1164–71.

Peute LW, Aarts J, Bakker PJM, Jaspers MWM. Anatomy of a failure: A sociotechnical evaluation of a laboratory physician order entry system implementation. Int J Med Inform. 2010;79:e58-70.

Sedlmayr B, Patapovas A, Kirchner M, Sonst A, Müller F, Pfistermeister B, et al. Comparative evaluation of different medication safety measures for the emergency department: Physicians’ usage and acceptance of training, poster, checklist and computerized decision support. BMC Med Inform Decis Mak. 2013;13.

Noormohammad SF, Mamlin BW, Biondich PG, McKown B, Kimaiyo SN, Were MC. Changing course to make clinical decision support work in an HIV clinic in Kenya. Int J Med Inform. 2010;79(3):204–10.

Hsu WWQ, Chan EWY, Zhang ZJ, Lin ZX, Bian ZX, Wong ICK. Chinese medicine students’ views on electronic prescribing: A survey in Hong Kong. Eur J Integr Med. 2015;7:47–54.

Kortteisto T, Komulainen J, Mäkelä M, Kunnamo I, Kaila M. Clinical decision support must be useful, functional is not enough: A qualitative study of computer-based clinical decision support in primary care. BMC Health Serv Res. 2012;12.

Koskela T, Sandström S, Mäkinen J, Liira H. User perspectives on an electronic decision-support tool performing comprehensive medication reviews - A focus group study with physicians and nurses. BMC Med Inform Decis Mak. 2016;16:1–9.

Zhai Y, Yu Z, Zhang Q, Qin W, Yang C, Zhang Y. Transition to a new nursing information system embedded with clinical decision support: a mixed-method study using the HOT-fit framework. BMC Med Inform Decis Mak. 2022;22:1–20.

Abidi S, Vallis M, Piccinini-Vallis H, Imran SA, Abidi SSR. Diabetes-related behavior change knowledge transfer to primary care practitioners and patients: Implementation and evaluation of a digital health platform. JMIR Med Informatics. 2018;6:e9629.

Greenberg JK, Otun A, Nasraddin A, Brownson RC, Kuppermann N, Limbrick DD, et al. Electronic clinical decision support for children with minor head trauma and intracranial injuries: a sociotechnical analysis. BMC Med Inform Decis Mak. 2021;21:1–11.

Trafton J, Martins S, Michel M, Lewis E, Wang D, Combs A, et al. Evaluation of the acceptability and usability of a decision support system to encourage safe and effective use of opioid therapy for chronic, noncancer pain by primary care providers. Pain Med. 2010;11:575–85.

Berge GT, Granmo OC, Tveit TO, Munkvold BE, Ruthjersen AL, Sharma J. Machine learning-driven clinical decision support system for concept-based searching: a field trial in a Norwegian hospital. BMC Med Inform Decis Mak. 2023;23:1–15.

Chung P, Scandlyn J, Dayan PS, Mistry RD. Working at the intersection of context, culture, and technology: Provider perspectives on antimicrobial stewardship in the emergency department using electronic health record clinical decision support. Am J Infect Control. 2017;45:1198–202.

Arts DL, Medlock SK, Van Weert HCPM, Wyatt JC, Abu-Hanna A. Acceptance and barriers pertaining to a general practice decision support system for multiple clinical conditions: A mixed methods evaluation. PLoS ONE. 2018;13:e0193187.

Ash JS, Chase D, Baron S, Filios MS, Shiffman RN, Marovich S, et al. Clinical Decision Support for Worker Health: A Five-Site Qualitative Needs Assessment in Primary Care Settings. Appl Clin Inform. 2020;11:635–43.

English D, Ankem K, English K. Acceptance of clinical decision support surveillance technology in the clinical pharmacy. Inform Health Soc Care. 2017;42:135–52.

Gezer M, Hunter B, Hocking JS, Manski-Nankervis JA, Goller JL. Informing the design of a digital intervention to support sexually transmissible infection care in general practice: a qualitative study exploring the views of clinicians. Sex Health. 2023.

Helldén A, Al-Aieshy F, Bastholm-Rahmner P, Bergman U, Gustafsson LL, Höök H, et al. Development of a computerised decisions support system for renal risk drugs targeting primary healthcare. BMJ Open. 2015;5:1–9.

Hinderer M, Boeker M, Wagner SA, Binder H, Ückert F, Newe S, et al. The experience of physicians in pharmacogenomic clinical decision support within eight German university hospitals. Pharmacogenomics. 2017;18:773–85.

Jeffries M, Salema NE, Laing L, Shamsuddin A, Sheikh A, Avery A, et al. The implementation, use and sustainability of a clinical decision support system for medication optimisation in primary care: A qualitative evaluation. PLoS ONE. 2021;16:e0250946.

Kanagasundaram NS, Bevan MT, Sims AJ, Heed A, Price DA, Sheerin NS. Computerized clinical decision support for the early recognition and management of acute kidney injury: A qualitative evaluation of end-user experience. Clin Kidney J. 2016;9:57–62.

Kastner M, Li J, Lottridge D, Marquez C, Newton D, Straus SE. Development of a prototype clinical decision support tool for osteoporosis disease management: A qualitative study of focus groups. BMC Med Inform Decis Mak. 2010;10:1–15.

Khajouei R, Wierenga PC, Hasman A, Jaspers MW. Clinicians satisfaction with CPOE ease of use and effect on clinicians’ workflow, efficiency and medication safety. Int J Med Inform. 2011;80(5):297–309.

Langton JM, Blanch B, Pesa N, Park JM, Pearson SA. How do medical doctors use a web-based oncology protocol system? A comparison of Australian doctors at different levels of medical training using logfile analysis and an online survey. BMC Med Inform Decis Mak. 2013;13:1–11.

Litvin CB, Ornstein SM, Wessell AM, Nemeth LS, Nietert PJ. Adoption of a clinical decision support system to promote judicious use of antibiotics for acute respiratory infections in primary care. Int J Med Inform. 2012;81:521–6.

Pratt R, Saman DM, Allen C, Crabtree B, Ohnsorg K, Sperl-Hillen JAM, et al. Assessing the implementation of a clinical decision support tool in primary care for diabetes prevention: a qualitative interview study using the Consolidated Framework for Implementation Science. BMC Med Inform Decis Mak. 2022;22:1–9.

Robertson J, Moxey AJ, Newby DA, Gillies MB, Williamson M, Pearson SA. Electronic information and clinical decision support for prescribing: State of play in Australian general practice. Fam Pract. 2011;28:93–101.

Rock C, Abosi O, Bleasdale S, Colligan E, Diekema DJ, Dullabh P, et al. Clinical Decision Support Systems to Reduce Unnecessary Clostridioides difficile Testing Across Multiple Hospitals. Clin Infect Dis. 2022;75:1187–93.

Roebroek LO, Bruins J, Delespaul P, Boonstra A, Castelein S. Qualitative analysis of clinicians’ perspectives on the use of a computerized decision aid in the treatment of psychotic disorders. BMC Med Inform Decis Mak. 2020;20(1):1–12.

Salwei ME, Carayon P, Hoonakker PLT, Hundt AS, Wiegmann D, Pulia M, et al. Workflow integration analysis of a human factors-based clinical decision support in the emergency department. Appl Ergon. 2021;97:103498.

Sayood SJ, Botros M, Suda KJ, Foraker R, Durkin MJ. Attitudes toward using clinical decision support in community pharmacies to promote antibiotic stewardship. J Am Pharm Assoc. 2021;61(5):565–71.

Seliaman ME, Albahly MS. The Reasons for Physicians and Pharmacists’ Acceptance of Clinical Support Systems in Saudi Arabia. Int J Environ Res Public Health. 2023;20.

Sheehan B, Nigrovic LE, Dayan PS, Kuppermann N, Ballard DW, Alessandrini E, et al. Informing the design of clinical decision support services for evaluation of children with minor blunt head trauma in the emergency department: A sociotechnical analysis. J Biomed Inform. 2013;46:905–13.

Shi Y, Amill-Rosario A, Rudin RS, Fischer SH, Shekelle P, Scanlon DP, et al. Barriers to using clinical decision support in ambulatory care: Do clinics in health systems fare better? J Am Med Inform Assoc. 2021;28:1667–75.

Snyder ME, Adeoye-Olatunde OA, Gernant SA, DiIulio J, Jaynes HA, Doucette WR, et al. A user-centered evaluation of medication therapy management alerts for community pharmacists: Recommendations to improve usability and usefulness. Res Soc Adm Pharm. 2021;17:1433–43.

Van Biesen W, Van Cauwenberge D, Decruyenaere J, Leune T, Sterckx S. An exploration of expectations and perceptions of practicing physicians on the implementation of computerized clinical decision support systems using a Qsort approach. BMC Med Inform Decis Mak. 2022;22:1–10.

Vandenberg AE, Vaughan CP, Stevens M, Hastings SN, Powers J, Markland A, et al. Improving geriatric prescribing in the ED: a qualitative study of facilitators and barriers to clinical decision support tool use. Int J Qual Health Care. 2017;29:117–23.

Westerbeek L, de Bruijn GJ, van Weert HC, Abu-Hanna A, Medlock S, van Weert JCM. General Practitioners’ needs and wishes for clinical decision support Systems: A focus group study. Int J Med Inform. 2022;168:104901.

Cranfield S, Hendy J, Reeves B, Hutchings A, Collin S, Fulop N. Investigating healthcare IT innovations: a “conceptual blending” approach. J Health Organ Manag. 2015;29:1131–48.

Jeon J, Taneva S, Kukreti V, Trbovich P, Easty AC, Rossos PG, Cafazzo JA. Toward successful migration to computerized physician order entry for chemotherapy. Curr Oncol. 2014;21(2):221–8.

Patel VL, Shortliffe EH, Stefanelli M, Szolovits P, Berthold MR, Bellazzi R, et al. The Coming of Age of Artificial Intelligence in Medicine. Artif Intell Med. 2009;46:5–17.

Finley EP, Schneegans S, Tami C, Pugh MJ, McGeary D, Penney L, Sharpe Potter J. Implementing prescription drug monitoring and other clinical decision support for opioid risk mitigation in a military health care setting: a qualitative feasibility study. J Am Med Inform Assoc. 2018;25(5):515–22.

Lugtenberg M, Weenink JW, Van Der Weijden T, Westert GP, Kool RB. Implementation of multiple-domain covering computerized decision support systems in primary care: A focus group study on perceived barriers. BMC Med Inform Decis Mak. 2015;15:1–11.

Chow A, Lye DCB, Arah OA. Psychosocial determinants of physicians’ acceptance of recommendations by antibiotic computerised decision support systems: A mixed methods study. Int J Antimicrob Agents. 2015;45:295–304.

Ford E, Edelman N, Somers L, Shrewsbury D, Lopez Levy M, van Marwijk H, et al. Barriers and facilitators to the adoption of electronic clinical decision support systems: a qualitative interview study with UK general practitioners. BMC Med Inform Decis Mak. 2021;21:1–13.

Jung SY, Hwang H, Lee K, Lee HY, Kim E, Kim M, et al. Barriers and facilitators to implementation of medication decision support systems in electronic medical records: Mixed methods approach based on structural equation modeling and qualitative analysis. JMIR Med Informatics. 2020;8:1–14.

Mozaffar H, Cresswell K, Williams R, Bates DW, Sheikh A. Exploring the roots of unintended safety threats associated with the introduction of hospital ePrescribing systems and candidate avoidance and/or mitigation strategies: A qualitative study. BMJ Qual Saf. 2017;26:722–33.

McDermott L, Yardley L, Little P, Ashworth M, Gulliford M. Developing a computer delivered, theory based intervention for guideline implementation in general practice. BMC Fam Pract 2010;11.

Rieckert A, Teichmann AL, Drewelow E, Kriechmayr C, Piccoliori G, Woodham A, et al. Reduction of inappropriate medication in older populations by electronic decision support (the PRIMA-eDS project): A survey of general practitioners’ experiences. J Am Med Informatics Assoc. 2019;26:1323–32.

Anderson JA, Godwin KM, Saleem JJ, Russell S, Robinson JJ, Kimmel B. Accessibility, usability, and usefulness of a Web-based clinical decision support tool to enhance provider-patient communication around Self-management to Prevent (STOP) Stroke. Health Informatics J. 2014;20:261–74.

Carayon P, Cartmill R, Blosky MA, Brown R, Hackenberg M, Hoonakker P, et al. ICU nurses’ acceptance of electronic health records. J Am Med Informatics Assoc. 2011;18:812–9.

Garabedian PM, Gannon MP, Aaron S, Wu E, Burns Z, Samal L. Human-centered design of clinical decision support for management of hypertension with chronic kidney disease. BMC Med Inform Decis Mak. 2022;22:1–12.

Jeffries M, Salema NE, Laing L, Shamsuddin A, Sheikh A, Avery T, Keers RN. Using sociotechnical theory to understand medication safety work in primary care and prescribers’ use of clinical decision support: a qualitative study. BMJ Open. 2023;13(4):e068798.

Lugtenberg M, Pasveer D, van der Weijden T, Westert GP, Kool RB. Exposure to and experiences with a computerized decision support intervention in primary care: results from a process evaluation. BMC Fam Pract. 2015;16(1):1–10.

Mozaffar H, Cresswell KM, Lee L, Williams R, Sheikh A; NIHR ePrescribing Programme Team. Taxonomy of delays in the implementation of hospital computerized physician order entry and clinical decision support systems for prescribing: A longitudinal qualitative study. BMC Med Inform Decis Mak. 2016;16:1–14.

Tabla S, Calafiore M, Legrand B, Descamps A, Andre C, Rochoy M, et al. Artificial Intelligence and Clinical Decision Support Systems or Automated Interpreters: What Characteristics Are Expected by French General Practitioners? Stud Health Technol Inform. 2022;290:887–91.

PubMed   Google Scholar  

Thomas CP, Kim M, McDonald A, Kreiner P, Kelleher SJ, Blackman MB, et al. Prescribers’ expectations and barriers to electronic prescribing of controlled substances. J Am Med Informatics Assoc. 2012;19:375–81.

Yui BH, Jim WT, Chen M, Hsu JM, Liu CY, Lee TT. Evaluation of computerized physician order entry system- A satisfaction survey in Taiwan. J Med Syst. 2012;36:3817–24.

Zha H, Liu K, Tang T, Yin Y-H, Dou B, Jiang L, et al. Acceptance of clinical decision support system to prevent venous thromboembolism among nurses: an extension of the UTAUT model. BMC Med Inform Decis Mak. 2022;22:1–12.

Abramson EL, Patel V, Malhotra S, Pfoh ER, Nena Osorio S, Cheriff A, et al. Physician experiences transitioning between an older versus newer electronic health record for electronic prescribing. Int J Med Inform. 2012;81:539–48.

Cresswell K, Lee L, Mozaffar H, Williams R, Sheikh A, Robertson A, et al. Sustained User Engagement in Health Information Technology: The Long Road from Implementation to System Optimization of Computerized Physician Order Entry and Clinical Decision Support Systems for Prescribing in Hospitals in England. Health Serv Res. 2017;52:1928–57.

Klarenbeek SE, Schuurbiers-Siebers OCJ, van den Heuvel MM, Prokop M, Tummers M. Barriers and facilitators for implementation of a computerized clinical decision support system in lung cancer multidisciplinary team meetings—a qualitative assessment. Biology (Basel). 2021;10:1–15.

Abdel-Qader DH, Cantrill JA, Tully MP. Satisfaction predictors and attitudes towards electronic prescribing systems in three UK hospitals. Pharm World Sci. 2010;32:581–93.

Ballard DW, Rauchwerger AS, Reed ME, Vinson DR, Mark DG, Offerman SR, et al. Emergency physicians’ knowledge and attitudes of clinical decision support in the electronic health record: A survey-based study. Acad Emerg Med. 2013;20:352–60.

Buenestado D, Elorz J, Pérez-Yarza EG, Iruetaguena A, Segundo U, Barrena R, et al. Evaluating acceptance and user experience of a guideline-based clinical decision support system execution platform. J Med Syst. 2013;37:1–9.

Huguet N, Ezekiel-Herrera D, Gunn R, Pierce A, O’Malley J, Jones M, Gold R. Uptake of a Cervical Cancer Clinical Decision Support Tool: A Mixed-Methods Study. Appl Clin Inform. 2023;14(03):594–9.

Paulsen MM, Varsi C, Paur I, Tangvik RJ, Andersen LF. Barriers and facilitators for implementing a decision support system to prevent and treat disease-related malnutrition in a hospital setting: Qualitative study. JMIR Form Res 2019;3.

Varsi C, Andersen LF, Koksvik GT, Severinsen F, Paulsen MM. Intervention-related, contextual and personal factors affecting the implementation of an evidence-based digital system for prevention and treatment of malnutrition in elderly institutionalized patients: a qualitative study. BMC Health Serv Res. 2023;23:1–12.

Zhai Y, Yu Z, Zhang Q, Zhang YX. Barriers and facilitators to implementing a nursing clinical decision support system in a tertiary hospital setting: A qualitative study using the FITT framework. Int J Med Inform. 2022;166:104841.

Carland JE, Elhage T, Baysari MT, Stocker SL, Marriott DJE, Taylor N, et al. Would they trust it? An exploration of psychosocial and environmental factors affecting prescriber acceptance of computerised dose-recommendation software. Br J Clin Pharmacol. 2021;87:1215–33.

Wannheden C, Hvitfeldt-Forsberg H, Eftimovska E, Westling K, Ellenius J. Boosting Quality Registries with Clinical Decision Support Functionality. Methods Inf Med. 2017;56:339–43.

Cranfield S, Hendy J, Reeves B, Hutchings A, Collin S, Fulop N. Investigating healthcare IT innovations: a “conceptual blending” approach. J Health Organ Manag. 2015;29(7):1131–48.

Hsiao JL, Wu WC, Chen RF. Factors of accepting pain management decision support systems by nurse anesthetists. BMC Med Inform Decis Mak 2013;13.

Jeng DJF, Tzeng GH. Social influence on the use of Clinical Decision Support Systems: Revisiting the Unified Theory of Acceptance and Use of Technology by the fuzzy DEMATEL technique. Comput Ind Eng. 2012;62:819–28.

Liu Y, Hao H, Sharma MM, Harris Y, Scofi J, Trepp R, et al. Clinician Acceptance of Order Sets for Pain Management: A Survey in Two Urban Hospitals. Appl Clin Inform. 2022;13:447–55.

Zakane SA, Gustafsson LL, Tomson G, Loukanova S, Sié A, Nasiell J, et al. Guidelines for maternal and neonatal “point of care”: Needs of and attitudes towards a computerized clinical decision support system in rural Burkina Faso. Int J Med Inform. 2014;83:459–69.

Mertz E, Bolarinwa O, Wides C, Gregorich S, Simmons K, Vaderhobli R, et al. Provider Attitudes Toward the Implementation of Clinical Decision Support Tools in Dental Practice. J Evid Based Dent Pract. 2015;15:152–63.

De Vries AE, Van Der Wal MHL, Nieuwenhuis MMW, De Jong RM, Van Dijk RB, Jaarsma T, et al. Perceived barriers of heart failure nurses and cardiologists in using clinical decision support systems in the treatment of heart failure patients. BMC Med Inform Decis Mak 2013;13.

Ahmad N, Du S, Ahmed F, ul Amin N, Yi X. Healthcare professionals satisfaction and AI-based clinical decision support system in public sector hospitals during health crises: a cross-sectional study. Inf Technol Manag. 2023:1–13.

Van Cauwenberge D, Van Biesen W, Decruyenaere J, Leune T, Sterckx S. “Many roads lead to Rome and the Artificial Intelligence only shows me one road”: an interview study on physician attitudes regarding the implementation of computerised clinical decision support systems. BMC Med Ethics. 2022;23:1–14.

Wijnhoven F. Organizational Learning for Intelligence Amplification Adoption: Lessons from a Clinical Decision Support System Adoption Project. Inf Syst Front. 2022;24:731–44.

Sittig DF, Wright A, Simonaitis L, Carpenter JD, Allen GO, Doebbeling BN, et al. The state of the art in clinical knowledge management: An inventory of tools and techniques. Int J Med Inform. 2010;79:44–57.

Simon SR, Keohane CA, Amato M, Coffey M, Cadet B, Zimlichman E, et al. Lessons learned from implementation of computerized provider order entry in 5 community hospitals: A qualitative study. BMC Med Inform Decis Mak 2013;13.

Hor CP, O’Donnell JM, Murphy AW, O’Brien T, Kropmans TJB. General practitioners’ attitudes and preparedness towards Clinical Decision Support in e-Prescribing (CDS-eP) adoption in the West of Ireland: a cross sectional study. BMC Med Inform Decis Mak. 2010;10:2.

Abejirinde IOO, Zweekhorst M, Bardají A, Abugnaba-Abanga R, Apentibadek N, De Brouwere V, et al. Unveiling the black box of diagnostic and clinical decision support systems for antenatal care: Realist evaluation. JMIR MHealth UHealth. 2018;6:e11468.

Charani E, Kyratsis Y, Lawson W, Wickens H, Brannigan ET, Moore LSP, et al. An analysis of the development and implementation of a smartphone application for the delivery of antimicrobial prescribing policy: Lessons learnt. J Antimicrob Chemother. 2013;68:960–7.

Patel R, Green W, Shahzad MW, Larkin C. Use of mobile clinical decision support software by junior doctors at a UK Teaching Hospital: Identification and evaluation of barriers to engagement. JMIR Mhealth Uhealth. 2015;3(3):e4388.

Hsiao JL, Chen RF. Critical factors influencing physicians’ intention to use computerized clinical practice guidelines: An integrative model of activity theory and the technology acceptance model. BMC Med Inform Decis Mak. 2015;16:1–15.

Khan S, McCullagh L, Press A, Kharche M, Schachter A, Pardo S, et al. Formative assessment and design of a complex clinical decision support tool for pulmonary embolism. Evid Based Med. 2016;21:7–13.

Randell R, Dowding D. Organisational influences on nurses’ use of clinical decision support systems. Int J Med Inform. 2010;79:412–21.

Frisinger A, Papachristou P. The voice of healthcare: introducing digital decision support systems into clinical practice-a qualitative study. BMC Prim Care. 2023;24(1):67.

Ifinedo P. Using an Extended Theory of Planned Behavior to Study Nurses’ Adoption of Healthcare Information Systems in Nova Scotia. Int J Technol Diffus. 2017;8:1–17.

Maslej MM, Kloiber S, Ghassemi M, Yu J, Hill SL. Out with AI, in with the psychiatrist: a preference for human-derived clinical decision support in depression care. Transl Psychiatry. 2023;13:1–9.

Jeffries M, Keers RN, Phipps DL, Williams R, Brown B, Avery AJ, et al. Developing a learning health system: Insights from a qualitative process evaluation of a pharmacist-led electronic audit and feedback intervention to improve medication safety in primary care. PLoS ONE. 2018;13:1–16.

Malo C, Neveu X, Archambault PM, Émond M, Gagnon MP. Exploring nurses’ intention to use a computerized platform in the resuscitation unit: Development and validation of a questionnaire based on the theory of planned behavior. J Med Internet Res 2012;14.

Porter A, Dale J, Foster T, Logan P, Wells B, Snooks H. Implementation and use of computerised clinical decision support (CCDS) in emergency pre-hospital care: A qualitative study of paramedic views and experience using Strong Structuration Theory. Implement Sci. 2018;13:1–10.

Teferi GH, Wonde TE, Tadele MM, Assaye BT, Hordofa ZR, Ahmed MH, et al. Perception of physicians towards electronic prescription system and associated factors at resource limited setting 2021: Cross sectional study. PLoS ONE. 2022;17:1–11.

Wrzosek N, Zimmermann A, Balwicki Ł. Doctors’ perceptions of e-prescribing upon its mandatory adoption in poland, using the unified theory of acceptance and use of technology method. Healthc 2020;8.

O’Sullivan D, Doyle J, Michalowski W, Wilk S, Thomas R, Farion K. Expanding usability analysis with intrinsic motivation concepts to learn about CDSS adoption: A case study. Health Policy Technol. 2014;3(2):113–25.

Wickström H, Tuvesson H, Öien R, Midlöv P, Fagerström C. Health Care Staff’s Experiences of Engagement When Introducing a Digital Decision Support System for Wound Management: Qualitative Study. JMIR Hum Factors. 2020;7:1–10.

Overby CL, Erwin AL, Abul-Husn NS, Ellis SB, Scott SA, Obeng AO, et al. Physician attitudes toward adopting genome-guided prescribing through clinical decision support. J Pers Med. 2014;4:35–49.

Elnahal SM, Joynt KE, Bristol SJ, Jha AK. Electronic health record functions differ between best and worst hospitals. Am J Manag Care. 2011;17(4):e121.

Grout RW, Cheng ER, Carroll AE, Bauer NS, Downs SM. A six-year repeated evaluation of computerized clinical decision support system user acceptability. Int J Med Inform. 2018;112:74–81.

Sicotte C, Taylor L, Tamblyn R. Predicting the use of electronic prescribing among early adopters in primary care Recherche Prédire le taux d ’ utilisation de la prescription électronique chez ceux qui viennent de l ’ adopter dans un contexte de soins primaires 2013;59.

Pevnick JM, Asch SM, Adams JL, Mattke S, Patel MH, Ettner SL, et al. Adoption and use of stand-alone electronic prescribing in a health plan-sponsored initiative. Am J Manag Care. 2010;16:182–9.

Meulendijk M, Spruit M, Drenth-Van Maanen C, Numans M, Brinkkemper S, Jansen P. General practitioners’ attitudes towards decision-supported prescribing: An analysis of the Dutch primary care sector. Health Informatics J. 2013;19:247–63.

B S, A P, M K, A S, F M, B P-KP, et al. Comparative evaluation of different medication safety measures for the emergency department: physicians’ usage and acceptance of training, poster, checklist and computerized decision support. BMC Med Inform Decis Mak. 2013;13:79.

Holden RJ, Karsh BT. The Technology Acceptance Model: Its past and its future in health care. J Biomed Inform. 2010;43:159–72.

Edmondson AC. Speaking up in the operating room: How team leaders promote learning in interdisciplinary action teams. J Manag Stud. 2003;40:1419–52.

Heinze KL, Heinze JE. Individual innovation adoption and the role of organizational culture. Rev Manag Sci. 2020;14:561–86.

Nembhard IM, Edmondson AC. Making it safe: The effects of leader inclusiveness and professional status on psychological safety and improvement efforts in health care teams. J Organ Behav Int J Ind Occup Organ Psychol Behav. 2006;27:941–66.

Singer SJ, Hayes JE, Gray GC, Kiang MV. Making time for learning-oriented leadership in multidisciplinary hospital management groups. Health Care Manage Rev. 2015;40:300–12.

Proctor E, Silmere H, Raghavan R, Hovmand P, Aarons G, Bunger A, et al. Outcomes for implementation research: Conceptual distinctions, measurement challenges, and research agenda. Adm Policy Ment Heal Ment Heal Serv Res. 2011;38:65–76.

Liang H, Xue Y, Ke W, Wei KK. Understanding the influence of team climate on it use. J Assoc Inf Syst. 2010;11:414–32.

Holden RJ, Brown RL, Scanlon MC, Karsh BT. Modeling nurses’ acceptance of bar coded medication administration technology at a pediatric hospital. J Am Med Informatics Assoc. 2012;19:1050–8.

Liu C, Zhu Q, Holroyd KA, Seng EK. Status and trends of mobile-health applications for iOS devices: a developer’s perspective. J Syst Softw. 2011;84:2022–33.

Currie G, Lockett A, Finn R, Martin G, Waring J. Institutional Work to Maintain Professional Power: Recreating the Model of Medical Professionalism. Organ Stud. 2012;33:937–62.

DiBenigno J, Kellogg KC. Beyond Occupational Differences: The Importance of Cross-cutting demographics and dyadic toolkits for collaboration in a US hospital. Adm Sci Q. 2014;59(3):375–408.

Curran GM, Landes SJ, Arrossi S, Paolino M, Orellana L, Thouyaret L, et al. Mixed-methods approach to evaluate an mHealth intervention to increase adherence to triage of human papillomavirus-positive women who have performed self-collection (the ATICA study): Study protocol for a hybrid type i cluster randomized effectiveness-imp. Trials. 2004;20:1–12.

Powell BJ, McMillen JC, Proctor EK, Carpenter CR, Griffey RT, Bunger AC, et al. A compilation of strategies for implementing clinical innovations in health and mental health. Med Care Res Rev. 2012;69:123–57.

Proctor EK, Powell BJ, McMillen JC. Implementation strategies: Recommendations for specifying and reporting. Implement Sci. 2013;8:1–11.

Baumeister RF, Leary MR. Writing narrative literature reviews. Rev Gen Psychol. 1997;1:311–20.

Download references

Funding

Open Access funding enabled and organized by Projekt DEAL. Parts of the study were supported by a research grant from the German Bundesministerium für Bildung und Forschung (BMBF), Augmented Auditive Intelligence (A2I), reference 16SV8599.

Author information

Authors and Affiliations

Kiel Institute for Responsible Innovation, University of Kiel, Westring 425, 24118, Kiel, Germany

Sophia Ackerhans, Thomas Huynh, Carsten Kaiser & Carsten Schultz


Contributions

SA conceived the study, developed the literature search, screened citation titles, abstracts, and full-text articles, conducted the MMAT screening, cleaned, coded, analyzed, and interpreted one third of the data, and conceptualized and wrote the sections of the manuscript. TH conceived the study, developed the literature search, screened citation titles, abstracts, and full-text articles, conducted the MMAT screening, cleaned, coded, analyzed, and interpreted one third of the data, and edited the sections of the manuscript. CK screened citation titles, abstracts, and full-text articles, conducted the MMAT screening, cleaned, coded, analyzed, and interpreted one third of the data, and revised the manuscript. CS planned and coordinated the study and edited the manuscript. All authors read and approved the final manuscript.

Corresponding author

Correspondence to Sophia Ackerhans .

Ethics declarations

Ethics approval and consent to participate

Not applicable.

Consent for publication

Not applicable.

Competing interests

The authors declare that they have no competing interests.

Additional information

Publisher’s Note

Springer Nature remains neutral with regard to jurisdictional claims in published maps and institutional affiliations.

Supplementary Information

Additional file 1: Table S1.

Final search strings used to identify articles for the review. Table S2. Characteristics of included studies.

Rights and permissions

Open Access This article is licensed under a Creative Commons Attribution 4.0 International License, which permits use, sharing, adaptation, distribution and reproduction in any medium or format, as long as you give appropriate credit to the original author(s) and the source, provide a link to the Creative Commons licence, and indicate if changes were made. The images or other third party material in this article are included in the article's Creative Commons licence, unless indicated otherwise in a credit line to the material. If material is not included in the article's Creative Commons licence and your intended use is not permitted by statutory regulation or exceeds the permitted use, you will need to obtain permission directly from the copyright holder. To view a copy of this licence, visit http://creativecommons.org/licenses/by/4.0/ . The Creative Commons Public Domain Dedication waiver ( http://creativecommons.org/publicdomain/zero/1.0/ ) applies to the data made available in this article, unless otherwise stated in a credit line to the data.

Reprints and permissions

About this article

Cite this article

Ackerhans, S., Huynh, T., Kaiser, C. et al. Exploring the role of professional identity in the implementation of clinical decision support systems—a narrative review. Implement Sci 19, 11 (2024). https://doi.org/10.1186/s13012-024-01339-x


Received: 08 August 2023

Accepted: 09 January 2024

Published: 12 February 2024

DOI: https://doi.org/10.1186/s13012-024-01339-x


Keywords

  • Professional identity
  • Identity threat
  • Health care
  • Implementation

Implementation Science

ISSN: 1748-5908

