University of Maryland Libraries

Systematic Review

  • Library Help
  • What is a Systematic Review (SR)?

Steps of a Systematic Review

  • Framing a Research Question
  • Developing a Search Strategy
  • Searching the Literature
  • Managing the Process
  • Meta-analysis
  • Publishing your Systematic Review

Forms and templates


  • PICO Template
  • Inclusion/Exclusion Criteria
  • Database Search Log
  • Review Matrix
  • Cochrane Tool for Assessing Risk of Bias in Included Studies

  • PRISMA Flow Diagram - Record the numbers of retrieved references and included/excluded studies. You can use the Create Flow Diagram tool to automate the process.
  • PRISMA Checklist - Checklist of items to include when reporting a systematic review or meta-analysis

PRISMA 2020 and PRISMA-S: Common Questions on Tracking Records and the Flow Diagram

  • PROSPERO Template
  • Manuscript Template
  • Steps of SR (text)
  • Steps of SR (visual)
  • Steps of SR (PIECES)

Adapted from A Guide to Conducting Systematic Reviews: Steps in a Systematic Review by Cornell University Library

Source: Cochrane Consumers and Communications (infographics are free to use and licensed under Creative Commons)

See the following visual resources, titled "What Are Systematic Reviews?":

  • Video with closed captions available
  • Animated Storyboard
  • Last Updated: Jan 26, 2024 4:35 PM
  • URL: https://lib.guides.umd.edu/SR


Systematic Review | Definition, Example & Guide

Published on June 15, 2022 by Shaun Turney . Revised on November 20, 2023.

A systematic review is a type of review that uses repeatable methods to find, select, and synthesize all available evidence. It answers a clearly formulated research question and explicitly states the methods used to arrive at the answer.

For example, the review by Boyle and colleagues discussed throughout this article answered the question “What is the effectiveness of probiotics in reducing eczema symptoms and improving quality of life in patients with eczema?”

In this context, a probiotic is a health product that contains live microorganisms and is taken by mouth. Eczema is a common skin condition that causes red, itchy skin.

Table of contents

  • What is a systematic review?
  • Systematic review vs. meta-analysis
  • Systematic review vs. literature review
  • Systematic review vs. scoping review
  • When to conduct a systematic review
  • Pros and cons of systematic reviews
  • Step-by-step example of a systematic review
  • Other interesting articles
  • Frequently asked questions about systematic reviews

A review is an overview of the research that’s already been completed on a topic.

What makes a systematic review different from other types of reviews is that the research methods are designed to reduce bias . The methods are repeatable, and the approach is formal and systematic:

  • Formulate a research question
  • Develop a protocol
  • Search for all relevant studies
  • Apply the selection criteria
  • Extract the data
  • Synthesize the data
  • Write and publish a report

Although multiple sets of guidelines exist, the Cochrane Handbook for Systematic Reviews is among the most widely used. It provides detailed guidelines on how to complete each step of the systematic review process.

Systematic reviews are most commonly used in medical and public health research, but they can also be found in other disciplines.

Systematic reviews typically answer their research question by synthesizing all available evidence and evaluating the quality of the evidence. Synthesizing means bringing together different information to tell a single, cohesive story. The synthesis can be narrative ( qualitative ), quantitative , or both.


Systematic reviews often quantitatively synthesize the evidence using a meta-analysis . A meta-analysis is a statistical analysis, not a type of review.

A meta-analysis is a technique to synthesize results from multiple studies. It’s a statistical analysis that combines the results of two or more studies, usually to estimate an effect size .
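The core of most meta-analyses is inverse-variance weighting: each study's effect estimate is weighted by the inverse of its variance, so more precise studies count for more. A minimal fixed-effect sketch in Python (the effect sizes and standard errors below are hypothetical, for illustration only):

```python
import math

def fixed_effect_pool(effects, ses):
    """Inverse-variance (fixed-effect) pooling of study effect sizes.

    effects: per-study effect estimates (e.g., log odds ratios)
    ses: corresponding standard errors
    Returns the pooled effect and its standard error.
    """
    weights = [1 / se**2 for se in ses]  # weight = 1 / variance
    pooled = sum(w * e for w, e in zip(weights, effects)) / sum(weights)
    pooled_se = math.sqrt(1 / sum(weights))
    return pooled, pooled_se

# Hypothetical log odds ratios from three trials
effects = [-0.30, -0.10, -0.25]
ses = [0.15, 0.20, 0.10]
pooled, se = fixed_effect_pool(effects, ses)
print(f"pooled effect = {pooled:.3f}, 95% CI = "
      f"({pooled - 1.96 * se:.3f}, {pooled + 1.96 * se:.3f})")
```

In practice, reviewers use dedicated software (e.g., RevMan or the R `metafor` package), which also handles random-effects models and heterogeneity statistics, but the weighting idea is the same.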

A literature review is a type of review that uses a less systematic and formal approach than a systematic review. Typically, an expert in a topic will qualitatively summarize and evaluate previous work, without using a formal, explicit method.

Although literature reviews are often less time-consuming and can be insightful or helpful, they have a higher risk of bias and are less transparent than systematic reviews.

Similar to a systematic review, a scoping review is a type of review that tries to minimize bias by using transparent and repeatable methods.

However, a scoping review isn’t a type of systematic review. The most important difference is the goal: rather than answering a specific question, a scoping review explores a topic. The researcher tries to identify the main concepts, theories, and evidence, as well as gaps in the current research.

Sometimes scoping reviews are an exploratory preparation step for a systematic review, and sometimes they are a standalone project.


A systematic review is a good choice of review if you want to answer a question about the effectiveness of an intervention , such as a medical treatment.

To conduct a systematic review, you’ll need the following:

  • A precise question , usually about the effectiveness of an intervention. The question needs to be about a topic that’s previously been studied by multiple researchers. If there’s no previous research, there’s nothing to review.
  • If you’re doing a systematic review on your own (e.g., for a research paper or thesis ), you should take appropriate measures to ensure the validity and reliability of your research.
  • Access to databases and journal archives. Often, your educational institution provides you with access.
  • Time. A professional systematic review is a time-consuming process: it will take the lead author about six months of full-time work. If you’re a student, you should narrow the scope of your systematic review and stick to a tight schedule.
  • Bibliographic, word-processing, spreadsheet, and statistical software . For example, you could use EndNote, Microsoft Word, Excel, and SPSS.

Systematic reviews have many pros.

  • They minimize research bias by considering all available evidence and evaluating each study for bias.
  • Their methods are transparent , so they can be scrutinized by others.
  • They’re thorough : they summarize all available evidence.
  • They can be replicated and updated by others.

Systematic reviews also have a few cons .

  • They’re time-consuming .
  • They’re narrow in scope : they only answer the precise research question.

The 7 steps for conducting a systematic review are explained with an example.

Step 1: Formulate a research question

Formulating the research question is probably the most important step of a systematic review. A clear research question will:

  • Allow you to more effectively communicate your research to other researchers and practitioners
  • Guide your decisions as you plan and conduct your systematic review

A good research question for a systematic review has four components, which you can remember with the acronym PICO :

  • Population(s) or problem(s)
  • Intervention(s)
  • Comparison(s)
  • Outcome(s)

You can rearrange these four components to write your research question:

  • What is the effectiveness of I versus C for O in P ?

Sometimes, you may want to include a fifth component, the type of study design . In this case, the acronym is PICOT .

  • Type of study design(s)

In the example review, the PICOT components were:

  • The population of patients with eczema
  • The intervention of probiotics
  • In comparison to no treatment, placebo, or non-probiotic treatment
  • The outcome of changes in participant-, parent-, and doctor-rated symptoms of eczema and quality of life
  • Randomized control trials, a type of study design

Boyle and colleagues’ research question was:

  • What is the effectiveness of probiotics versus no treatment, a placebo, or a non-probiotic treatment for reducing eczema symptoms and improving quality of life in patients with eczema?

Step 2: Develop a protocol

A protocol is a document that contains your research plan for the systematic review. This is an important step because having a plan allows you to work more efficiently and reduces bias.

Your protocol should include the following components:

  • Background information : Provide the context of the research question, including why it’s important.
  • Research objective (s) : Rephrase your research question as an objective.
  • Selection criteria: State how you’ll decide which studies to include or exclude from your review.
  • Search strategy: Discuss your plan for finding studies.
  • Analysis: Explain what information you’ll collect from the studies and how you’ll synthesize the data.

If you’re a professional seeking to publish your review, it’s a good idea to bring together an advisory committee . This is a group of about six people who have experience in the topic you’re researching. They can help you make decisions about your protocol.

It’s highly recommended to register your protocol. Registering your protocol means submitting it to a database such as PROSPERO or ClinicalTrials.gov .

Step 3: Search for all relevant studies

Searching for relevant studies is the most time-consuming step of a systematic review.

To reduce bias, it’s important to search for relevant studies very thoroughly. Your strategy will depend on your field and your research question, but sources generally fall into these four categories:

  • Databases: Search multiple databases of peer-reviewed literature, such as PubMed or Scopus . Think carefully about how to phrase your search terms and include multiple synonyms of each word. Use Boolean operators if relevant.
  • Handsearching: In addition to searching the primary sources using databases, you’ll also need to search manually. One strategy is to scan relevant journals or conference proceedings. Another strategy is to scan the reference lists of relevant studies.
  • Gray literature: Gray literature includes documents produced by governments, universities, and other institutions that aren’t published by traditional publishers. Graduate student theses are an important type of gray literature, which you can search using the Networked Digital Library of Theses and Dissertations (NDLTD) . In medicine, clinical trial registries are another important type of gray literature.
  • Experts: Contact experts in the field to ask if they have unpublished studies that should be included in your review.
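The advice above about synonyms and Boolean operators can be sketched programmatically: OR together the synonyms within each concept, then AND the concepts together. A minimal illustration (the terms and quoting style are hypothetical; real databases such as PubMed have their own field tags and syntax):

```python
def boolean_query(concept_groups):
    """Combine synonym groups into a Boolean search string:
    synonyms within a group are OR'd, groups are AND'd."""
    clauses = []
    for synonyms in concept_groups:
        clauses.append("(" + " OR ".join(f'"{s}"' for s in synonyms) + ")")
    return " AND ".join(clauses)

# Hypothetical concept groups for the probiotics/eczema question
groups = [
    ["probiotic", "probiotics", "lactobacillus"],
    ["eczema", "atopic dermatitis"],
]
print(boolean_query(groups))
# → ("probiotic" OR "probiotics" OR "lactobacillus") AND ("eczema" OR "atopic dermatitis")
```

A librarian can help you translate a query like this into each database's native syntax, including controlled vocabulary such as MeSH terms.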

At this stage of your review, you won’t read the articles yet. Simply save any potentially relevant citations using bibliographic software, such as Scribbr’s APA or MLA Generator .

In the example review, Boyle and colleagues searched the following sources:

  • Databases: EMBASE, PsycINFO, AMED, LILACS, and ISI Web of Science
  • Handsearch: Conference proceedings and reference lists of articles
  • Gray literature: The Cochrane Library, the metaRegister of Controlled Trials, and the Ongoing Skin Trials Register
  • Experts: Authors of unpublished registered trials, pharmaceutical companies, and manufacturers of probiotics

Step 4: Apply the selection criteria

Applying the selection criteria is a three-person job. Two of you will independently read the studies and decide which to include in your review based on the selection criteria you established in your protocol . The third person’s job is to break any ties.

To increase inter-rater reliability , ensure that everyone thoroughly understands the selection criteria before you begin.

If you’re writing a systematic review as a student for an assignment, you might not have a team. In this case, you’ll have to apply the selection criteria on your own; you can mention this as a limitation in your paper’s discussion.
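Inter-rater reliability of screening decisions is commonly quantified with Cohen's kappa, which corrects the observed agreement between two reviewers for the agreement expected by chance. A minimal sketch with hypothetical include/exclude decisions:

```python
def cohens_kappa(rater_a, rater_b):
    """Cohen's kappa for two raters making include/exclude decisions."""
    assert len(rater_a) == len(rater_b)
    n = len(rater_a)
    # Observed proportion of agreement
    observed = sum(a == b for a, b in zip(rater_a, rater_b)) / n
    # Chance agreement from each rater's marginal proportions
    categories = set(rater_a) | set(rater_b)
    expected = sum(
        (rater_a.count(c) / n) * (rater_b.count(c) / n) for c in categories
    )
    return (observed - expected) / (1 - expected)

a = ["include", "exclude", "exclude", "include", "exclude", "exclude"]
b = ["include", "exclude", "include", "include", "exclude", "exclude"]
print(round(cohens_kappa(a, b), 2))  # → 0.67
```

Values above roughly 0.6 are conventionally read as substantial agreement; if kappa is low, revisit the selection criteria with the team before continuing.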

You should apply the selection criteria in two phases:

  • Based on the titles and abstracts : Decide whether each article potentially meets the selection criteria based on the information provided in the abstracts.
  • Based on the full texts: Download the articles that weren’t excluded during the first phase. If an article isn’t available online or through your library, you may need to contact the authors to ask for a copy. Read the articles and decide which articles meet the selection criteria.

It’s very important to keep a meticulous record of why you included or excluded each article. When the selection process is complete, you can summarize what you did using a PRISMA flow diagram .
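The bookkeeping behind a PRISMA flow diagram is simple arithmetic, but it pays to make the counts reconcile explicitly at every stage. A sketch with hypothetical numbers:

```python
# Hypothetical screening counts for a PRISMA 2020 flow diagram
records_identified = 1200  # from databases and registers
duplicates_removed = 260
records_screened = records_identified - duplicates_removed
records_excluded = 850     # excluded on title/abstract
fulltexts_assessed = records_screened - records_excluded
fulltexts_excluded = 72    # excluded on full text, with reasons recorded
studies_included = fulltexts_assessed - fulltexts_excluded

# Counts at each stage must reconcile exactly
assert records_screened == 940
assert fulltexts_assessed == 90
assert studies_included == 18
print(f"{studies_included} studies included in the review")
```

Keeping a running tally like this as you screen makes the final flow diagram a transcription exercise rather than a reconstruction.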

Next, Boyle and colleagues found the full texts for each of the remaining studies. Boyle and Tang read through the articles to decide if any more studies needed to be excluded based on the selection criteria.

When Boyle and Tang disagreed about whether a study should be excluded, they discussed it with Varigos until the three researchers came to an agreement.

Step 5: Extract the data

Extracting the data means collecting information from the selected studies in a systematic way. There are two types of information you need to collect from each study:

  • Information about the study’s methods and results . The exact information will depend on your research question, but it might include the year, study design , sample size, context, research findings , and conclusions. If any data are missing, you’ll need to contact the study’s authors.
  • Your judgment of the quality of the evidence, including risk of bias .

You should collect this information using forms. You can find sample forms in The Registry of Methods and Tools for Evidence-Informed Decision Making and the Grading of Recommendations, Assessment, Development and Evaluations Working Group .

Extracting the data is also a three-person job. Two people should do this step independently, and the third person will resolve any disagreements.

In the example review, Boyle and colleagues also collected data about possible sources of bias, such as how the study participants were randomized into the control and treatment groups.

Step 6: Synthesize the data

Synthesizing the data means bringing together the information you collected into a single, cohesive story. There are two main approaches to synthesizing the data:

  • Narrative ( qualitative ): Summarize the information in words. You’ll need to discuss the studies and assess their overall quality.
  • Quantitative : Use statistical methods to summarize and compare data from different studies. The most common quantitative approach is a meta-analysis , which allows you to combine results from multiple studies into a summary result.

Generally, you should use both approaches together whenever possible. If you don’t have enough data, or the data from different studies aren’t comparable, then you can take just a narrative approach. However, you should justify why a quantitative approach wasn’t possible.

Boyle and colleagues also divided the studies into subgroups, such as studies about babies, children, and adults, and analyzed the effect sizes within each group.

Step 7: Write and publish a report

The purpose of writing a systematic review article is to share the answer to your research question and explain how you arrived at this answer.

Your article should include the following sections:

  • Abstract : A summary of the review
  • Introduction : Including the rationale and objectives
  • Methods : Including the selection criteria, search method, data extraction method, and synthesis method
  • Results : Including results of the search and selection process, study characteristics, risk of bias in the studies, and synthesis results
  • Discussion : Including interpretation of the results and limitations of the review
  • Conclusion : The answer to your research question and implications for practice, policy, or research

To verify that your report includes everything it needs, you can use the PRISMA checklist .

Once your report is written, you can publish it in a systematic review database, such as the Cochrane Database of Systematic Reviews , and/or in a peer-reviewed journal.

In their report, Boyle and colleagues concluded that probiotics cannot be recommended for reducing eczema symptoms or improving quality of life in patients with eczema.

Note: Generative AI tools like ChatGPT can be useful at various stages of the writing and research process and can help you to write your systematic review. However, we strongly advise against trying to pass AI-generated text off as your own work.

If you want to know more about statistics , methodology , or research bias , make sure to check out some of our other articles with explanations and examples.

  • Student’s t-distribution
  • Normal distribution
  • Null and Alternative Hypotheses
  • Chi square tests
  • Confidence interval
  • Quartiles & Quantiles
  • Cluster sampling
  • Stratified sampling
  • Data cleansing
  • Reproducibility vs Replicability
  • Peer review
  • Prospective cohort study

Research bias

  • Implicit bias
  • Cognitive bias
  • Placebo effect
  • Hawthorne effect
  • Hindsight bias
  • Affect heuristic
  • Social desirability bias

A literature review is a survey of scholarly sources (such as books, journal articles, and theses) related to a specific topic or research question .

It is often written as part of a thesis, dissertation , or research paper , in order to situate your work in relation to existing knowledge.

A literature review is a survey of credible sources on a topic, often used in dissertations , theses, and research papers . Literature reviews give an overview of knowledge on a subject, helping you identify relevant theories and methods, as well as gaps in existing research. Literature reviews are set up similarly to other  academic texts , with an introduction , a main body, and a conclusion .

An annotated bibliography is a list of source references that has a short description (called an annotation) for each of the sources. It is often assigned as part of the research process for a paper.

A systematic review is secondary research because it uses existing research. You don’t collect new data yourself.

Cite this Scribbr article

If you want to cite this source, you can copy and paste the citation or click the “Cite this Scribbr article” button to automatically add the citation to our free Citation Generator.

Turney, S. (2023, November 20). Systematic Review | Definition, Example & Guide. Scribbr. Retrieved February 26, 2024, from https://www.scribbr.com/methodology/systematic-review/


Other students also liked:

  • How to Write a Literature Review | Guide, Examples, & Templates
  • How to Write a Research Proposal | Examples & Templates
  • What Is Critical Thinking? | Definition & Examples

  • UNC Libraries
  • HSL Academic Process
  • Systematic Reviews

Systematic Reviews: Home

Created by health science librarians.


  • Systematic review resources

What is a Systematic Review?

  • A Simplified Process Map
  • How Can the Library Help?
  • Systematic Reviews in Non-Health Disciplines
  • Resources for Performing Systematic Reviews

  • Step 1: Complete Pre-Review Tasks
  • Step 2: Develop a Protocol
  • Step 3: Conduct Literature Searches
  • Step 4: Manage Citations
  • Step 5: Screen Citations
  • Step 6: Assess Quality of Included Studies
  • Step 7: Extract Data from Included Studies
  • Step 8: Write the Review

  • Check our FAQs
  • Email us
  • Chat with us (during business hours)
  • Call (919) 962-0800
  • Make an appointment with a librarian
  • Request a systematic or scoping review consultation
  • Sign up for a systematic review workshop or watch a recording

There are many types of literature reviews.

Before beginning a systematic review, consider whether it is the best type of review for your question, goals, and resources. The table below compares a few different types of reviews to help you decide which is best for you. 

  • Scoping Review Guide - For more information about scoping reviews, refer to the UNC HSL Scoping Review Guide.

Systematic Reviews: A Simplified, Step-by-Step Process

  • Step 1: Pre-Review. Common tasks include formulating a team, developing research question(s), and scoping the literature for published systematic reviews on the topic. Librarians can provide substantial support for Step 1.
  • Step 2: Develop Protocol. Common tasks include determining eligibility criteria, selecting quality assessment tools and items for data extraction, writing the protocol, and making the protocol accessible via a website or registry.
  • Step 3: Conduct Literature Searches. Common tasks include partnering with a librarian, searching multiple databases, performing other searching methods like hand searching, and locating grey literature or other unpublished research. Librarians can provide substantial support for Step 3.
  • Step 4: Manage Citations. Common tasks include exporting citations to a citation manager such as EndNote, preparing a PRISMA flow chart with numbers of citations at each step, updating as necessary, and de-duplicating citations and uploading them to a screening tool such as Covidence. Librarians can provide substantial support for Step 4.
  • Step 5: Screen Citations. Common tasks include screening the titles and abstracts of citations against the inclusion criteria with at least two reviewers, then locating full texts and screening citations that meet the inclusion criteria with at least two reviewers. UNC Health Sciences Library (HSL) librarians can provide support with using AI or other automation approaches to reduce the volume of literature that must be screened manually; reach out to HSL for more information.
  • Step 6: Conduct Quality Assessment. Common tasks include performing quality assessments, like a critical appraisal, of the included studies.
  • Step 7: Complete Data Extraction. Common tasks include extracting data from included studies and creating tables of studies for the manuscript.
  • Step 8: Write Review. Common tasks include consulting the PRISMA checklist or other reporting standard, writing the manuscript, and organizing supplementary materials. Librarians can provide substantial support for Step 8.

  • UNC HSL's Simplified, Step-by-Step Process Map - A PDF file of the HSL's Systematic Review Process Map.
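De-duplication (Step 4 above) is typically handled by the citation manager or screening tool, but the underlying idea is straightforward: match records on DOI when one is present, otherwise on a normalized title. A minimal sketch with hypothetical records:

```python
def deduplicate(citations):
    """Remove duplicate citations, matching on DOI when present,
    otherwise on a whitespace/case-normalized title."""
    seen, unique = set(), []
    for c in citations:
        key = c.get("doi") or " ".join(c["title"].lower().split())
        if key not in seen:
            seen.add(key)
            unique.append(c)
    return unique

records = [
    {"title": "Probiotics for eczema", "doi": "10.1000/xyz1"},
    {"title": "Probiotics for Eczema ", "doi": "10.1000/xyz1"},  # same DOI
    {"title": "A trial of probiotics", "doi": None},
    {"title": "a  trial of probiotics", "doi": None},            # same title
]
print(len(deduplicate(records)))  # → 2
```

Real tools also fuzzy-match on author, year, and journal, since titles and DOIs are often inconsistently recorded across databases.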

The average systematic review takes 1,168 hours to complete.¹ A librarian can help you speed up the process.

Systematic reviews follow established guidelines and best practices to produce high-quality research. Librarian involvement in systematic reviews is offered at two levels. In Tier 1, the librarian collaborates with researchers in a consultative manner. In Tier 2, the librarian is an active member of your research team and a co-author on your review. Roles and expectations of librarians vary based on the level of involvement desired. Examples of these differences are outlined in the table below.

  • Request a systematic or scoping review consultation

Researchers are conducting systematic reviews in a variety of disciplines.  If your focus is on a topic other than health sciences, you may want to also consult the resources below to learn how systematic reviews may vary in your field.  You can also contact a librarian for your discipline with questions.

  • EPPI-Centre methods for conducting systematic reviews The EPPI-Centre develops methods and tools for conducting systematic reviews, including reviews for education, public and social policy.


Environmental Topics

  • Collaboration for Environmental Evidence (CEE) - CEE seeks to promote and deliver evidence syntheses on issues of greatest concern to environmental policy and practice as a public service.

Social Sciences


  • Siddaway AP, Wood AM, Hedges LV. How to Do a Systematic Review: A Best Practice Guide for Conducting and Reporting Narrative Reviews, Meta-Analyses, and Meta-Syntheses. Annu Rev Psychol. 2019 Jan 4;70:747-770. doi: 10.1146/annurev-psych-010418-102803. A resource for psychology systematic reviews, which also covers qualitative meta-syntheses or meta-ethnographies.
  • The Campbell Collaboration

Social Work


Software engineering

  • Guidelines for Performing Systematic Literature Reviews in Software Engineering The objective of this report is to propose comprehensive guidelines for systematic literature reviews appropriate for software engineering researchers, including PhD students.


Sport, Exercise, & Nutrition

  • Application of systematic review methodology to the field of nutrition by Tufts Evidence-based Practice Center (2009)
  • Systematic Reviews and Meta-Analysis — Open & Free (Open Learning Initiative) - The course follows guidelines and standards developed by the Campbell Collaboration, based on empirical evidence about how to produce the most comprehensive and accurate reviews of research
  • Systematic Reviews by David Gough, Sandy Oliver & James Thomas (2020)

Updating reviews

  • Updating systematic reviews by University of Ottawa Evidence-based Practice Center (2007)

Looking for our previous Systematic Review guide?

Our legacy guide was in use from June 2020 to August 2022.

  • Systematic Review Legacy Guide
  • Last Updated: Feb 8, 2024 9:22 AM
  • URL: https://guides.lib.unc.edu/systematic-reviews



How to write a systematic literature review [9 steps]

Systematic literature review

What is a systematic literature review?

  • Where are systematic literature reviews used?
  • What types of systematic literature reviews are there?
  • How to write a systematic literature review
    1. Decide on your team
    2. Formulate your question
    3. Plan your research protocol
    4. Search for the literature
    5. Screen the literature
    6. Assess the quality of the studies
    7. Extract the data
    8. Analyze the results
    9. Interpret and present the results
  • Registering your systematic literature review
  • Frequently asked questions about writing a systematic literature review
  • Related articles

A systematic literature review is a summary, analysis, and evaluation of all the existing research on a well-formulated and specific question.

Put simply, a systematic review is a study of studies that is popular in medical and healthcare research. In this guide, we will cover:

  • the definition of a systematic literature review
  • the purpose of a systematic literature review
  • the different types of systematic reviews
  • how to write a systematic literature review

➡️ Visit our guide to the best research databases for medicine and health to find resources for your systematic review.

Systematic literature reviews can be utilized in various contexts, but they’re often relied on in clinical or healthcare settings.

Medical professionals read systematic literature reviews to stay up-to-date in their field, and granting agencies sometimes require them to ensure there’s justification for further research in an area. They can even be used as the starting point for developing clinical practice guidelines.

A classic systematic literature review can take different approaches:

  • Effectiveness reviews assess the extent to which a medical intervention or therapy achieves its intended effect. They’re the most common type of systematic literature review.
  • Diagnostic test accuracy reviews produce a summary of diagnostic test performance so that their accuracy can be determined before use by healthcare professionals.
  • Experiential (qualitative) reviews analyze human experiences in a cultural or social context. They can be used to assess the effectiveness of an intervention from a person-centric perspective.
  • Costs/economics evaluation reviews look at the cost implications of an intervention or procedure, to assess the resources needed to implement it.
  • Etiology/risk reviews usually try to determine to what degree a relationship exists between an exposure and a health outcome. This can be used to better inform healthcare planning and resource allocation.
  • Psychometric reviews assess the quality of health measurement tools so that the best instrument can be selected for use.
  • Prevalence/incidence reviews measure both the proportion of a population who have a disease, and how often the disease occurs.
  • Prognostic reviews examine the course of a disease and its potential outcomes.
  • Expert opinion/policy reviews are based around expert narrative or policy. They’re often used to complement, or in the absence of, quantitative data.
  • Methodology systematic reviews can be carried out to analyze any methodological issues in the design, conduct, or review of research studies.

Writing a systematic literature review can feel like an overwhelming undertaking. After all, they can often take 6 to 18 months to complete. Below we’ve prepared a step-by-step guide on how to write a systematic literature review.

  • Decide on your team.
  • Formulate your question.
  • Plan your research protocol.
  • Search for the literature.
  • Screen the literature.
  • Assess the quality of the studies.
  • Extract the data.
  • Analyze the results.
  • Interpret and present the results.

When carrying out a systematic literature review, you should employ multiple reviewers in order to minimize bias and strengthen analysis. A minimum of two is a good rule of thumb, with a third to serve as a tiebreaker if needed.

You may also need to team up with a librarian to help with the search, literature screeners, a statistician to analyze the data, and the relevant subject experts.

Define your answerable question. Then ask yourself, “has someone written a systematic literature review on my question already?” If so, yours may not be needed. A librarian can help you answer this.

You should formulate a “well-built clinical question.” This is the process of generating a good search question. To do this, run through PICO:

  • Patient or Population or Problem/Disease : who or what is the question about? Are there factors about them (e.g. age, race) that could be relevant to the question you’re trying to answer?
  • Intervention : which main intervention or treatment are you considering for assessment?
  • Comparison(s) or Control : is there an alternative intervention or treatment you’re considering? Your systematic literature review doesn’t have to contain a comparison, but you’ll want to stipulate at this stage, either way.
  • Outcome(s) : what are you trying to measure or achieve? What’s the wider goal for the work you’ll be doing?

Now you need a detailed strategy for how you’re going to search for and evaluate the studies relating to your question.

The protocol for your systematic literature review should include:

  • the objectives of your project
  • the specific methods and processes that you’ll use
  • the eligibility criteria of the individual studies
  • how you plan to extract data from individual studies
  • which analyses you’re going to carry out

For a full guide on how to systematically develop your protocol, take a look at the PRISMA checklist . PRISMA has been designed primarily to improve the reporting of systematic literature reviews and meta-analyses.

When writing a systematic literature review, your goal is to find all of the relevant studies relating to your question, so you need to search thoroughly .

This is where your librarian will come in handy again. They should be able to help you formulate a detailed search strategy, and point you to all of the best databases for your topic.

➡️ Read more on how to efficiently search research databases .

The places to consider in your search are electronic scientific databases (the most popular are PubMed , MEDLINE , and Embase ), controlled clinical trial registers, non-English literature, raw data from published trials, references listed in primary sources, and unpublished sources known to experts in the field.

➡️ Take a look at our list of the top academic research databases .

Tip: Don’t miss out on “gray literature.” You’ll improve the reliability of your findings by including it.

Don’t miss out on “gray literature” sources: those sources outside of the usual academic publishing environment. They include:

  • non-peer-reviewed journals
  • pharmaceutical industry files
  • conference proceedings
  • pharmaceutical company websites
  • internal reports

Gray literature sources are more likely to contain negative conclusions, so including them will improve the reliability of your findings. Throughout your search, you should document details such as:

  • The databases you search and which years they cover
  • The dates you first run the searches, and when they’re updated
  • Which strategies you use, including search terms
  • The numbers of results obtained
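As a minimal sketch, the documentation points above can be captured in a simple search log. The field names, query, and result count here are illustrative, not a standard:

```python
import csv
import io
from datetime import date

# Hypothetical search-log fields mirroring the list above; adapt to your protocol.
FIELDS = ["database", "coverage_years", "date_run", "search_strategy", "n_results"]

def log_search(writer, database, coverage_years, strategy, n_results, run_date):
    """Record one database search as a row in the log."""
    writer.writerow({
        "database": database,
        "coverage_years": coverage_years,
        "date_run": run_date.isoformat(),
        "search_strategy": strategy,
        "n_results": n_results,
    })

buf = io.StringIO()
log = csv.DictWriter(buf, fieldnames=FIELDS)
log.writeheader()
log_search(log, "PubMed", "1966-present",
           '"neoadjuvant chemotherapy" AND systematic[sb]', 412,
           date(2024, 2, 1))
print(buf.getvalue())
```

When searches are updated later in the project, appending new rows preserves exactly when each strategy was run and what it returned.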

➡️ Read more about gray literature .

This should be performed by your two reviewers, using the criteria documented in your research protocol. The screening is done in two phases:

  • Pre-screening of all titles and abstracts, and selecting those appropriate
  • Screening of the full-text articles of the selected studies

Make sure reviewers keep a log of which studies they exclude, with reasons why.

➡️ Visit our guide on what is an abstract?

Your reviewers should evaluate the methodological quality of your chosen full-text articles. Make an assessment checklist that closely aligns with your research protocol, including a consistent scoring system, calculations of the quality of each study, and sensitivity analysis.

The kinds of questions you'll ask include:

  • Were the participants really randomly allocated to their groups?
  • Were the groups similar in terms of prognostic factors?
  • Could the conclusions of the study have been influenced by bias?

Every step of the data extraction must be documented for transparency and replicability. Create a data extraction form and set your reviewers to work extracting data from the qualified studies.

Here’s a free detailed template for recording data extraction, from Dalhousie University. It should be adapted to your specific question.

Establish a standard measure of outcome which can be applied to each study on the basis of its effect size.

Measures of outcome for studies with:

  • Binary outcomes (e.g. cured/not cured) are odds ratio and risk ratio
  • Continuous outcomes (e.g. blood pressure) are means, difference in means, and standardized difference in means
  • Survival or time-to-event data are hazard ratios
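For the binary-outcome measures above, the arithmetic reduces to a 2x2 table. A minimal sketch with hypothetical counts:

```python
# 2x2 table for a binary outcome (e.g., cured / not cured):
#              events   non-events
# treatment      a          b
# control        c          d

def risk_ratio(a, b, c, d):
    """Risk in the treatment group divided by risk in the control group."""
    return (a / (a + b)) / (c / (c + d))

def odds_ratio(a, b, c, d):
    """Odds in the treatment group divided by odds in the control group."""
    return (a / b) / (c / d)

# Hypothetical trial: 30 of 100 cured with treatment, 20 of 100 with control.
print(round(risk_ratio(30, 70, 20, 80), 2))  # → 1.5
print(round(odds_ratio(30, 70, 20, 80), 2))  # → 1.71
```

Note that the odds ratio exaggerates relative to the risk ratio when the outcome is common, which is why the choice of measure should be fixed in the protocol.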

Design a table and populate it with your data results. Draw this out into a forest plot , which provides a simple visual representation of variation between the studies.

Then analyze the data for issues. These can include heterogeneity, i.e., variation in study results beyond what chance would explain; a visual cue is when a study's confidence interval in the forest plot doesn't overlap with those of the other studies. Again, record any excluded studies here for reference.
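A forest plot summarizes individual study effects around a pooled estimate. As a hedged sketch of that pooling step, here is fixed-effect (inverse-variance) pooling with hypothetical log effect sizes and standard errors; a real analysis would use a dedicated meta-analysis package and test heterogeneity formally:

```python
import math

# (label, log effect size, standard error) — hypothetical studies
studies = [
    ("Study A", 0.40, 0.20),
    ("Study B", 0.10, 0.15),
    ("Study C", 0.25, 0.30),
]

# Inverse-variance weights: more precise studies (smaller SE) count for more.
weights = [1 / se ** 2 for _, _, se in studies]
pooled = sum(w * y for (_, y, _), w in zip(studies, weights)) / sum(weights)
pooled_se = math.sqrt(1 / sum(weights))
ci_low, ci_high = pooled - 1.96 * pooled_se, pooled + 1.96 * pooled_se

print(round(pooled, 3), (round(ci_low, 3), round(ci_high, 3)))
```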

Consider different factors when interpreting your results. These include limitations, strength of evidence, biases, applicability, economic effects, and implications for future practice or research.

Apply appropriate grading of your evidence and consider the strength of your recommendations.

It’s best to formulate a detailed plan for how you’ll present your systematic review results. Take a look at these guidelines for interpreting results from the Cochrane Institute.

Before writing your systematic literature review, you can register it with OSF for additional guidance along the way. You could also register your completed work with PROSPERO .

Systematic literature reviews are often found in clinical or healthcare settings. Medical professionals read systematic literature reviews to stay up-to-date in their field and granting agencies sometimes need them to make sure there’s justification for further research in an area.

The first stage in carrying out a systematic literature review is to put together your team. You should employ multiple reviewers in order to minimize bias and strengthen analysis. A minimum of two is a good rule of thumb, with a third to serve as a tiebreaker if needed.

Your systematic review should include the following details:

A literature review simply provides a summary of the literature available on a topic. A systematic review, on the other hand, is more than just a summary. It also includes an analysis and evaluation of existing research. Put simply, it's a study of studies.

The final stage of conducting a systematic literature review is interpreting and presenting the results. It's best to formulate a detailed plan for how you'll present your systematic review results; guidelines are available, for example, from the Cochrane Institute .


  • UCLA Library
  • Research Guides
  • Biomedical Library Guides

Systematic Reviews

  • Types of Literature Reviews

What Makes a Systematic Review Different from Other Types of Reviews?

  • Planning Your Systematic Review
  • Database Searching
  • Creating the Search
  • Search Filters & Hedges
  • Grey Literature
  • Managing & Appraising Results
  • Further Resources

Reproduced from Grant, M. J. and Booth, A. (2009), A typology of reviews: an analysis of 14 review types and associated methodologies. Health Information & Libraries Journal, 26: 91–108. doi:10.1111/j.1471-1842.2009.00848.x

  • Last Updated: Feb 27, 2024 8:43 AM
  • URL: https://guides.library.ucla.edu/systematicreviews

Systematic Reviews and Meta Analysis

  • Getting Started
  • Guides and Standards
  • Review Protocols
  • Databases and Sources
  • Randomized Controlled Trials
  • Controlled Clinical Trials
  • Observational Designs
  • Tests of Diagnostic Accuracy
  • Software and Tools
  • Where do I get all those articles?
  • Collaborations
  • EPI 233/528
  • Countway Mediated Search
  • Risk of Bias (RoB)

Systematic review Q & A

What is a systematic review?

A systematic review is a guided filtering and synthesis of all available evidence addressing a specific, focused research question, generally about a specific intervention or exposure. The use of standardized, systematic methods and pre-selected eligibility criteria reduces the risk of bias in identifying, selecting, and analyzing relevant studies. A well-designed systematic review includes clear objectives, pre-selected criteria for identifying eligible studies, an explicit methodology, a thorough and reproducible search of the literature, an assessment of the validity or risk of bias of each included study, and a systematic synthesis, analysis, and presentation of the findings of the included studies. A systematic review may include a meta-analysis.

For details about carrying out systematic reviews, see the Guides and Standards section of this guide.

Is my research topic appropriate for systematic review methods?

A systematic review is best deployed to test a specific hypothesis about a healthcare or public health intervention or exposure. By focusing on a single intervention or a few specific interventions for a particular condition, the investigator can ensure a manageable results set. Moreover, examining a single or small set of related interventions, exposures, or outcomes, will simplify the assessment of studies and the synthesis of the findings.

Systematic reviews are poor tools for hypothesis generation: for instance, to determine what interventions have been used to increase the awareness and acceptability of a vaccine or to investigate the ways that predictive analytics have been used in health care management. In the first case, we don't know what interventions to search for and so have to screen all the articles about awareness and acceptability. In the second, there is no agreed on set of methods that make up predictive analytics, and health care management is far too broad. The search will necessarily be incomplete, vague and very large all at the same time. In most cases, reviews without clearly and exactly specified populations, interventions, exposures, and outcomes will produce results sets that quickly outstrip the resources of a small team and offer no consistent way to assess and synthesize findings from the studies that are identified.

If not a systematic review, then what?

You might consider performing a scoping review . This framework allows iterative searching over a reduced number of data sources and imposes no requirement to assess individual studies for risk of bias. The framework includes built-in mechanisms to adjust the analysis as the work progresses and more is learned about the topic. A scoping review won't help you limit the number of records you'll need to screen (broad questions lead to large results sets) but may give you a means of dealing with a large set of results.

This tool can help you decide what kind of review is right for your question.

Can my student complete a systematic review during her summer project?

Probably not. Systematic reviews are a lot of work. Between creating the protocol, building and running a quality search, collecting all the papers, evaluating the studies that meet the inclusion criteria, and extracting and analyzing the summary data, a well-done review can require dozens to hundreds of hours of work spanning several months. Moreover, a systematic review requires subject expertise, statistical support, and a librarian to help design and run the search. Be aware that librarians sometimes have queues for their search time; it may take several weeks to complete and run a search. In addition, all guidelines for carrying out systematic reviews recommend that at least two subject experts screen the studies identified in the search. The first round of screening can consume 1 hour per screener for every 100-200 records. A systematic review is a labor-intensive team effort.

How can I know if my topic has been reviewed already?

Before starting out on a systematic review, check to see if someone has done it already. In PubMed you can use the systematic review subset to limit to a broad group of papers that is enriched for systematic reviews. You can invoke the subset by selecting it from the Article Types filters to the left of your PubMed results, or you can append AND systematic[sb] to your search. For example:

"neoadjuvant chemotherapy" AND systematic[sb]

The systematic review subset is very noisy, however. To quickly focus on systematic reviews (knowing that you may be missing some), simply search for the word systematic in the title:

"neoadjuvant chemotherapy" AND systematic[ti]

Any PRISMA-compliant systematic review will be captured by this method since including the words "systematic review" in the title is a requirement of the PRISMA checklist. Cochrane systematic reviews do not include 'systematic' in the title, however. It's worth checking the Cochrane Database of Systematic Reviews independently.
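The two filter variants above follow a simple pattern and can be assembled programmatically. The `systematic[sb]` and `systematic[ti]` field tags are PubMed's own syntax; the helper function itself is a hypothetical convenience:

```python
def pubmed_query(topic, mode="subset"):
    """Build a PubMed query using the systematic-review subset or a title-word filter."""
    tag = {"subset": "systematic[sb]", "title": "systematic[ti]"}[mode]
    return f'"{topic}" AND {tag}'

print(pubmed_query("neoadjuvant chemotherapy"))                # broad but noisy
print(pubmed_query("neoadjuvant chemotherapy", mode="title"))  # focused, may miss some
```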

You can also search for protocols that will indicate that another group has set out on a similar project. Many investigators will register their protocols in PROSPERO , a registry of review protocols. Other published protocols as well as Cochrane Review protocols appear in the Cochrane Methodology Register, a part of the Cochrane Library .

  • Last Updated: Feb 26, 2024 3:17 PM
  • URL: https://guides.library.harvard.edu/meta-analysis

Purdue University

  • Ask a Librarian

Systematic Reviews

  • Choosing a Review
  • Getting Started
  • Further Resources
  • Sys Review Request Form

What is a Systematic Review?

A Systematic Review is a very specialized and in-depth literature review. It is a research method in its own right, in which information from research studies is aggregated, analyzed, and synthesized in an unbiased and reproducible manner to answer a research question. The SR can provide evidence for practice and policy-making as well as reveal gaps in research. In the medical field, where systematic reviews are most common, there are a variety of standard protocols for conducting an SR.

Generally, a systematic review looks at a very specific question: for example, how effective a particular medical treatment is for a specific population with a given ailment, or how effective a teaching method is for a certain topic in a particular setting.

Systematic reviews are very time intensive and typically require a multi-person research team.  Thus, it is important for you to determine whether a systematic review is right for you.

SR Process Diagram

Librarians can...

Generally speaking, our SR consultations entail 1 or 2 hour-long sessions. We ask that you prepare by filling out our request form .

There are three levels of consultation support we provide. The first two levels are part of our standard service, but assistance at the third level is available only as our capacity allows, and at the discretion of the librarian. Co-authorship acknowledgement is expected for third-level assistance.

First Level/Session: Guidance on the feasibility of a systematic review of the topic.

  • Provide guidance on choosing a review type
  • Advise on constructing a research question
  • Identify relevant databases for you to search
  • Consult on initial search strategies to improve your results
  • Make suggestions on reference management tools
  • Locate existing SRs that could be models for your work

Second Level/Session:  Follow-up on initial work

  • Feedback on your initial searches and database selection, including appropriateness of terms, strategies, completeness, and accuracy.

Third Level Support:  If you need further support, you may request that the librarian participate as a co-author on the project. These co-author-level activities include:

  • Contributing to completing the SR process flow chart for the project
  • Designing and refining in-depth search strategies for the team
  • Training the team on a screening platform like Covidence or Rayyan
  • Running searches in appropriate databases
  • Continually updating search results using alerts
  • Writing the Methods section of the SR paper
  • Reviewing final manuscript
  • Last Edited: Dec 4, 2023 1:53 PM
  • URL: https://guides.lib.purdue.edu/sys-review
  • University of Michigan Library
  • Research Guides

Systematic Reviews

What is a systematic review?

Systematic review and meta-analysis process

  • Work with a Search Expert
  • Covidence Review Software
  • Types of Reviews
  • Evidence in a Systematic Review
  • Information Sources
  • Search Strategy
  • Managing Records
  • Selection Process
  • Data Collection Process
  • Study Risk of Bias Assessment
  • Reporting Results
  • For Search Professionals

A systematic review is a comprehensive literature search and synthesis project that tries to answer a well-defined question using existing primary research as evidence. A protocol is used to plan the systematic review methods prior to the project, including what is and is not included in the search.

Systematic reviews are often used as the foundation for a meta-analysis (a statistical process that combines the findings from individual studies) and to re-evaluate clinical guidelines.

Systematic review and meta-analysis are both types of evidence synthesis methods. Read more about evidence synthesis on the Types of Reviews page of this guide.

Advanced Literature Searching in the Health Sciences MOOC (Massive Open Online Course)

  • Covers fundamental components of advanced literature searching for projects such as systematic reviews, scoping reviews, and clinical practice guidelines
  • 9 weeks of content with approximately 2-4 hours of effort per week
  • Free with verified certificate available for a fee
  • Designed by informationists at the University of Michigan Taubman Health Sciences Library

Introduction to Systematic Review and Meta-Analysis MOOC

  • Introduces methods and processes for systematic reviews and meta-analyses
  • 6 weeks of content with approximately 1-3 hours of effort per week
  • Free with certificate available for a fee
  • Offered by Johns Hopkins University

Figure 1 below gives a high-level overview of the stages of the meta-analysis process. Related evidence synthesis methods may omit steps in the meta-analysis process; for example, systematic reviews will not include Step 14 (meta-analyze).

[Figure 1: steps in the process of conducting a systematic review]

Note the iterative nature of the process as search updates are conducted later in the project at Step 13 (an arrow on the left connects to Step 6 de-duplicate).

While this figure highlights appraisal of relevance in Steps 7 (screen abstracts) and 9 (screen full text), guidelines recommend critical appraisal of the individual study's validity and results once it is selected for inclusion.

1 Tsafnat, G., Glasziou, P., Choong, M.K., et al. Systematic review automation technologies. Systematic Reviews 2014; 3:74; http://www.systematicreviewsjournal.com/content/3/1/74 (adaptation of original image).


How-to conduct a systematic literature review: A quick guide for computer science research

Angela Carrera-Rivera

a Faculty of Engineering, Mondragon University

William Ochoa

Felix Larrinaga

b Design Innovation Center (DBZ), Mondragon University

Associated Data

  • No data was used for the research described in the article.

Performing a literature review is a critical first step in research for understanding the state of the art and identifying gaps and challenges in the field. A systematic literature review is a method that sets out a series of steps to methodically organize the review. In this paper, we present a guide designed for researchers, and in particular early-stage researchers, in the computer-science field. The contribution of the article is the following:

  • • Clearly defined strategies to follow for a systematic literature review in computer science research, and
  • • Algorithmic method to tackle a systematic literature review.

Graphical abstract


Specifications table

Method details

A Systematic Literature Review (SLR) is a research methodology to collect, identify, and critically analyze the available research studies (e.g., articles, conference proceedings, books, dissertations) through a systematic procedure [12]. An SLR updates the reader with current literature about a subject [6]. The goal is to review critical points of current knowledge on a topic, framed around research questions, to suggest areas for further examination [5]. Defining an “Initial Idea” or interest in a subject to be studied is the first step before starting the SLR. An early search of the relevant literature can help determine whether the topic is too broad to adequately cover in the time frame and whether it is necessary to narrow the focus. Reading some articles can assist in setting the direction for a formal review, and formulating a potential research question (e.g., how is semantics involved in Industry 4.0?) can further facilitate this process. Once the focus has been established, an SLR can be undertaken to find more specific studies related to the variables in this question. Although there are multiple approaches for performing an SLR ([5], [26], [27]), this work aims to provide a step-by-step and practical guide while citing useful examples for computer-science research. The methodology presented in this paper comprises two main phases: “Planning”, described in Section 2, and “Conducting”, described in Section 3, following the depiction of the graphical abstract.

Defining the protocol is the first step of an SLR since it describes the procedures involved in the review and acts as a log of the activities to be performed. Obtaining opinions from peers while developing the protocol is encouraged, to ensure the review's consistency and validity and to help identify when modifications are necessary [20]. One final goal of the protocol is to ensure the replicability of the review.

Define PICOC and synonyms

The PICOC (Population, Intervention, Comparison, Outcome, and Context) criteria break down the SLR's objectives into searchable keywords and help formulate research questions [ 27 ]. PICOC is widely used in the medical and social sciences fields to encourage researchers to consider the components of the research questions [14] . Kitchenham & Charters [6] compiled the list of PICOC elements and their corresponding terms in computer science, as presented in Table 1 , which includes keywords derived from the PICOC elements. From that point on, it is essential to think of synonyms or “alike” terms that later can be used for building queries in the selected digital libraries. For instance, the keyword “context awareness” can also be linked to “context-aware”.

Planning Step 1 “Defining PICOC keywords and synonyms”.

Formulate research questions

Clearly defined research question(s) are the key elements which set the focus for study identification and data extraction [21] . These questions are formulated based on the PICOC criteria as presented in the example in Table 2 (PICOC keywords are underlined).

Research questions examples.

Select digital library sources

The validity of a study will depend on the proper selection of a database since it must adequately cover the area under investigation [19] . The Web of Science (WoS) is an international and multidisciplinary tool for accessing literature in science, technology, biomedicine, and other disciplines. Scopus is a database that today indexes 40,562 peer-reviewed journals, compared to 24,831 for WoS. Thus, Scopus is currently the largest existing multidisciplinary database. However, it may also be necessary to include sources relevant to computer science, such as EI Compendex, IEEE Xplore, and ACM. Table 3 compares the area of expertise of a selection of databases.

Planning Step 3 “Select digital libraries”. Description of digital libraries in computer science and software engineering.

Define inclusion and exclusion criteria

Authors should define the inclusion and exclusion criteria before conducting the review to prevent bias, although these can be adjusted later, if necessary. The selection of primary studies will depend on these criteria. Articles are included or excluded in this first selection based on abstract and primary bibliographic data. When unsure, the article is skimmed to further decide the relevance for the review. Table 4 sets out some criteria types with descriptions and examples.

Planning Step 4 “Define inclusion and exclusion criteria”. Examples of criteria type.

Define the Quality Assessment (QA) checklist

Assessing the quality of an article requires an artifact which describes how to perform a detailed assessment. A typical quality assessment is a checklist that contains multiple factors to evaluate. A numerical scale is used to assess the criteria and quantify the QA [22] . Zhou et al. [25] presented a detailed description of assessment criteria in software engineering, classified into four main aspects of study quality: Reporting, Rigor, Credibility, and Relevance. Each of these criteria can be evaluated using, for instance, a Likert-type scale [17] , as shown in Table 5 . It is essential to select the same scale for all criteria established on the quality assessment.

Planning Step 5 “Define QA assessment checklist”. Examples of QA scales and questions.

Define the “Data Extraction” form

The data extraction form represents the information necessary to answer the research questions established for the review. Synthesizing the articles is a crucial step when conducting research. Ramesh et al. [15] presented a classification scheme for computer science research, based on topics, research methods, and levels of analysis that can be used to categorize the articles selected. Classification methods and fields to consider when conducting a review are presented in Table 6 .

Planning Step 6 “Define data extraction form”. Examples of fields.

The data extraction must be relevant to the research questions, and the relationship to each of the questions should be included in the form. Kitchenham & Charters [6] presented more pertinent data that can be captured, such as conclusions, recommendations, strengths, and weaknesses. Although the data extraction form can be updated if more information is needed, this should be treated with caution since it can be time-consuming. It can therefore be helpful to first have a general background in the research topic to determine better data extraction criteria.

After defining the protocol, conducting the review requires following each of the steps previously described. Using tools can help simplify this task. Standard tools such as Excel or Google Sheets allow multiple researchers to work collaboratively. Another online tool specifically designed for performing SLRs is Parsif.al 1 . This tool allows researchers, especially in the context of software engineering, to define goals and objectives, import articles using BibTeX files, eliminate duplicates, define selection criteria, and generate reports.

Build digital library search strings

Search strings are built considering the PICOC elements and synonyms to execute the search in each database library. A search string should separate the synonyms with the boolean operator OR. In comparison, the PICOC elements are separated with parentheses and the boolean operator AND. An example is presented next:

(“Smart Manufacturing” OR “Digital Manufacturing” OR “Smart Factory”) AND (“Business Process Management” OR “BPEL” OR “BPM” OR “BPMN”) AND (“Semantic Web” OR “Ontology” OR “Semantic” OR “Semantic Web Service”) AND (“Framework” OR “Extension” OR “Plugin” OR “Tool”)
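The search string above follows a mechanical pattern (synonyms joined with OR inside parentheses, PICOC groups joined with AND), so it can be generated from the synonym lists; a minimal sketch:

```python
def build_search_string(groups):
    """Join synonyms with OR inside parentheses, and PICOC groups with AND."""
    clauses = ["(" + " OR ".join(f'"{term}"' for term in terms) + ")"
               for terms in groups]
    return " AND ".join(clauses)

query = build_search_string([
    ["Smart Manufacturing", "Digital Manufacturing", "Smart Factory"],
    ["Business Process Management", "BPEL", "BPM", "BPMN"],
    ["Semantic Web", "Ontology", "Semantic", "Semantic Web Service"],
    ["Framework", "Extension", "Plugin", "Tool"],
])
print(query)
```

Databases differ in quoting, wildcard, and field-tag syntax, so a generated string usually needs per-library adjustment before it is run.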

Gather studies

Databases that feature advanced searches enable researchers to perform search queries based on titles, abstracts, and keywords, as well as for years or areas of research. Fig. 1 presents the example of an advanced search in Scopus, using titles, abstracts, and keywords (TITLE-ABS-KEY). Most of the databases allow the use of logical operators (i.e., AND, OR). In the example, the search is for “BIG DATA” and “USER EXPERIENCE” or “UX” as a synonym.

Fig 1

Example of Advanced search on Scopus.

In general, bibliometric data of articles can be exported from the databases as a comma-separated-value (CSV) file or a BibTeX file, which is helpful for data extraction and quantitative and qualitative analysis. In addition, researchers should take advantage of reference-management software such as Zotero, Mendeley, EndNote, or JabRef, which import bibliographic information easily.

Study Selection and Refinement

The first step in this stage is to identify any duplicates that appear in the different searches in the selected databases. Automatic procedures, tools like Excel formulas, or programming languages (e.g., Python) can be convenient here.
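As a sketch of such a procedure in Python, de-duplication can key on the DOI when present and fall back to a normalized title; the records below are hypothetical:

```python
def dedupe(records):
    """Keep the first record for each DOI (or normalized title when DOI is missing)."""
    seen, unique = set(), []
    for rec in records:
        key = rec.get("doi") or rec["title"].casefold().strip()
        if key not in seen:
            seen.add(key)
            unique.append(rec)
    return unique

records = [
    {"title": "Context-aware BPM", "doi": "10.1000/x1"},
    {"title": "Context-Aware BPM", "doi": "10.1000/x1"},    # duplicate DOI
    {"title": "Semantic Web Services", "doi": None},
    {"title": "semantic web services ", "doi": None},       # duplicate title
]
print(len(dedupe(records)))  # → 2
```

Title matching is a heuristic; near-duplicates with small wording differences still need a manual pass.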

In the second step, articles are included or excluded according to the selection criteria, mainly by reading titles and abstracts. Finally, the quality is assessed using the predefined scale. Fig. 2 shows an example of an article QA evaluation in Parsif.al, using a simple scale. In this scenario, the scoring procedure is the following: YES = 1, PARTIALLY = 0.5, and NO or UNKNOWN = 0. A cut-off score should be defined to filter out those articles that do not pass the QA. The QA will require a light review of the full text of the article.
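The scoring and cut-off just described reduce to a small computation; the checklist answers and the cut-off value here are hypothetical:

```python
# Scoring scheme from the text: YES = 1, PARTIALLY = 0.5, NO or UNKNOWN = 0.
SCORES = {"YES": 1.0, "PARTIALLY": 0.5, "NO": 0.0, "UNKNOWN": 0.0}

def qa_score(answers):
    """Total QA score for one article's checklist answers."""
    return sum(SCORES[a] for a in answers)

def passes_qa(answers, cutoff):
    """True if the article meets or exceeds the pre-defined cut-off score."""
    return qa_score(answers) >= cutoff

article = ["YES", "PARTIALLY", "YES", "NO", "UNKNOWN"]  # five QA questions
print(qa_score(article), passes_qa(article, cutoff=2.0))  # → 2.5 True
```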

Fig 2

Performing quality assessment (QA) in Parsif.al.

Data extraction

Those articles that pass the study selection are then thoroughly and critically read. Next, the researcher completes the information required using the “data extraction” form, as illustrated in Fig. 3 , in this scenario using the Parsif.al tool.

Fig 3

Example of data extraction form using Parsif.al.

The information required (study characteristics and findings) from each included study must be acquired and documented through careful reading. Data extraction is valuable, especially if the data requires manipulation or assumptions and inferences. Thus, information can be synthesized from the extracted data for qualitative or quantitative analysis [16] . This documentation supports clarity, precise reporting, and the ability to scrutinize and replicate the examination.

Analysis and Report

The analysis phase examines the synthesized data and extracts meaningful information from the selected articles [10] . There are two main goals in this phase.

The first goal is to analyze the literature in terms of leading authors, journals, countries, and organizations. Furthermore, it helps identify correlations among topics. Even when not mandatory, this activity can be constructive for researchers to position their work, find trends, and find collaboration opportunities. Next, data from the selected articles can be analyzed using bibliometric analysis (BA). BA summarizes large amounts of bibliometric data to present the state of the intellectual structure and emerging trends in a topic or field of research [4]. Table 7 sets out some of the most common bibliometric analysis representations.

Techniques for bibliometric analysis and examples.

Several tools can perform this type of analysis, such as Excel and Google Sheets for statistical graphs, or programming languages such as Python, which has multiple data visualization libraries available (e.g., Matplotlib, Seaborn). Cluster maps based on bibliographic data (e.g., keywords, authors) can be developed in VOSviewer, which makes it easy to identify clusters of related items [18]. In Fig. 4 , node size represents the number of papers related to the keyword, and lines represent the links among keyword terms.

Fig 4

Keyword co-occurrence analysis using clustering in VOSviewer [1].

The second and most important goal is to answer the formulated research questions, which should include a quantitative and qualitative analysis. The quantitative analysis can make use of data categorized, labelled, or coded in the extraction form (see Section 1.6). This data can be transformed into numerical values to perform statistical analysis. One of the most widely employed methods is frequency analysis, which shows the recurrence of an event and can also represent the percentage distribution of the population (i.e., percentage by technology type, frequency of use of different frameworks, etc.). Qualitative analysis includes the narration of the results, the discussion indicating the way forward in future research work, and inferring a conclusion.
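Frequency analysis over coded extraction-form data is straightforward; a minimal sketch with hypothetical technology labels:

```python
from collections import Counter

# Hypothetical technology labels coded in the data extraction form, one per study.
labels = ["ontology", "ontology", "BPMN", "machine learning", "ontology", "BPMN"]

counts = Counter(labels)
# Percentage distribution across the included studies.
distribution = {k: round(100 * v / len(labels), 1) for k, v in counts.items()}
print(distribution)  # → {'ontology': 50.0, 'BPMN': 33.3, 'machine learning': 16.7}
```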

Finally, the literature review report should state the protocol so that other researchers can replicate the process and understand how the analysis was performed. The protocol should present the inclusion and exclusion criteria, the quality assessment, and the rationale behind these aspects.

The presentation and reporting of results depend on the structure of the review chosen by the researchers conducting the SLR; there is no single answer. This structure should tie the studies together into key themes, characteristics, or subgroups [28].

An SLR can be an extensive and demanding task; however, the results are beneficial in providing a comprehensive overview of the available evidence on a given topic. For this reason, researchers should keep in mind that the entire SLR process is tailored to answering the research question(s). This article has detailed a practical guide with the essential steps for conducting an SLR in the context of computer science and software engineering, citing multiple helpful examples and tools. It is envisaged that this method will assist researchers, particularly early-stage researchers, in following an algorithmic approach to this task. Finally, a quick checklist is presented in Appendix A as a companion to this article.

CRediT author statement

Angela Carrera-Rivera: Conceptualization, Methodology, Writing-Original Draft. William Ochoa-Agurto: Methodology, Writing-Original Draft. Felix Larrinaga: Reviewing and Supervision. Ganix Lasa: Reviewing and Supervision.

Declaration of Competing Interest

The authors declare that they have no known competing financial interests or personal relationships that could have appeared to influence the work reported in this paper.

Acknowledgments

Funding: This project has received funding from the European Union's Horizon 2020 research and innovation programme under the Marie Sklodowska-Curie Grant No. 814078.

Carrera-Rivera, A., Larrinaga, F., & Lasa, G. (2022). Context-awareness for the design of Smart-product service systems: Literature review. Computers in Industry, 142, 103730.

1 https://parsif.al/


How-to conduct a systematic literature review: A quick guide for computer science research

Affiliations.

  • 1 Faculty of Engineering, Mondragon University.
  • 2 Design Innovation Center(DBZ), Mondragon University.
  • PMID: 36405369
  • PMCID: PMC9672331
  • DOI: 10.1016/j.mex.2022.101895

Performing a literature review is a critical first step in research to understand the state of the art and identify gaps and challenges in the field. A systematic literature review is a method which sets out a series of steps to methodically organize the review. In this paper, we present a guide designed for researchers, and in particular early-stage researchers, in the computer science field. The contributions of the article are the following:

  • Clearly defined strategies to follow for a systematic literature review in computer science research, and
  • An algorithmic method to tackle a systematic literature review.

Keywords: Systematic literature reviews; computer science; doctoral studies; literature reviews; research methodology.

© 2022 The Author(s).


Cochrane Training


Cochrane Handbook for Systematic Reviews of Interventions

  • Access the Cochrane Handbook for Systematic Reviews of Interventions
  • About the  Handbook

Methodological Expectations for Cochrane Intervention Reviews (MECIR)




About the Handbook

The Cochrane Handbook for Systematic Reviews of Interventions is the official guide that describes in detail the process of preparing and maintaining Cochrane systematic reviews on the effects of healthcare interventions. All authors should consult the Handbook for guidance on the methods used in Cochrane systematic reviews. The Handbook includes guidance on the standard methods applicable to every review (planning a review, searching and selecting studies, data collection, risk of bias assessment, statistical analysis, GRADE and interpreting results), as well as more specialised topics (non-randomized studies, adverse effects, complex interventions, equity, economics, patient-reported outcomes, individual patient data, prospective meta-analysis, and qualitative research).

Last updated: 22 August, 2023

Key aspects of Handbook guidance are collated as the Methodological Expectations for Cochrane Intervention Reviews (MECIR). These provide core standards that are generally expected of Cochrane reviews. Each MECIR item includes a link to a relevant Handbook chapter.

For further information and for any Handbook enquiries please contact: [email protected] .

The Handbook editorial team includes: Julian Higgins and James Thomas (Senior Scientific Editors); Jacqueline Chandler, Miranda Cumpston, Tianjing Li, Matthew Page and Vivian Welch (Associate Scientific Editors); Ella Flemyng (Managing Editor).

To cite the full Handbook online, please use:

Higgins JPT, Thomas J, Chandler J, Cumpston M, Li T, Page MJ, Welch VA (editors). Cochrane Handbook for Systematic Reviews of Interventions version 6.4 (updated August 2023). Cochrane, 2023. Available from www.training.cochrane.org/handbook.

To cite the print edition of the Handbook, please use:

Higgins JPT, Thomas J, Chandler J, Cumpston M, Li T, Page MJ, Welch VA (editors). Cochrane Handbook for Systematic Reviews of Interventions. 2nd Edition. Chichester (UK): John Wiley & Sons, 2019.

Details of how to cite individual chapters in either of these versions are available in each chapter.

Academic or other non-commercial re-use of Handbook material

You do not need to request permission to use short quotations (though these must be appropriately cited), or to cite the Handbook as a source. See How to cite the Handbook. If you intend to reproduce material from the Handbook using screenshots, including exact figures or tables from the Handbook, or including lengthy direct quotations (more than 5 lines of text), then please fill in this form to request permission to re-use material from the Handbook. This will be sent to the Cochrane Support team, who will notify Julian Higgins or James Thomas, the Handbook Senior Editors, as appropriate. If approved, these requests will be granted free of charge on condition that the source is acknowledged.

Commercial re-use of Handbook material

Commercial re-use includes any use of the Handbook in a product for which there is a monetary fee, and/or where it is associated in any way with any product or service. For all enquiries related to the commercial re-use of Handbook material, please contact Wiley Global Permissions, John Wiley & Sons, Ltd.

Details on how the Handbook has changed compared to previous versions can be found on the Versions and changes page. More information on the process for updating the Handbook can be found here.

Archived copies of the following previous versions of the Handbook are available:

  • Version 6.3 : February 2022 [browsable] 
  • Version 6.2 : February 2021 [browsable] 
  • Version 6.1 : September 2020 [browsable]
  • Version 6.0 : July 2019 [browsable]
  • Version 5.2 : June 2017 [PDF]
  • Version 5.1: March 2011 [browsable]
  • Version 5.0.2: September 2009 [browsable]
  • Version 5.0.0: February 2008 [browsable]
  • Version 4.2.6: September 2006 [PDF] 2.8MB
  • Version 4.2.1 : December 2003 [PDF]

You may also be interested in:

  • Cochrane Handbook for Systematic Reviews of Diagnostic Test Accuracy


How to conduct systematic literature reviews in management research: a guide in 6 steps and 14 decisions

  • Review Paper
  • Open access
  • Published: 12 May 2023
  • Volume 17 , pages 1899–1933, ( 2023 )



  • Philipp C. Sauer   ORCID: orcid.org/0000-0002-1823-0723 1 &
  • Stefan Seuring   ORCID: orcid.org/0000-0003-4204-9948 2  


Systematic literature reviews (SLRs) have become a standard tool in many fields of management research but are often considerably less stringently presented than other pieces of research. The resulting lack of replicability of the research and conclusions has spurred a vital debate on the SLR process, but related guidance is scattered across a number of core references and is overly centered on the design and conduct of the SLR, while failing to guide researchers in crafting and presenting their findings in an impactful way. This paper offers an integrative review of the widely applied and most recent SLR guidelines in the management domain. The paper adopts a well-established six-step SLR process and refines it by sub-dividing the steps into 14 distinct decisions: (1) from the research question, via (2) characteristics of the primary studies, (3) to retrieving a sample of relevant literature, which is then (4) selected and (5) synthesized so that, finally (6), the results can be reported. Guided by these steps and decisions, prior SLR guidelines are critically reviewed, gaps are identified, and a synthesis is offered. This synthesis elaborates mainly on the gaps while pointing the reader toward the available guidelines. The paper thereby avoids reproducing existing guidance but critically enriches it. The 6 steps and 14 decisions provide methodological, theoretical, and practical guidelines along the SLR process, exemplifying them via best-practice examples and revealing their temporal sequence and main interrelations. The paper guides researchers in the process of designing, executing, and publishing a theory-based and impact-oriented SLR.


1 Introduction

The application of systematic or structured literature reviews (SLRs) has developed into an established approach in the management domain (Kraus et al. 2020), with 90% of management-related SLRs published within the last 10 years (Clark et al. 2021). Such reviews help to condense knowledge in the field and point to future research directions, thereby enabling theory development (Fink 2010; Koufteros et al. 2018). By now, SLRs have become an established method (e.g., Durach et al. 2017; Koufteros et al. 2018). However, many SLR authors struggle to efficiently synthesize and apply review protocols and to justify their decisions throughout the review process (Paul et al. 2021), since only a few studies address and explain the respective research process and the decisions to be taken in it. Moreover, the available guidelines do not form a coherent body of literature but focus on different details of an SLR, while a comprehensive and detailed SLR process model is lacking. For example, Seuring and Gold (2012) provide some insights into the overall process, focusing on content analysis for data analysis without covering the practicalities of the research process in detail. Similarly, Durach et al. (2017) address SLRs from a paradigmatic perspective, offering a more foundational view that covers ontological and epistemological positions and emphasizes the philosophy-of-science foundations of an SLR. Although somewhat similar guidelines for SLRs might be found in the wider body of literature (Denyer and Tranfield 2009; Fink 2010; Snyder 2019), they often take a particular focus and are less geared toward explaining and reflecting on the single choices made during the research process. The current body of SLR guidelines thus leaves it to the reader to find the right links among the guidelines and to justify their inconsistencies. This is critical since a vast number of SLRs are conducted by early-stage researchers, who likely struggle to synthesize the existing guidance and best practices (Fisch and Block 2018; Kraus et al. 2020), leading to the frustration of authors, reviewers, editors, and readers alike.

Filling these gaps is critical in our eyes since researchers conducting literature reviews form the foundation of any kind of further analysis to position their research into the respective field (Fink 2010 ). So-called “systematic literature reviews” (e.g., Davis and Crombie 2001 ; Denyer and Tranfield 2009 ; Durach et al. 2017 ) or “structured literature reviews” (e.g., Koufteros et al. 2018 ; Miemczyk et al. 2012 ) differ from nonsystematic literature reviews in that the analysis of a certain body of literature becomes a means in itself (Kraus et al. 2020 ; Seuring et al. 2021 ). Although two different terms are used for this approach, the related studies refer to the same core methodological references that are also cited in this paper. Therefore, we see them as identical and abbreviate them as SLR.

There are several guidelines on such reviews already, which have been developed outside the management area (e.g. Fink 2010 ) or with a particular focus on one management domain (e.g., Kraus et al. 2020 ). SLRs aim at capturing the content of the field at a point in time but should also aim at informing future research (Denyer and Tranfield 2009 ), making follow-up research more efficient and productive (Kraus et al. 2021 ). Such standalone literature reviews would and should also prepare subsequent empirical or modeling research, but usually, they require far more effort and time (Fisch and Block 2018 ; Lim et al. 2022 ). To achieve this preparation, SLRs can essentially a) describe the state of the literature, b) test a hypothesis based on the available literature, c) extend the literature, and d) critique the literature (Xiao and Watson 2019 ). Beyond guiding the next incremental step in research, SLRs “may challenge established assumptions and norms of a given field or topic, recognize critical problems and factual errors, and stimulate future scientific conversations around that topic” (Kraus et al. 2022 , p. 2578). Moreover, they have the power to answer research questions that are beyond the scope of individual empirical or modeling studies (Snyder 2019 ) and to build, elaborate, and test theories beyond this single study scope (Seuring et al. 2021 ). These contributions of an SLR may be highly influential and therefore underline the need for high-quality planning, execution, and reporting of their process and details.

Regardless of the individual aims of standalone SLRs, their numbers have exponentially risen in the last two decades (Kraus et al. 2022 ) and almost all PhD or large research project proposals in the management domain include such a standalone SLR to build a solid foundation for their subsequent work packages. Standalone SLRs have thus become a key part of management research (Kraus et al. 2021 ; Seuring et al. 2021 ), which is also underlined by the fact that there are journals and special issues exclusively accepting standalone SLRs (Kraus et al. 2022 ; Lim et al. 2022 ).

However, SLRs require a commitment that is often comparable to an additional research process or project. Hence, SLRs should not be taken as a quick solution, as a simplistic, descriptive approach would usually not yield a publishable paper (see also Denyer and Tranfield 2009 ; Kraus et al. 2020 ).

Furthermore, as with other research techniques, SLRs are based on the rigorous application of rules and procedures, as well as on ensuring the validity and reliability of the method (Fisch and Block 2018 ; Seuring et al. 2021 ). In effect, there is a need to ensure “the same level of rigour to reviewing research evidence as should be used in producing that research evidence in the first place” (Davis and Crombie 2001 , p.1). This rigor holds for all steps of the research process, such as establishing the research question, collecting data, analyzing it, and making sense of the findings (Durach et al. 2017 ; Fink 2010 ; Seuring and Gold 2012 ). However, there is a high degree of diversity where some would be justified, while some papers do not report the full details of the research process. This lack of detail contrasts with an SLR’s aim of creating a valid map of the currently available research in the reviewed field, as critical information on the review’s completeness and potential reviewer biases cannot be judged by the reader or reviewer. This further impedes later replications or extensions of such reviews, which could provide longitudinal evidence of the development of a field (Denyer and Tranfield 2009 ; Durach et al. 2017 ). Against this observation, this paper addresses the following question:

Which decisions need to be made in an SLR process, and what practical guidelines can be put forward for making these decisions?

Answering this question, the key contributions of this paper are fourfold: (1) identifying the gaps in existing SLR guidelines, (2) refining the SLR process model by Durach et al. ( 2017 ) through 14 decisions, (3) synthesizing and enriching guidelines for these decisions, exemplifying the key decisions by means of best practice SLRs, and (4) presenting and discussing a refined SLR process model.

In some cases, we point to examples from operations and supply chain management. However, they illustrate the purposes discussed in the respective sections. We carefully checked that the arguments held for all fields of management-related research, and multiple examples from other fields of management were also included.

2 Identification of the need for an enriched process model, including a set of sequential decisions and their interrelations

In line with the exponential increase in SLR papers (Kraus et al. 2022 ), multiple SLR guidelines have recently been published. Since 2020, we have found a total of 10 papers offering guidelines on SLRs and other reviews for the field of management in general or some of its sub-fields. These guidelines are of double interest to this paper since we aim to complement them to fill the gap identified in the introduction while minimizing the doubling of efforts. Table 1 lists the 10 most recent guidelines and highlights their characteristics, research objectives, contributions, and how our paper aims to complement these previous contributions.

The sheer number and diversity of guideline papers, as well as the relevance expressed in them, underline the need for a comprehensive and exhaustive process model. At the same time, the guidelines take specific foci on, for example, updating earlier guidelines to new technological potentials (Kraus et al. 2020 ), clarifying the foundational elements of SLRs (Kraus et al. 2022 ) and proposing a review protocol (Paul et al. 2021 ) or the application and development of theory in SLRs (Seuring et al. 2021 ). Each of these foci fills an entire paper, while the authors acknowledge that much more needs to be considered in an SLR. Working through these most recent guidelines, it becomes obvious that the common paper formats in the management domain create a tension for guideline papers between elaborating on a) the SLR process and b) the details, options, and potentials of individual process steps.

Our analysis in Table 1 evidences that there are a number of rich contributions on aspect b), while the aspect a) of SLR process models has not received the same attention despite the substantial confusion of authors toward them (Paul et al. 2021 ). In fact, only two of the most recent guidelines approach SLR process models. First, Kraus et al. ( 2020 ) incrementally extended the 20-year-old Tranfield et al. ( 2003 ) three-stage model into four stages. A little later, Paul et al. ( 2021 ) proposed a three-stage (including six sub-stages) SPAR-4-SLR review protocol. It integrates the PRISMA reporting items (Moher et al. 2009 ; Page et al. 2021 ) that originate from clinical research to define 14 actions stating what items an SLR in management needs to report for reasons of validity, reliability, and replicability. Almost naturally, these 14 reporting-oriented actions mainly relate to the first SLR stage of “assembling the literature,” which accounts for nine of the 14 actions. Since this protocol is published in a special issue editorial, its presentation and elaboration are somewhat limited by the already mentioned word count limit. Nevertheless, the SPAR-4-SLR protocol provides a very useful checklist for researchers that enables them to include all data required to document the SLR and to avoid confusion from editors, reviewers, and readers regarding SLR characteristics.

Beyond Table 1 , Durach et al. ( 2017 ) synthesized six common SLR “steps” that differ only marginally in the delimitation of one step to another from the sub-stages of the previously mentioned SLR processes. In addition, Snyder ( 2019 ) proposed a process comprising four “phases” that take more of a bird’s perspective in addressing (1) design, (2) conduct, (3) analysis, and (4) structuring and writing the review. Moreover, Xiao and Watson ( 2019 ) proposed only three “stages” of (1) planning, (2) conducting, and (3) reporting the review that combines the previously mentioned conduct and the analysis and defines eight steps within them. Much in line with the other process models, the final reporting stage contains only one of the eight steps, leaving the reader somewhat alone in how to effectively craft a manuscript that contributes to the further development of the field.

In effect, the mentioned SLR processes differ only marginally, while the systematic nature of actions in the SPAR-4-SLR protocol (Paul et al. 2021 ) can be seen as a reporting must-have within any of the mentioned SLR processes. The similarity of the SLR processes is, however, also evident in the fact that they leave open how the SLR analysis can be executed, enriched, and reflected to make a contribution to the reviewed field. In contrast, this aspect is richly described in the other guidelines that do not offer an SLR process, leading us again toward the tension for guideline papers between elaborating on a) the SLR process and b) the details, options, and potentials of each process step.

To help (prospective) SLR authors successfully navigate this tension of existing guidelines, it is thus the ambition of this paper to adopt a comprehensive SLR process model along which an SLR project can be planned, executed, and written up in a coherent way. To enable this coherence, 14 distinct decisions are defined, reflected, and interlinked, which have to be taken across the different steps of the SLR process. At the same time, our process model aims to actively direct researchers to the best practices, tips, and guidance that previous guidelines have provided for individual decisions. We aim to achieve this by means of an integrative review of the relevant SLR guidelines, as outlined in the following section.

3 Methodology: an integrative literature review of guidelines for systematic literature reviews in management

It might seem intuitive to contribute to the debate on the “gold standard” of systematic literature reviews (Davis et al. 2014 ) by conducting a systematic review ourselves. However, there are different types of reviews aiming for distinctive contributions. Snyder ( 2019 ) distinguished between a) systematic, b) semi-systematic, and c) integrative (or critical) reviews, which aim for i) (mostly quantitative) synthesis and comparison of prior (primary) evidence, ii) an overview of the development of a field over time, and iii) a critique and synthesis of prior perspectives to reconceptualize or advance them. Each review team needs to position itself in such a typology of reviews to define the aims and scope of the review. To do so and structure the related research process, we adopted the four generic steps for an (integrative) literature review by Snyder ( 2019 )—(1) design, (2) conduct, (3) analysis, and (4) structuring and writing the review—on which we report in the remainder of this section. Since the last step is a very practical one that, for example, asks, “Is the contribution of the review clearly communicated?” (Snyder 2019 ), we will focus on the presentation of the method applied to the initial three steps:

(1) Regarding the design, we see the need for this study emerging from our experience in reviewing SLR manuscripts, supervising PhD students who, almost by default, need to prepare an SLR, and recurring discussions on certain decisions in the process of both. These discussions regularly left some blank or blurry spaces (see Table 1 ) that induced substantial uncertainty regarding critical decisions in the SLR process (Paul et al. 2021 ). To address this gap, we aim to synthesize prior guidance and critically enrich it, thus adopting an integrative approach for reviewing existing SLR guidance in the management domain (Snyder 2019 ).

(2) To conduct the review, we started collecting the literature that provided guidance on the individual SLR parts. We built on a sample of 13 regularly cited or very recent papers in the management domain. We started with core articles that we successfully used to publish SLRs in top-tier OSCM journals, such as Tranfield et al. ( 2003 ) and Durach et al. ( 2017 ), and we checked their references and papers that cited these publications. The search focus was defined by the following criteria: the articles needed to a) provide original methodological guidance for SLRs by providing new aspects of the guideline or synthesizing existing ones into more valid guidelines and b) focus on the management domain. Building on the nature of a critical or integrative review that does not require a full or representative sample (Snyder 2019 ), we limited the sample to the papers displayed in Table 2 that built the core of the currently applied SLR guidelines. In effect, we found 11 technical papers and two SLRs of SLRs (Carter and Washispack 2018 ; Seuring and Gold 2012 ). From the latter, we mainly analyzed the discussion and conclusion parts that explicitly developed guidance on conducting SLRs.

(3) For analyzing these papers, we first adopted the six-step SLR process proposed by Durach et al. ( 2017 , p.70), which they define as applicable to any “field, discipline or philosophical perspective”. The contrast between the six-step SLR process used for the analysis and the four-step process applied by ourselves may seem surprising but is justified by the use of an integrative approach. This approach differs mainly in retrieving and selecting pertinent literature that is key to SLRs and thus needs to be part of the analysis framework.

While deductively coding the sample papers against Durach et al.'s (2017) guidance in the six steps, we inductively built a set of 14 decisions, presented in the right columns of Table 2, that are required to be made in any SLR. These decisions formed a second and more detailed level of analysis, for which the individual guidelines were coded as giving low, medium, or high levels of detail (see Table 3); this helped us identify the gaps in the current guidance papers and led our way in presenting, critically discussing, and enriching the literature. In effect, we see that almost all guidelines touch on the same issues and try to give a comprehensive overview. However, this results in multiple guidelines that all lack the space to go into detail, while only a few guidelines focus on filling a gap in the process. It is our ambition with this analysis to identify the gaps in the guidelines, thereby identifying a precise need for refinement, and to offer a first step toward this refinement. Adopting advice from the literature sample, the coding was conducted by the entire author team (Snyder 2019; Tranfield et al. 2003), including discursive alignments of interpretation (Seuring and Gold 2012). This enabled a certain reliability and validity of the analysis by reducing within-study and expectancy bias (Durach et al. 2017), while replicability was supported by reporting the review sample and the coding results in Table 3 (Carter and Washispack 2018).

(4) For the writing of the review, we only pointed to the unusual structure of presenting the method without a theory section and then the findings in the following section. However, this was motivated by the nature of the integrative review so that the review findings at the same time represent the “state of the art,” “literature review,” or “conceptualization” sections of a paper.

4 Findings of the integrative review: presentation, critical discussion, and enrichment of prior guidance

4.1 The overall research process for a systematic literature review

Even within our sample of only 13 guidelines, there are four distinct suggestions for structuring the SLR process. One of the earliest SLR process models was proposed by Tranfield et al. ( 2003 ) encompassing the three stages of (1) planning the review, (2) conducting a review, and (3) reporting and dissemination. Snyder ( 2019 ) proposed four steps employed in this study: (1) design, (2) conduct, (3) analysis, and (4) structuring and writing the review. Borrowing from content analysis guidelines, Seuring and Gold ( 2012 ) defined four steps: (1) material collection, (2) descriptive analysis, (3) category selection, and (4) material evaluation. Most recently Kraus et al. ( 2020 ) proposed four steps: (1) planning the review, (2) identifying and evaluating studies, (3) extracting and synthesizing data, and (4) disseminating the review findings. Most comprehensively, Durach et al. ( 2017 ) condensed prior process models into their generic six steps for an SLR. Adding the review of the process models by Snyder ( 2019 ) and Seuring and Gold ( 2012 ) to Durach et al.’s ( 2017 ) SLR process review of four papers, we support their conclusion of the general applicability of the six steps defined. Consequently, these six steps form the backbone of our coding scheme, as shown in the left column of Table 2 and described in the middle column.

As stated in Sect.  3 , we synthesized the review papers against these six steps but experienced that the papers were taking substantially different foci by providing rich details for some steps while largely bypassing others. To capture this heterogeneity and better operationalize the SLR process, we inductively introduced the right column, identifying 14 decisions to be made. These decisions are all elaborated in the reviewed papers but to substantially different extents, as the detailed coding results in Table 3 underline.

Mapping Table 3 for potential gaps in the existing guidelines, we identified six decisions for which only low- to medium-level detail was available, while high-detail elaboration was missing. These six decisions, illustrated in Fig. 1, belong to three steps: 1: defining the research question, 5: synthesizing the literature, and 6: reporting the results. This result underscores our critique of currently unbalanced guidance that is, on the one hand, detailed on determining the required characteristics of primary studies (step 2), retrieving a sample of potentially relevant literature (step 3), and selecting the pertinent literature (step 4). On the other hand, authors, especially PhD students, are left without substantial guidance on the steps critical to publication. Instead, they are called "to go one step further … and derive meaningful conclusions" (Fisch and Block 2018, p. 105) without further operationalization of how this can be achieved; consider, for example, how "meet the editor" conference sessions regularly cause frustration among PhD students when editors call for "new," "bold," and "relevant" research. Filling the gaps in these six decisions with best-practice examples and practical experience is the main focus of this study's contribution. The other eight decisions are synthesized with references to the guidelines that are, in our eyes, most helpful and relevant for the respective step.
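The gap-mapping logic just described can be sketched in a few lines of code; the decision names and coding values below are hypothetical and merely illustrate the idea of flagging decisions that no reviewed guideline covers in high detail:

```python
# Illustrative sketch with invented coding data: flag SLR-process decisions
# for which no reviewed guideline provides high-detail coverage.
coding = {
    # decision -> level of detail coded for each reviewed guideline
    "define research question": ["low", "medium", "low"],
    "retrieve sample":          ["high", "medium", "high"],
    "synthesize literature":    ["medium", "low", "medium"],
    "report results":           ["low", "low", "medium"],
}

gaps = [decision for decision, levels in coding.items() if "high" not in levels]
print(gaps)  # decisions lacking any high-detail guidance
```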

Figure 1: The 6 steps and 14 decisions of the SLR process

4.2 Step 1: defining the research question

When initiating a research project, researchers make three key decisions.

Decision 1 concerns the essential task of establishing a relevant and timely research question. Despite the importance of this decision, which determines large parts of the further decisions (Snyder 2019 ; Tranfield et al. 2003 ), we only find scattered guidance in the literature. Hence, how can a research topic be specified to allow a strong literature review that is neither too narrow nor too broad? The latter is the danger in meta-reviews (i.e., reviews of reviews) (Aguinis et al. 2020 ; Carter and Washispack 2018 ; Kache and Seuring 2014 ): even though the method would be robust, the findings would not be novel. In line with Carter and Washispack ( 2018 ), there should always be room for new reviews, yet over time, they must move from a descriptive overview of a field further into depth and provide detailed analyses of constructs. Clark et al. ( 2021 ) provided a detailed but very specific reflection on how they crafted a research question for an SLR, noting that revisiting the research question multiple times throughout the SLR process helps to move forward coherently and efficiently with the research. More generically, Kraus et al. ( 2020 ) listed six key contributions of an SLR that can guide the definition of the research question. Finally, Snyder ( 2019 ) suggested moving into more detail from existing SLRs and specified two main avenues for crafting an SLR research question: investigating the relationship among multiple effects or the effect of (a) specific variable(s), or mapping the evidence regarding a certain research area. For the latter, we see three possible alternative approaches, starting with a focus on certain industries. Examples are analyses of the food industry (Beske et al. 2014 ), retailing (Wiese et al. 2012 ), mining and minerals (Sauer and Seuring 2017 ), perishable product supply chains (Lusiantoro et al. 2018 ), or traceability in the apparel industry (Garcia-Torres et al. 2019 ).
A second opportunity would be to assess the status of research in a geographical area that constitutes an interesting context from a research perspective, such as sustainable supply chain management (SSCM) in Latin America (Fritz and Silva 2018 ); yet this has to be justified explicitly, so that the geographical focus itself is not presented as the reason per se (e.g., Crane et al. 2016 ). A third variant addresses emerging issues, such as SCM in a base-of-the-pyramid setting (Khalid and Seuring 2019 ), the use of blockchain technology (Wang et al. 2019 ), or digital transformation (Hanelt et al. 2021 ). These approaches limit the reviewed field to enable a more contextualized analysis in which the novelty, continued relevance, or unjustified underrepresentation of the context can be used to specify a research gap and related research question(s). This also impacts the following decisions, as shown below.

Decision 2 concerns the option for a theoretical approach (i.e., the adoption of an inductive, abductive, or deductive approach) to theory building through the literature review. Reviewing previous guidance on this delivers an interesting observation. On the one hand, there are early elaborations on systematic reviews, realist synthesis, meta-synthesis, and meta-analysis by Tranfield et al. ( 2003 ) that borrow from the origins of systematic reviews in medical research. On the other hand, recent management-related guidelines largely neglect details of the related decisions but point out that SLRs are a suitable tool for theory building (Kraus et al. 2020 ). Seuring et al. ( 2021 ) set out to fill this gap and provided substantial guidance on how to use theory in SLRs to advance the field. To date, the option for a theoretical approach is only rarely made explicit, often leaving the reader puzzled about how the advancement in theory has been crafted and impeding a review’s replicability (Seuring et al. 2021 ). Many papers still leave the related choices in the dark (e.g., Rhaiem and Amara 2021 ; Rojas-Córdova et al. 2022 ) and move directly from the introduction to the method section.

In Decision 3, researchers need to adopt a theoretical framework (Durach et al. 2017 ) or at least a theoretical starting point, depending on the most appropriate theoretical approach (Seuring et al. 2021 ). Here, we find substantial guidance by Durach et al. ( 2017 ) that underlines the value of adopting a theoretical lens to investigate SCM phenomena and the literature. Moreover, the choice of a theoretical anchor enables a consistent definition and operationalization of the constructs that are used to analyze the reviewed literature (Durach et al. 2017 ; Seuring et al. 2021 ). Hence, providing some upfront definitions is beneficial, clarifying the key terminology used in the subsequent paper, as Devece et al. ( 2019 ) do when introducing their terminology on coopetition. As a practical hint beyond the elaborations of prior guidance papers: when taking up established constructs for a deductive analysis (Decision 2), researchers should ask whether these constructs can actually yield interesting findings.

Here, it would be relevant to specify what kind of analysis is aimed for the SLR, where three approaches might be distinguished (i.e., bibliometric analysis, meta-analysis, and content analysis–based studies). Briefly distinguishing them, the core difference would be how many papers can be analyzed employing the respective method. Bibliometric analysis (Donthu et al. 2021 ) usually relies on the use of software, such as Biblioshiny, allowing the creation of figures on citations and co-citations. These figures enable the interpretation of large datasets in which several hundred papers can be analyzed in an automated manner. This allows for distinguishing among different research clusters, thereby following a more inductive approach. This would be contrasted by meta-analysis (e.g., Leuschner et al. 2013 ), where often a comparatively smaller number of papers is analyzed (86 in the respective case) but with a high number of observations (more than 17,000). The aim is to test for statistically significant correlations among single constructs, which requires that the related constructs and items be precisely defined (i.e., a clearly deductive approach to the analysis).

Content analysis is the third instrument frequently applied to data analysis, where an inductive or deductive approach might be taken (Seuring et al. 2021 ). Content-based analysis (see decision 9 in Sect.  4.6 ; Seuring and Gold 2012 ) is a labor-intensive step and can hardly be changed ex post. This also implies that only a certain number of papers might be analyzed (see Decision 6 in Sect.  4.5 ). It is advisable to adopt a wider set of constructs for the analysis stemming even from multiple established frameworks since it is difficult to predict which constructs and items might yield interesting insights. Hence, coding a more comprehensive set of items and dropping some in the process is less problematic than starting an analysis all over again for additional constructs and items. However, in the process of content analysis, such an iterative process might be required to improve the meaningfulness of the data and findings (Seuring and Gold 2012 ). A recent example of such an approach can be found in Khalid and Seuring ( 2019 ), building on the conceptual frameworks for SSCM of Carter and Rogers ( 2008 ), Seuring and Müller ( 2008 ), and Pagell and Wu ( 2009 ). This allows for an in-depth analysis of how SSCM constructs are inherently referred to in base-of-the-pyramid-related research. The core criticism and limitation of such an approach is the random and subjectively biased selection of frameworks for the purpose of analysis.

Beyond the aforementioned SLR methods, some reviews, similar to the one used here, apply a critical review approach. This is, however, nonsystematic, and not an SLR; thus, it is beyond the scope of this paper. Interested readers can nevertheless find some guidance on critical reviews in the available literature (e.g., Kraus et al. 2022 ; Snyder 2019 ).

4.3 Step 2: determining the required characteristics of primary studies

After setting the stage for the review, it is essential to determine which literature is to be reviewed in Decision 4. This topic is discussed by almost all existing guidelines and will thus only briefly be discussed here. Durach et al. ( 2017 ) elaborated in great detail on defining strict inclusion and exclusion criteria that need to be aligned with the chosen theoretical framework. The relevant units of analysis need to be specified (often a single paper, but other approaches might be possible) along with suitable research methods, particularly if exclusively empirical studies are reviewed or if other methods are applied. Beyond that, they elaborated on potential quality criteria that should be applied. The same is considered by a number of guidelines that especially draw on medical research, in which systematic reviews aim to pool prior studies to infer findings from their total population. Here, it is essential to ensure the exclusion of poor-quality evidence that would lower the quality of the review findings (Mulrow 1987 ; Tranfield et al. 2003 ). This could be ensured by, for example, only taking papers from journals listed on the Web of Science or Scopus or journals listed in quartile 1 of Scimago ( https://www.scimagojr.com/ ), a database providing citation and reference data for journals.

The selection of relevant publication years should again follow the purpose of the study defined in Step 1. As such, there might be a justified interest in wide coverage of publication years if a historical perspective is taken. Alternatively, more contemporary developments or the analysis of very recent issues can justify the selection of very few years of publication (e.g., Kraus et al. 2022 ). Again, it is hard to specify a certain time period to be covered, but if the development of a field is to be analyzed, a five-year period might be a typical lower threshold. On current topics, there is often a trend of rising publication numbers. This seems to imply the rising relevance of a topic; however, it should be treated with caution. The total number of papers published per annum has increased substantially in recent years, which might account for the recently heightened number of papers on a certain topic.
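
The caveat above can be made concrete: normalizing a topic's annual paper count by the database's total annual output separates topic growth from overall publication growth. The following sketch uses purely hypothetical counts for illustration.

```python
# Hypothetical annual counts: papers on the reviewed topic vs. the total
# papers indexed in the database (illustrative numbers, not real data).
topic_counts = {2019: 12, 2020: 18, 2021: 27, 2022: 40}
total_counts = {2019: 120_000, 2020: 150_000, 2021: 225_000, 2022: 333_000}

def relative_share(topic, total):
    """Share of the topic in total annual output, in papers per 100,000."""
    return {year: round(topic[year] / total[year] * 100_000, 1) for year in topic}

shares = relative_share(topic_counts, total_counts)
# Raw counts more than triple (12 -> 40), yet the per-100,000 share stays
# essentially flat, so the apparent surge mostly reflects overall growth.
print(shares)
```

In this constructed example, the raw trend would suggest a rapidly rising topic, while the normalized share shows almost no change, which is exactly the caution recommended above.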

4.4 Step 3: retrieving a sample of potentially relevant literature

After defining the required characteristics of the literature to be reviewed, the literature needs to be retrieved based on two decisions. Decision 5 concerns the suitable literature sources and databases that need to be defined. Web of Science and Scopus are two typical options found in many of the examples already mentioned (see also the detailed guidance by Paul and Criado ( 2020 ) as well as Paul et al. ( 2021 )). These databases aggregate many management journals, and a typical argument for turning to the Web of Science database is the inclusion of impact factors, as they indicate a certain minimum quality of the journal (Sauer and Seuring 2017 ). Additionally, Google Scholar is increasingly mentioned as a usable search engine, often providing higher numbers of search results than the mentioned databases (e.g., Pearce 2018 ). However, these results often entail duplicates of articles from multiple sources or versions of the same article, as well as articles in predatory journals (Paul et al. 2021 ). Therefore, we concur with Paul et al. ( 2021 ), who underline the quality assurance mechanisms in Web of Science and Scopus, making them the preferred databases for the literature search. From a practical perspective, it needs to be mentioned that SLRs in management mainly rely on databases that are not free to use. Given this limitation, Pearce ( 2018 ) provided a list of 20 search engines that are free of charge and elaborated on their advantages and disadvantages. Due to the individual limitations of the databases, it is advisable to use a combination of them (Kraus et al. 2020 , 2022 ) and build a consolidated sample by screening the papers found for duplicates, as regularly done in SLRs.
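
The duplicate screening mentioned above can be sketched in a few lines. A minimal consolidation step might key each record on its DOI where available and on a normalized title otherwise; the records and the keying scheme below are hypothetical illustrations, not a fixed standard.

```python
import re

def normalize_title(title):
    """Lowercase and strip punctuation/whitespace so near-identical titles match."""
    return re.sub(r"[^a-z0-9]+", " ", title.lower()).strip()

def consolidate(records):
    """Merge search results from several databases, keeping one copy per paper.

    A record is a dict with 'title' and an optional 'doi'; the DOI is the
    preferred deduplication key, the normalized title the fallback.
    """
    seen, unique = set(), []
    for rec in records:
        key = rec.get("doi") or normalize_title(rec["title"])
        if key not in seen:
            seen.add(key)
            unique.append(rec)
    return unique

# Hypothetical exports from two databases:
wos = [{"title": "Multi-Tier Supply Chains", "doi": "10.1/x1"}]
scopus = [{"title": "Multi-tier supply chains.", "doi": "10.1/x1"},
          {"title": "Blockchain in SCM", "doi": None}]
sample = consolidate(wos + scopus)
print(len(sample))  # 2 unique papers remain
```

In practice, reference managers and SLR tools offer similar matching; the point is that the consolidation rule should be stated explicitly so the sample is replicable.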

This decision also includes the choice of the types of literature to be analyzed. Typically, journal papers are selected, ensuring that the collected papers are peer-reviewed and have thus undergone an academic quality management process. Meanwhile, conference papers are usually avoided since they are often less mature and not checked for quality (e.g., Seuring et al. 2021 ). Nevertheless, for emerging topics, it might be too restrictive to consider only peer-reviewed journal articles and limit the literature to only a few references. Analyzing such rapidly emerging topics is relevant for timely and impact-oriented research and might justify the selection of different sources. Kraus et al. ( 2020 ) provided a discussion on the use of gray literature (i.e., nonacademic sources), and Sauer ( 2021 ) provided an example of a review of sustainability standards from a management perspective to derive implications for their application by managers on the one hand and for enhancing their applicability on the other hand.

Another popular way to limit the review sample is the restriction to a certain list of journals (Kraus et al. 2020 ; Snyder 2019 ). While this is sometimes favored by highly ranked journals, Carter and Washispack ( 2018 ), for example, found that many pertinent papers are not necessarily published in journals within the field. Webster and Watson ( 2002 ) quite tellingly cited a reviewer labeling the selection of top journals as an unjustified excuse for not investigating the full body of relevant literature. Both aforementioned guidelines thus discourage the restriction to particular journals, a guidance that we fully support.

However, there is an argument to be made for excluding certain lower-ranked journals. This can be done, for example, by using Scimago Journal quartiles ( www.scimagojr.com , last accessed 13 April 2023) and restricting the sample to journals in the first quartile (e.g., Yavaprabhas et al. 2022 ). Other papers (e.g., Kraus et al. 2021 ; Rojas-Córdova et al. 2022 ) use certain journal quality lists to limit their sample. However, we argue that authors should carefully check, against the topic reviewed, what such restrictions would include and exclude.

Decision 6 entails the definition of search terms and a search string to be applied in the chosen database. The search terms should reflect the aims of the review and the exclusion criteria that might be derived from the unit of analysis and the theoretical framework (Durach et al. 2017 ; Snyder 2019 ). Overall, two approaches to keywords can be observed. First, some guides suggest using synonyms of the key terms of interest (e.g., Durach et al. 2017 ; Kraus et al. 2020 ) in order to build a wide baseline sample that will be condensed in the next step. This is, of course, especially helpful if multiple terms jointly delimit a field or different synonymous terms are used in parallel in different fields or journals. Empirical journals in supply chain management, for example, use the term “multiple supplier tiers” (e.g., Tachizawa and Wong 2014 ), while modeling journals in the same field label this as “multiple supplier echelons” (e.g., Brandenburg and Rebs 2015 ). Second, in some cases, single keywords are appropriate for capturing a central aspect or construct of a field if the single keyword has a global meaning tying this field together. This approach is especially relevant to the study of relatively broad terms, such as “social media” (Lim and Rasul 2022 ). However, it might result in very high numbers of publications found and therefore requires a purposeful combination with other search criteria, such as specific journals (Kraus et al. 2021 ; Lim et al. 2021 ), publication dates, article types, research methods, or keywords covering the domains to which the search is to be narrowed.

Since SLRs are often required to move into detail or review the intersections of relevant fields, we recommend building groups of keywords (single terms or multiple synonyms) for each field to be connected, coupled via Boolean operators. To determine when a point of saturation for a keyword group is reached, one can monitor the increase in papers found in a database when adding another synonym. Once the increase drops markedly or even reaches zero, saturation is reached (Sauer and Seuring 2017 ). The keywords themselves can be derived from the keyword lists of influential publications in the field, while attention should be paid to potential synonyms in neighboring fields (Carter and Washispack 2018 ; Durach et al. 2017 ; Kraus et al. 2020 ).
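
The grouping logic recommended above can be sketched mechanically: synonyms within a group are OR-ed, and the groups are AND-ed. The quoting of multi-word phrases follows common database query conventions, and all keywords below are illustrative assumptions.

```python
def build_search_string(groups):
    """Combine keyword groups into one Boolean query: synonyms within a
    group are OR-ed, the groups themselves are AND-ed (database-style syntax)."""
    clauses = []
    for synonyms in groups:
        # Multi-word phrases are quoted so databases treat them as phrases.
        quoted = [f'"{term}"' if " " in term else term for term in synonyms]
        clauses.append("(" + " OR ".join(quoted) + ")")
    return " AND ".join(clauses)

# Illustrative keyword groups for a review at the intersection of two fields:
groups = [
    ["multi-tier supply chain", "multiple supplier tiers", "multiple supplier echelons"],
    ["sustainability", "sustainable"],
]
query = build_search_string(groups)
print(query)
```

Monitoring saturation then amounts to re-running the query after appending each new synonym and recording the change in the hit count; once that change approaches zero, the group can be considered saturated.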

4.5 Step 4: selecting the pertinent literature

The inclusion and exclusion criteria (Decision 6) are typically applied in Decision 7 in a two-stage process: first to the title, abstract, and keywords of an article, and then to the full text of the remaining articles (see also Kraus et al. 2020 ; Snyder 2019 ). Beyond this, Durach et al. ( 2017 ) underlined that the pertinence of each publication regarding the units of analysis and the theoretical framework needs to be critically evaluated in this step to avoid bias in the review analysis. Moreover, Carter and Washispack ( 2018 ) requested the publication of the included and excluded sources to ensure the replicability of Steps 3 and 4. This can easily be done as an online supplement to the eventually published review article.
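
A minimal sketch of this two-stage screening, assuming simple keyword-based predicates as placeholders for the actual inclusion and exclusion criteria (the record fields and criteria are hypothetical):

```python
def screen(records, stage_one, stage_two):
    """Two-stage selection: stage one is applied to title/abstract/keywords,
    stage two to the full text of the survivors. The predicates and record
    fields are illustrative placeholders, not a fixed scheme."""
    after_stage_one = [r for r in records if stage_one(r)]
    included = [r for r in after_stage_one if stage_two(r)]
    # Keeping the counts of every stage supports transparent reporting of
    # the selection process, e.g., in an online supplement.
    return {"retrieved": len(records),
            "after_title_abstract": len(after_stage_one),
            "included": included}

records = [
    {"abstract": "sustainable supply chains in ...", "full_text": "empirical study of ..."},
    {"abstract": "an unrelated topic", "full_text": "..."},
]
result = screen(records,
                stage_one=lambda r: "supply chain" in r["abstract"],
                stage_two=lambda r: "empirical" in r["full_text"])
print(result["retrieved"], result["after_title_abstract"], len(result["included"]))
```

Real screening involves human judgment rather than keyword matching, but recording the stage-by-stage counts in this manner is what makes the selection replicable and reportable.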

Nevertheless, the question remains: How many papers justify a literature review? While it is hard to specify how many papers comprise a body of literature, there might be certain thresholds for which Kraus et al. ( 2020 ) provide a useful discussion. As a rough guide, more than 50 papers would usually make a sound starting point (see also Paul and Criado 2020 ), while there are SLRs on emergent topics, such as multitier supply chain management, where 39 studies were included (Tachizawa and Wong 2014 ). An SLR on “learning from innovation failures” builds on 36 papers (Rhaiem and Amara 2021 ), which we would see as the lower threshold. However, such a low number should be an exception, and anything lower would certainly trigger the following question: Why is a review needed? Meanwhile, there are also limits on how many papers should be reviewed. While there are cases with 191 (Seuring and Müller 2008 ), 235 (Rojas-Córdova et al. 2022 ), or up to nearly 400 papers reviewed (Spens and Kovács 2006 ), these can be regarded as upper thresholds. Over time, similar topics seem to address larger datasets.

4.6 Step 5: synthesizing the literature

Before synthesizing the literature, Decision 8 considers the selection of a data extraction tool, for which we found surprisingly little guidance. Some guidance is given on the use of cloud storage to enable remote teamwork (Clark et al. 2021 ). Beyond this, we found that SLRs have often been compiled with marked and commented PDFs or printed papers that were accompanied by tables (Kraus et al. 2020 ) or Excel sheets (see also the process tips by Clark et al. 2021 ). Such a sheet tabulates the single codes derived from the theoretical framework (Decision 3) against the single papers to be reviewed (Decision 7), with marks in individual cells signaling the presence of a particular code in a particular paper. While the frequency distribution of the codes is easily compiled from this data tool, retrieving the related content requires a tedious back-and-forth with the papers themselves. Beyond that, we would strongly recommend using data analysis software, such as MAXQDA or NVivo. Such programs enable the import of literature in PDF format and the automatic or manual coding of text passages, their comparison, and tabulation. Moreover, there is a permanent and editable reference of the coded text to a code. This enables a very quick compilation of content summaries or statistics for single codes and the identification of qualitative and quantitative links between codes and papers.
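
The Excel-style codes-by-papers sheet described above can be mimicked with a simple data structure, from which the frequency distribution of codes follows directly (the coding data here are hypothetical):

```python
# Hypothetical coding matrix: which constructs (codes) were observed in which paper.
coding = {
    "Paper A": {"risk", "collaboration"},
    "Paper B": {"collaboration"},
    "Paper C": {"risk", "traceability", "collaboration"},
}

def code_frequencies(coding):
    """Frequency distribution of codes across papers, i.e., the tabulation
    an Excel sheet of marked cells would provide, sorted by frequency."""
    freq = {}
    for codes in coding.values():
        for code in codes:
            freq[code] = freq.get(code, 0) + 1
    return dict(sorted(freq.items(), key=lambda kv: -kv[1]))

print(code_frequencies(coding))
# {'collaboration': 3, 'risk': 2, 'traceability': 1}
```

What this sketch cannot do, and what dedicated software such as MAXQDA or NVivo adds, is the permanent link from each cell back to the coded text passage, which removes the back-and-forth with the source PDFs.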

All the mentioned data extraction or data processing tools require a license and are therefore not free of cost. While many researchers may benefit from national or institutional subscriptions to these services, others may not. As a potential alternative, Pearce ( 2018 ) proposed a set of free and open-source software (FOSS), including an elaboration on how the tools can be combined to perform an SLR. He also highlighted that both free and proprietary solutions have advantages and disadvantages, which are worth weighing for those who are not provided the required tools by their employers or other institutions they are members of. The same may apply to the literature databases used for the literature acquisition in Decision 5 (Pearce 2018 ).

Moreover, there is a link to Step 1, Decision 3, where bibliometric reviews and meta-analyses were mentioned. These methods, which are alternatives to content analysis–based approaches, have specific demands, so specific tools would be appropriate, such as the Biblioshiny software or VOSviewer. As we will point out for all decisions, there is a high degree of interdependence among the steps and decisions made.

Decision 9 looks at conducting the data analysis, such as coding against (pre-defined) constructs, which in most SLRs relies on content analysis. Seuring and Gold ( 2012 ) elaborated in detail on its characteristics and application in SLRs. As that paper also explains the process of qualitative content analysis in detail, repetition is avoided here, but a summary is offered. Since different ways exist to conduct a content analysis, it is even more important to explain and justify, for example, the choice of an inductive or deductive approach (see Decision 2). In several cases, analytic variables are applied on the go, without a theory-based introduction of the related constructs. However, to ensure the validity and replicability of the review (see Decision 11), it is necessary to explicitly define all the variables and codes used to analyze and synthesize the reviewed material (Durach et al. 2017 ; Seuring and Gold 2012 ). To build a valid framework as the SLR outcome, it is vital to ensure that the constructs used for the data analysis are sufficiently defined, mutually exclusive, and collectively exhaustive. For meta-analysis, the predefined constructs and items would demand quantitative coding so that the resulting data can be analyzed using statistical software tools such as SPSS or R (e.g., Xiao and Watson 2019 ). Pointing to bibliometric analysis again, the respective software would be used for data analysis, yielding different figures and paper clusters, which then require interpretation (e.g., Donthu et al. 2021 ; Xiao and Watson 2019 ).

Decision 10, on conducting subsequent statistical analysis, considers follow-up analysis of the coding results. Again, this is linked to the chosen SLR method, and a bibliometric analysis will require a different statistical analysis than a content analysis–based SLR (e.g., Lim et al. 2022 ; Xiao and Watson 2019 ). Beyond the use of content analysis and the qualitative interpretation of its results, applying contingency analysis offers the opportunity to quantitatively assess the links among constructs and items. It provides insights into which items are correlated with each other without implying causality. Thus, the interpretation of the findings must explain the causality behind the correlations between the constructs and the items, based on sound reasoning and on linking the findings to theoretical arguments. For SLRs, there have recently been two kinds of applications of contingency analysis, differentiated by the unit of analysis. De Lima et al. ( 2021 ) used the entire paper as the unit of analysis, deriving correlations from two constructs being used together in one paper. This is, of course, open to the critique of whether the constructs really represent correlated content. Moving a level deeper, Tröster and Hiete ( 2018 ) used single text passages on one aspect, argument, or thought as the unit of analysis. Such an approach is immune to the critique raised above and can yield more valid statistical support for thematic analysis. Another recent methodological contribution employing the same contingency analysis–based approach was made by Siems et al. ( 2021 ). Their analysis employs constructs from SSCM and dynamic capabilities. Employing four subsets of data (i.e., two time periods each in the food and automotive industries), they showed that the method allows distinguishing among time frames as well as among industries.
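
For the paper-level variant of contingency analysis, a common association measure for two binary coded constructs is the phi coefficient computed from the 2×2 co-occurrence table. The sketch below uses hypothetical coding results; it illustrates the general technique, not the exact computation of the cited studies.

```python
from math import sqrt

def phi_coefficient(papers, code_x, code_y):
    """Phi coefficient for the co-occurrence of two codes at the paper level
    (the unit of analysis in the De Lima et al. variant): +1 means the codes
    always appear together, 0 independence, -1 never together. This measures
    correlation only; causality must come from theoretical reasoning."""
    a = sum(1 for p in papers if code_x in p and code_y in p)          # both
    b = sum(1 for p in papers if code_x in p and code_y not in p)      # x only
    c = sum(1 for p in papers if code_x not in p and code_y in p)      # y only
    d = sum(1 for p in papers if code_x not in p and code_y not in p)  # neither
    denom = sqrt((a + b) * (c + d) * (a + c) * (b + d))
    return (a * d - b * c) / denom if denom else 0.0

# Hypothetical coding results (set of codes per paper):
papers = [{"risk", "resilience"}, {"risk", "resilience"},
          {"risk"}, {"collaboration"}]
print(round(phi_coefficient(papers, "risk", "resilience"), 2))
```

A passage-level analysis in the style of Tröster and Hiete would use the same computation but feed it coded text passages instead of whole papers, which is what blunts the co-occurrence critique.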

However, the unit of analysis must be precisely explained so that the reader can comprehend it. Both examples use contingency analysis to identify under-researched topics and develop them into research directions whose formulation represents the particular aim of an SLR (Paul and Criado 2020 ; Snyder 2019 ). Other statistical tools might also be applied, such as cluster analysis. Interestingly, Brandenburg and Rebs ( 2015 ) applied both contingency and cluster analyses. However, the authors stated that the contingency analysis did not yield usable results, so they opted for cluster analysis. In effect, Brandenburg and Rebs ( 2015 ) added analytical depth to their analysis of model types in SSCM by clustering them against the main analytical categories of content analysis. In any case, the application of statistical tools needs to fit the study purpose (Decision 1) and the literature sample (Decision 7), just as in their more conventional applications (e.g., in empirical research processes).

Decision 11 concerns the additional consideration of validity and reliability criteria and emphasizes the need to explain and justify the single steps of the research process (Seuring and Gold 2012 ), much in line with other types of research (Davis and Crombie 2001 ). This is critical to underlining the quality of the review but is often neglected in many submitted manuscripts. In our review, we find rich guidance on this decision, to which we want to direct readers (see Table 3 ). In particular, Durach et al. ( 2017 ) provide an entire section on biases and what needs to be considered and reported on them. Moreover, Snyder ( 2019 ) regularly reflects on these issues in her elaborations. This rich guidance elaborates on how to ensure the quality of the individual steps of the review process, such as sampling, study inclusion and exclusion, coding, and synthesizing, as well as more practical issues, including team composition and teamwork organization, which are discussed in some guidelines (e.g., Clark et al. 2021 ; Kraus et al. 2020 ). We only want to underline that the potential biases are, of course, to be seen in conjunction with Decisions 2, 3, 4, 5, 6, 7, 9, and 10. These decisions and the elaboration by Durach et al. ( 2017 ) should provide ample points of reflection that, however, many SLR manuscripts fail to address.

4.7 Step 6: reporting the results

In the final step, there are three decisions on which there is surprisingly little guidance, although reviews often fail in this critical part of the process (Kraus et al. 2020 ). The reviewed guidelines discuss the presentation almost exclusively, while almost no guidance is given on the overall paper structure or the key content to be reported.

Consequently, the first choice to be made in Decision 12 regards the paper structure. We suggest following the five-step logic of typical research papers (see also Fisch and Block 2018 ) and explain below only the few points in which SLR papers differ from other papers.

(1) Introduction: While the introduction would follow a conventional logic of problem statement, research question, contribution, and outline of the paper (see also Webster and Watson 2002 ), the next parts might depend on the theoretical choices made in Decision 2.

(2) Literature review section: If a deductive logic is taken, the paper usually has a conventional flow. After the introduction, the literature review section covers the theoretical background and the choice of constructs and variables for the analysis (De Lima et al. 2021 ; Dieste et al. 2022 ). To avoid confusing this section with the literature review itself, it can also be labeled after the reviewed object.

If an inductive approach is applied, it might be challenging to present the theoretical basis up front, as the codes emerge only from analyzing the material. In this case, the theory section might be rather short, concentrating on defining the core concepts or terms used, for example, in the keyword-based search for papers. The latter approach is exemplified by the study at hand, which presents a short review of the available literature in the introduction and the first part of the findings. However, we perform not a systematic but an integrative review, which allows for more freedom and creativity (Snyder 2019 ).

(3) Method section: This section should cover the steps and follow the logic presented in this paper or any of the reviewed guidelines so that the choices made during the research process are transparently disclosed (Denyer and Tranfield 2009 ; Paul et al. 2021 ; Xiao and Watson 2019 ). In particular, the search for papers and their selection requires a sound explanation of each step taken, including the reasons for the delimitation of the final paper sample. A stage that is often not covered in sufficient detail is data analysis (Seuring and Gold 2012 ). This also needs to be outlined so that the reader can comprehend how sense has been made of the material collected. Overall, the demands for SLR papers are similar to those for case studies, survey papers, or almost any piece of empirical research; thus, each step of the research process needs to be comprehensively described, including Decisions 4–10. This must also include addressing measures of validity and reliability (see Decision 11) or other suitable measures of rigor in the research process, since these are a critical issue in literature reviews (Durach et al. 2017 ). In particular, inductively conducted reviews are prone to subjective influences and thus require sound reporting of design choices and their justification.

(4) Findings: The findings typically start with a descriptive analysis of the literature covered, such as journals, distribution across years, or (empirical) methods applied (Tranfield et al. 2003 ). For modeling-related reviews, classifying papers against the approach chosen is a standard approach, but this can often also serve as an analytic category that provides detailed insights. The descriptive analysis should be kept short since a paper only presenting descriptive findings will not be of great interest to other researchers due to the missing contribution (Snyder 2019 ). Nevertheless, there are opportunities to provide interesting findings in the descriptive analysis. Beyond a mere description of the distributions of the single results, such as the distribution of methods used in the sample, authors should combine analytical categories to derive more detailed insights (see also Tranfield et al. 2003 ). The distribution of methods used might well be combined with the years of publication to identify and characterize different phases in the development of a field of research or its maturity. Moreover, there could be value in the analysis of theories applied in the review sample (e.g., Touboulic and Walker 2015 ; Zhu et al. 2022 ) and in reflecting on the interplay of different qualitative and quantitative methods in spurring the theoretical development of the reviewed field. This could yield detailed insights into methodological as well as theoretical gaps, and we would suggest explicitly linking the findings of such analyses to the research directions that an SLR typically provides. This link could help make the research directions much more tangible by giving researchers a clear indication of how to follow up on the findings, as, for example, done by Maestrini et al. ( 2017 ) or Dieste et al. ( 2022 ). 
In contrast to the mentioned examples of an actionable research agenda, a typical weakness of premature SLR manuscripts is that they ask rather superficially for more research in the different aspects they reviewed but remain silent about how exactly this can be achieved.

We would thus like to encourage future SLR authors to systematically investigate the potential to combine two categories of descriptive analysis to move this section of the findings to a higher level of quality, interest, and relevance. The same can, of course, be done with the thematic findings, which comprise the second part of this section.
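The suggested combination of two categories of descriptive analysis can be sketched as a simple cross-tabulation, for example of empirical method against publication period to characterize phases of a field. The sample data, field names, and the period breakpoint below are purely hypothetical illustrations, not taken from any reviewed paper:

```python
# Hypothetical sketch: cross-tabulating two descriptive categories
# (empirical method x publication period) for a coded paper sample.
from collections import Counter

# Invented coding results; in a real SLR these come from the data
# extraction tool chosen in Decision 8.
papers = [
    {"year": 2015, "method": "case study"},
    {"year": 2016, "method": "survey"},
    {"year": 2016, "method": "case study"},
    {"year": 2021, "method": "modeling"},
    {"year": 2021, "method": "survey"},
    {"year": 2022, "method": "modeling"},
]

def crosstab(papers, breakpoint=2020):
    """Count papers per (period, method) cell; the breakpoint splits
    the sample into an early and a recent phase of the field."""
    cells = Counter()
    for p in papers:
        period = "early" if p["year"] < breakpoint else "recent"
        cells[(period, p["method"])] += 1
    return cells

for (period, method), n in sorted(crosstab(papers).items()):
    print(f"{period:6s} | {method:10s} | {n}")
```

Reading the cells of such a table (e.g., modeling papers appearing only in the recent phase) is what turns a mere distribution into an insight about the maturity of the reviewed field.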

Moving into the thematic analysis, we have already reached Decision 13 on the presentation of the refined theoretical framework and the discussion of its contents. A first step might present the frequencies of the codes or constructs applied in the analysis. This allows the reader to understand which topics are relevant. If a rather small body of literature is analyzed, tables providing evidence on which paper has been coded for which construct might be helpful in improving the transparency of the research process. Tables or other forms of visualization might help to organize the many codes soundly (see also Durach et al. 2017 ; Paul and Criado 2020 ; Webster and Watson 2002 ). These findings might then lead to interpretation, for which it is necessary to extract meaning from the body of literature and present it accordingly (Snyder 2019 ). To do so, it might seem needless to say that the researchers should refer back to Decisions 1, 2, and 3 taken in Step 1 and their justifications. These typically identify the research gap to be filled, but after the lengthy process of the SLR, the authors often fail to step back from the coding results and put them into a larger perspective against the research gap defined in Decision 1 (see also Clark et al. 2021 ). To support this, it is certainly helpful to illustrate the findings in a figure or graph presenting the links among the constructs and items and adding causal reasoning to this (Durach et al. 2017 ; Paul and Criado 2020 ), such as the three figures by Seuring and Müller ( 2008 ) or other examples by De Lima et al. ( 2021 ) or Tipu ( 2022 ). This presentation should condense arguments made in the assessed literature but should also chart the course for future research. It will be these parts of the paper that are decisive for a strong SLR paper.
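For a small body of literature, the construct frequencies and the transparency table (which paper was coded for which construct) can be derived mechanically from the coding results. A minimal sketch, assuming a hypothetical coding outcome (paper names and construct labels are invented for illustration):

```python
# Hypothetical sketch: construct frequencies and a paper x construct
# transparency table derived from coding results (Decision 9).
codes = {
    "Paper A": {"traceability", "collaboration"},
    "Paper B": {"collaboration"},
    "Paper C": {"traceability", "risk", "collaboration"},
}

def construct_frequencies(codes):
    """Count how often each construct was coded across the sample."""
    freq = {}
    for constructs in codes.values():
        for c in constructs:
            freq[c] = freq.get(c, 0) + 1
    return freq

# Transparency table: an "x" marks that a paper was coded for a construct.
constructs = sorted({c for cs in codes.values() for c in cs})
print("paper   | " + " | ".join(constructs))
for paper, cs in codes.items():
    row = " | ".join("x" if c in cs else " " for c in constructs)
    print(f"{paper} | {row}")

print(construct_frequencies(codes))
```

The frequencies indicate which topics dominate the sample, while the table lets readers verify each coding decision, which supports the transparency demands raised above.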

Moreover, some guidelines define the most fruitful way of synthesizing the findings as concept-centric synthesis (Clark et al. 2021 ; Fisch and Block 2018 ; Webster and Watson 2002 ). As presented in the previous sentence, the presentation of the review findings is centered on the content or concept of “concept-centric synthesis.” It is accompanied by a reference to all or the most relevant literature in which the concept is evident. Contrastingly, Webster and Watson ( 2002 ) found that author-centric synthesis discusses individual papers and what they have done and found (just like this sentence here). They added that this approach fails to synthesize larger samples. We want to note that we used the latter approach in some places in this paper. However, this aims to actively refer the reader to these studies, as they stand out from our relatively small sample. Beyond this, we want to link back to Decision 3, the selection of a theoretical framework and constructs. These constructs, or the parts of a framework, can also serve to structure the findings section by using them as headlines for subsections (Seuring et al. 2021 ).

Last but not least, there might even be cases where core findings and relationships might be opposed, and alternative perspectives could be presented. This would certainly be challenging to argue for but worthwhile to do in order to drive the reviewed field forward. A related example is the paper by Zhu et al. ( 2022 ), who challenged the current debate at the intersection of blockchain applications and supply chain management and pointed to the limited use of theoretical foundations for related analysis.

(5) Discussion and Conclusion: The discussion needs to explain the contribution the paper makes to the extant literature, that is, which previous findings or hypotheses are supported or contradicted and which aspects of the findings are particularly interesting for the future development of the reviewed field. This is in line with the content required in the discussion sections of any other paper type. A typical structure might point to the contribution and put it into perspective with already existing research. Further, limitations should be addressed on both the theoretical and methodological sides. This elaboration of the limitations can be coupled with the considerations of the validity and reliability of the study in Decision 11. The implications for future research are a core aim of an SLR (Clark et al. 2021 ; Mulrow 1987 ; Snyder 2019 ) and should be addressed in a further part of the discussion section. Recently, a growing number of literature reviews have also provided research questions for future research that provide a very concrete and actionable output of the SLR (e.g. Dieste et al. 2022 ; Maestrini et al. 2017 ). Moreover, we would like to reiterate our call to clearly link the research implications to the SLR findings, which helps the authors craft more tangible research directions and helps the reader to follow the authors’ interpretation. Literature review papers are usually not strongly positioned toward managerial implications, but even these implications might be included.

As a kind of normal demand, the conclusion should provide an answer to the research question put forward in the introduction, thereby closing the cycle of arguments made in the paper.

Although all the work seems to be done once the paper is written and the contribution is fleshed out, one major decision remains. Decision 14 concerns the identification of an appropriate journal for submission. Despite the popularity of the SLR method, a rising number of journals explicitly limit the number of SLRs they publish. Moreover, only two guidelines elaborate on this decision, underlining the need for the following considerations.

Although it might seem most attractive to submit the paper to the highest-ranking journal for the reviewed topic, we argue for two critical and review-related decisions to be made during the research process that influence whether the paper fits a certain outlet:

The theoretical foundation of the SLR (Decision 3) usually relates to certain journals in which it is published or discussed. If a deductive approach was taken, the journals in which the foundational papers were published might be suitable since the review potentially contributes to the further validation or refinement of the frameworks. Overall, we need to keep in mind that a paper needs to be added to a discussion in the journal, and this can be based on the theoretical framework or the reviewed papers, as shown below.

Appropriate journals for publication can be derived from the journals of the analyzed papers (Decision 7) (see also Paul and Criado 2020). Submitting to a journal that is strongly represented in the sample creates a direct link to the theoretical debate hosted there. This choice is identifiable in most of the papers mentioned in this paper and is often illustrated in the descriptive analysis.

If the journal chosen for the submission was neither related to the theoretical foundation nor overly represented in the body of literature analyzed, an explicit justification in the paper itself might be needed. Alternatively, an explanation might be provided in the letter to the editor when submitting the paper. If such a statement is not presented, the likelihood of it being transferred into the review process and passing it is rather low. Finally, we want to refer readers interested in the specificities of the publication-related review process of SLRs to Webster and Watson ( 2002 ), who elaborated on this for Management Information Systems Quarterly.
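Shortlisting outlets from the journal distribution of the analyzed sample amounts to a simple frequency count. The journal names below are hypothetical sample entries used only to illustrate the mechanics:

```python
# Hypothetical sketch of Decision 14: ranking the journals represented
# in the final review sample to shortlist submission outlets.
from collections import Counter

# Journal of each paper in the final sample (invented values).
sample_journals = [
    "J Clean Prod", "J Supply Chain Manag", "J Clean Prod",
    "Int J Prod Econ", "J Clean Prod", "Int J Prod Econ",
]

def outlet_shortlist(journals, top=3):
    """Return the `top` most frequent journals in the sample."""
    return Counter(journals).most_common(top)

print(outlet_shortlist(sample_journals))
```

A journal that is over-represented in the sample hosts the debate the SLR joins, which makes the fit argument in the submission straightforward; a journal absent from the shortlist calls for the explicit justification discussed above.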

5 Discussion and conclusion

Critically reviewing the currently available SLR guidelines in the management domain, this paper synthesizes 14 key decisions to be made and reported across the SLR research process. Guidelines are presented for each decision, including tasks that assist in making sound choices to complete the research process and make meaningful contributions. Applying these guidelines should improve the rigor and robustness of many review papers and thus enhance their contributions. Moreover, some practical hints and best-practice examples are provided on issues that unexperienced authors regularly struggle to present in a manuscript (Fisch and Block 2018 ) and thus frustrate reviewers, readers, editors, and authors alike.

Strikingly, the review of prior guidelines reported in Table 3 revealed their focus on the technical details that need to be reported in any SLR. Consequently, our discipline has come a long way in crafting search strings, inclusion, and exclusion criteria, and elaborating on the validity and reliability of an SLR. Nevertheless, we left critical areas underdeveloped, such as the identification of relevant research gaps and questions, data extraction tools, analysis of the findings, and a meaningful and interesting reporting of the results. Our study contributes to filling these gaps by providing operationalized guidance to SLR authors, especially early-stage researchers who craft SLRs at the outset of their research journeys. At the same time, we need to underline that our paper is, of course, not the only useful reference for SLR authors. Instead, the readers are invited to find more guidance on the many aspects to consider in an SLR in the references we provide within the single decisions, as well as in Tables 1 and 2 . The tables also identify the strongholds of other guidelines that our paper does not want to replace but connect and extend at selected occasions, especially in SLR Steps 5 and 6.

The findings regularly underline the interconnection of the 14 decisions identified and discussed in this paper. We thus support Tranfield et al. ( 2003 ) who requested a flexible approach to the SLR while clearly reporting all design decisions and reflecting their impacts. In line with the guidance synthesized in this review, and especially Durach et al. ( 2017 ), we also present a refined framework in Figs.  1 and 2 . It specifically refines the original six-step SLR process by Durach et al. ( 2017 ) in three ways:

figure 2

Enriched six-step process including the core interrelations of the 14 decisions

First, we subdivided the six steps into 14 decisions to enhance the operationalization of the process and enable closer guidance (see Fig. 1). Second, we added a temporal dimension to Fig. 2 by positioning the decisions from left to right in their temporal sequence. This is based on systematically reflecting on whether one decision must be finished before the next; if so, the following decision moves to the right, and if not, the decisions are positioned below each other. Turning to Fig. 2, it becomes evident that Step 2, "determining the required characteristics of primary studies," and Step 3, "retrieving a sample of potentially relevant literature," including their Decisions 4–6, can be conducted iteratively. While this contrasts with the strict division of the six steps by Durach et al. (2017), it supports other guidance that suggests running pilot studies to iteratively define the literature sample, its sources, and characteristics (Snyder 2019; Tranfield et al. 2003; Xiao and Watson 2019). While this insight might suggest merging Steps 2 and 3, we refrain from this superficial change and from building yet another SLR process model. Instead, we prefer to add detail and depth to Durach et al.'s (2017) model.

(Decisions: D1: specifying the research gap and related research question, D2: opting for a theoretical approach, D3: defining the core theoretical framework and constructs, D4: specifying inclusion and exclusion criteria, D5: defining sources and databases, D6: defining search terms and crafting a search string, D7: including and excluding literature for detailed analysis and synthesis, D8: selecting data extraction tool(s), D9: coding against (pre-defined) constructs, D10: conducting a subsequent (statistical) analysis (optional), D11: ensuring validity and reliability, D12: deciding on the structure of the paper, D13: presenting a refined theoretical framework and discussing its contents, and D14: deriving an appropriate journal from the analyzed papers).

This is also done through the third refinement, which underlines which previous or later decisions need to be considered within each single decision. Such a consideration moves beyond the mere temporal sequence of steps and decisions that does not reflect the full complexity of the SLR process. Instead, its focus is on the need to align, for example, the conduct of the data analysis (Decision 9) with the theoretical approach (Decision 2) and consequently ensure that the chosen theoretical framework and the constructs (Decision 3) are sufficiently defined for the data analysis (i.e., mutually exclusive and comprehensively exhaustive). The mentioned interrelations are displayed in Fig.  2 by means of directed arrows from one decision to another. The underlying explanations can be found in the earlier paper sections by searching for the individual decisions in the text on the impacted decisions. Overall, it is unsurprising to see that the vast majority of interrelations are directed from the earlier to the later steps and decisions (displayed through arrows below the diagonal of decisions), while only a few interrelations are inverse.

Combining the first refinement of the original framework (defining the 14 decisions) and the third refinement (revealing the main interrelations among the decisions) underlines the contribution of this study in two main ways. First, the centrality of ensuring validity and reliability (Decision 11) is underlined. It becomes evident that considerations of validity and reliability are central to the overall SLR process since all steps before the writing of the paper need to be revisited in iterative cycles through Decision 11. Any lack of related considerations will most likely lead to reviewer critique, putting the SLR publication at risk. On the positive side of this centrality, we also found substantial guidance on this issue. In contrast, as evidenced in Table 3 , there is a lack of prior guidance on Decisions 1, 8, 10, 12, 13, and 14, which this study is helping to fill. At the same time, these underexplained decisions are influenced by 14 of the 44 (32%) incoming arrows in Fig.  2 and influence the other decisions in 6 of the 44 (14%) instances. These interrelations among decisions to be considered when crafting an SLR were scattered across prior guidelines, lacked in-depth elaborations, and were hardly explicitly related to each other. Thus, we hope that our study and the refined SLR process model will help enhance the quality and contribution of future SLRs.

Data availability

The data generated during this research is summarized in Table 3 and the analyzed papers are publicly available. They are clearly identified in Table 3 and the reference list.

Aguinis H, Ramani RS, Alabduljader N (2020) Best-practice recommendations for producers, evaluators, and users of methodological literature reviews. Organ Res Methods. https://doi.org/10.1177/1094428120943281


Beske P, Land A, Seuring S (2014) Sustainable supply chain management practices and dynamic capabilities in the food industry: a critical analysis of the literature. Int J Prod Econ 152:131–143. https://doi.org/10.1016/j.ijpe.2013.12.026

Brandenburg M, Rebs T (2015) Sustainable supply chain management: a modeling perspective. Ann Oper Res 229:213–252. https://doi.org/10.1007/s10479-015-1853-1

Carter CR, Rogers DS (2008) A framework of sustainable supply chain management: moving toward new theory. Int Jnl Phys Dist Logist Manage 38:360–387. https://doi.org/10.1108/09600030810882816

Carter CR, Washispack S (2018) Mapping the path forward for sustainable supply chain management: a review of reviews. J Bus Logist 39:242–247. https://doi.org/10.1111/jbl.12196

Clark WR, Clark LA, Raffo DM, Williams RI (2021) Extending fisch and block’s (2018) tips for a systematic review in management and business literature. Manag Rev Q 71:215–231. https://doi.org/10.1007/s11301-020-00184-8

Crane A, Henriques I, Husted BW, Matten D (2016) What constitutes a theoretical contribution in the business and society field? Bus Soc 55:783–791. https://doi.org/10.1177/0007650316651343

Davis J, Mengersen K, Bennett S, Mazerolle L (2014) Viewing systematic reviews and meta-analysis in social research through different lenses. Springerplus 3:511. https://doi.org/10.1186/2193-1801-3-511

Davis HTO, Crombie IK (2001) What is a systematic review? http://vivrolfe.com/ProfDoc/Assets/Davis%20What%20is%20a%20systematic%20review.pdf . Accessed 22 February 2019

De Lima FA, Seuring S, Sauer PC (2021) A systematic literature review exploring uncertainty management and sustainability outcomes in circular supply chains. Int J Prod Res. https://doi.org/10.1080/00207543.2021.1976859

Denyer D, Tranfield D (2009) Producing a systematic review. In: Buchanan DA, Bryman A (eds) The Sage handbook of organizational research methods. Sage Publications Ltd, Thousand Oaks, CA, pp 671–689


Devece C, Ribeiro-Soriano DE, Palacios-Marqués D (2019) Coopetition as the new trend in inter-firm alliances: literature review and research patterns. Rev Manag Sci 13:207–226. https://doi.org/10.1007/s11846-017-0245-0

Dieste M, Sauer PC, Orzes G (2022) Organizational tensions in industry 4.0 implementation: a paradox theory approach. Int J Prod Econ 251:108532. https://doi.org/10.1016/j.ijpe.2022.108532

Donthu N, Kumar S, Mukherjee D, Pandey N, Lim WM (2021) How to conduct a bibliometric analysis: an overview and guidelines. J Bus Res 133:285–296. https://doi.org/10.1016/j.jbusres.2021.04.070

Durach CF, Kembro J, Wieland A (2017) A new paradigm for systematic literature reviews in supply chain management. J Supply Chain Manag 53:67–85. https://doi.org/10.1111/jscm.12145

Fink A (2010) Conducting research literature reviews: from the internet to paper, 3rd edn. SAGE, Los Angeles

Fisch C, Block J (2018) Six tips for your (systematic) literature review in business and management research. Manag Rev Q 68:103–106. https://doi.org/10.1007/s11301-018-0142-x

Fritz MMC, Silva ME (2018) Exploring supply chain sustainability research in Latin America. Int Jnl Phys Dist Logist Manag 48:818–841. https://doi.org/10.1108/IJPDLM-01-2017-0023

Garcia-Torres S, Albareda L, Rey-Garcia M, Seuring S (2019) Traceability for sustainability: literature review and conceptual framework. Supp Chain Manag 24:85–106. https://doi.org/10.1108/SCM-04-2018-0152

Hanelt A, Bohnsack R, Marz D, Antunes Marante C (2021) A systematic review of the literature on digital transformation: insights and implications for strategy and organizational change. J Manag Stud 58:1159–1197. https://doi.org/10.1111/joms.12639

Kache F, Seuring S (2014) Linking collaboration and integration to risk and performance in supply chains via a review of literature reviews. Supp Chain Mnagmnt 19:664–682. https://doi.org/10.1108/SCM-12-2013-0478

Khalid RU, Seuring S (2019) Analyzing base-of-the-pyramid research from a (sustainable) supply chain perspective. J Bus Ethics 155:663–686. https://doi.org/10.1007/s10551-017-3474-x

Koufteros X, Mackelprang A, Hazen B, Huo B (2018) Structured literature reviews on strategic issues in SCM and logistics: part 2. Int Jnl Phys Dist Logist Manage 48:742–744. https://doi.org/10.1108/IJPDLM-09-2018-363

Kraus S, Breier M, Dasí-Rodríguez S (2020) The art of crafting a systematic literature review in entrepreneurship research. Int Entrep Manag J 16:1023–1042. https://doi.org/10.1007/s11365-020-00635-4

Kraus S, Mahto RV, Walsh ST (2021) The importance of literature reviews in small business and entrepreneurship research. J Small Bus Manag. https://doi.org/10.1080/00472778.2021.1955128

Kraus S, Breier M, Lim WM, Dabić M, Kumar S, Kanbach D, Mukherjee D, Corvello V, Piñeiro-Chousa J, Liguori E, Palacios-Marqués D, Schiavone F, Ferraris A, Fernandes C, Ferreira JJ (2022) Literature reviews as independent studies: guidelines for academic practice. Rev Manag Sci 16:2577–2595. https://doi.org/10.1007/s11846-022-00588-8

Leuschner R, Rogers DS, Charvet FF (2013) A meta-analysis of supply chain integration and firm performance. J Supply Chain Manag 49:34–57. https://doi.org/10.1111/jscm.12013

Lim WM, Rasul T (2022) Customer engagement and social media: revisiting the past to inform the future. J Bus Res 148:325–342. https://doi.org/10.1016/j.jbusres.2022.04.068

Lim WM, Yap S-F, Makkar M (2021) Home sharing in marketing and tourism at a tipping point: what do we know, how do we know, and where should we be heading? J Bus Res 122:534–566. https://doi.org/10.1016/j.jbusres.2020.08.051

Lim WM, Kumar S, Ali F (2022) Advancing knowledge through literature reviews: ‘what’, ‘why’, and ‘how to contribute.’ Serv Ind J 42:481–513. https://doi.org/10.1080/02642069.2022.2047941

Lusiantoro L, Yates N, Mena C, Varga L (2018) A refined framework of information sharing in perishable product supply chains. Int J Phys Distrib Logist Manag 48:254–283. https://doi.org/10.1108/IJPDLM-08-2017-0250

Maestrini V, Luzzini D, Maccarrone P, Caniato F (2017) Supply chain performance measurement systems: a systematic review and research agenda. Int J Prod Econ 183:299–315. https://doi.org/10.1016/j.ijpe.2016.11.005

Miemczyk J, Johnsen TE, Macquet M (2012) Sustainable purchasing and supply management: a structured literature review of definitions and measures at the dyad, chain and network levels. Supp Chain Mnagmnt 17:478–496. https://doi.org/10.1108/13598541211258564

Moher D, Liberati A, Tetzlaff J, Altman DG (2009) Preferred reporting items for systematic reviews and meta-analyses: the PRISMA statement. PLoS Med 6:e1000097. https://doi.org/10.1371/journal.pmed.1000097

Mukherjee D, Lim WM, Kumar S, Donthu N (2022) Guidelines for advancing theory and practice through bibliometric research. J Bus Res 148:101–115. https://doi.org/10.1016/j.jbusres.2022.04.042

Mulrow CD (1987) The medical review article: state of the science. Ann Intern Med 106:485–488. https://doi.org/10.7326/0003-4819-106-3-485

Page MJ, McKenzie JE, Bossuyt PM, Boutron I, Hoffmann TC, Mulrow CD, Shamseer L, Tetzlaff JM, Akl EA, Brennan SE, Chou R, Glanville J, Grimshaw JM, Hróbjartsson A, Lalu MM, Li T, Loder EW, Mayo-Wilson E, McDonald S, McGuinness LA, Stewart LA, Thomas J, Tricco AC, Welch VA, Whiting P, Moher D (2021) The PRISMA 2020 statement: an updated guideline for reporting systematic reviews. J Clin Epidemiol 134:178–189. https://doi.org/10.1016/j.jclinepi.2021.03.001

Pagell M, Wu Z (2009) Building a more complete theory of sustainable supply chain management using case studies of 10 exemplars. J Supply Chain Manag 45:37–56. https://doi.org/10.1111/j.1745-493X.2009.03162.x

Paul J, Criado AR (2020) The art of writing literature review: What do we know and what do we need to know? Int Bus Rev 29:101717. https://doi.org/10.1016/j.ibusrev.2020.101717

Paul J, Lim WM, O’Cass A, Hao AW, Bresciani S (2021) Scientific procedures and rationales for systematic literature reviews (SPAR-4-SLR). Int J Consum Stud. https://doi.org/10.1111/ijcs.12695

Pearce JM (2018) How to perform a literature review with free and open source software. Pract Assess Res Eval 23:1–13

Rhaiem K, Amara N (2021) Learning from innovation failures: a systematic review of the literature and research agenda. Rev Manag Sci 15:189–234. https://doi.org/10.1007/s11846-019-00339-2

Rojas-Córdova C, Williamson AJ, Pertuze JA, Calvo G (2022) Why one strategy does not fit all: a systematic review on exploration–exploitation in different organizational archetypes. Rev Manag Sci. https://doi.org/10.1007/s11846-022-00577-x

Sauer PC (2021) The complementing role of sustainability standards in managing international and multi-tiered mineral supply chains. Resour Conserv Recycl 174:105747. https://doi.org/10.1016/j.resconrec.2021.105747

Sauer PC, Seuring S (2017) Sustainable supply chain management for minerals. J Clean Prod 151:235–249. https://doi.org/10.1016/j.jclepro.2017.03.049

Seuring S, Gold S (2012) Conducting content-analysis based literature reviews in supply chain management. Supp Chain Mnagmnt 17:544–555. https://doi.org/10.1108/13598541211258609

Seuring S, Müller M (2008) From a literature review to a conceptual framework for sustainable supply chain management. J Clean Prod 16:1699–1710. https://doi.org/10.1016/j.jclepro.2008.04.020

Seuring S, Yawar SA, Land A, Khalid RU, Sauer PC (2021) The application of theory in literature reviews: illustrated with examples from supply chain management. Int J Oper Prod Manag 41:1–20. https://doi.org/10.1108/IJOPM-04-2020-0247

Siems E, Land A, Seuring S (2021) Dynamic capabilities in sustainable supply chain management: an inter-temporal comparison of the food and automotive industries. Int J Prod Econ 236:108128. https://doi.org/10.1016/j.ijpe.2021.108128

Snyder H (2019) Literature review as a research methodology: an overview and guidelines. J Bus Res 104:333–339. https://doi.org/10.1016/j.jbusres.2019.07.039

Spens KM, Kovács G (2006) A content analysis of research approaches in logistics research. Int Jnl Phys Dist Logist Manage 36:374–390. https://doi.org/10.1108/09600030610676259

Tachizawa EM, Wong CY (2014) Towards a theory of multi-tier sustainable supply chains: a systematic literature review. Supp Chain Mnagmnt 19:643–663. https://doi.org/10.1108/SCM-02-2014-0070

Tipu SAA (2022) Organizational change for environmental, social, and financial sustainability: a systematic literature review. Rev Manag Sci 16:1697–1742. https://doi.org/10.1007/s11846-021-00494-5

Touboulic A, Walker H (2015) Theories in sustainable supply chain management: a structured literature review. Int Jnl Phys Dist Logist Manage 45:16–42. https://doi.org/10.1108/IJPDLM-05-2013-0106

Tranfield D, Denyer D, Smart P (2003) Towards a methodology for developing evidence-informed management knowledge by means of systematic review. Br J Manag 14:207–222. https://doi.org/10.1111/1467-8551.00375

Tröster R, Hiete M (2018) Success of voluntary sustainability certification schemes: a comprehensive review. J Clean Prod 196:1034–1043. https://doi.org/10.1016/j.jclepro.2018.05.240

Wang Y, Han JH, Beynon-Davies P (2019) Understanding blockchain technology for future supply chains: a systematic literature review and research agenda. Supp Chain Mnagmnt 24:62–84. https://doi.org/10.1108/SCM-03-2018-0148

Webster J, Watson RT (2002) Analyzing the past to prepare for the future: writing a literature review. MIS Q 26:xiii–xxiii

Wiese A, Kellner J, Lietke B, Toporowski W, Zielke S (2012) Sustainability in retailing: a summative content analysis. Int J Retail Distrib Manag 40:318–335. https://doi.org/10.1108/09590551211211792

Xiao Y, Watson M (2019) Guidance on conducting a systematic literature review. J Plan Educ Res 39:93–112. https://doi.org/10.1177/0739456X17723971

Yavaprabhas K, Pournader M, Seuring S (2022) Blockchain as the “trust-building machine” for supply chain management. Ann Oper Res. https://doi.org/10.1007/s10479-022-04868-0

Zhu Q, Bai C, Sarkis J (2022) Blockchain technology and supply chains: the paradox of the atheoretical research discourse. Transp Res Part E Logist Transp Rev 164:102824. https://doi.org/10.1016/j.tre.2022.102824


Open Access funding enabled and organized by Projekt DEAL.

Author information

Authors and Affiliations

EM Strasbourg Business School, Université de Strasbourg, HuManiS UR 7308, 67000, Strasbourg, France

Philipp C. Sauer

Chair of Supply Chain Management, Faculty of Economics and Management, The University of Kassel, Kassel, Germany

Stefan Seuring


Contributions

The article is based on the idea and extensive experience of SS. The literature search and data analysis were mainly performed by PCS with support from SS, before the manuscript was written and revised in a joint effort by both authors.

Corresponding author

Correspondence to Stefan Seuring .

Ethics declarations

Conflict of interest

The authors have no competing interests to declare that are relevant to the content of this article.

Additional information

Publisher's note

Springer Nature remains neutral with regard to jurisdictional claims in published maps and institutional affiliations.

Rights and permissions

Open Access This article is licensed under a Creative Commons Attribution 4.0 International License, which permits use, sharing, adaptation, distribution and reproduction in any medium or format, as long as you give appropriate credit to the original author(s) and the source, provide a link to the Creative Commons licence, and indicate if changes were made. The images or other third party material in this article are included in the article's Creative Commons licence, unless indicated otherwise in a credit line to the material. If material is not included in the article's Creative Commons licence and your intended use is not permitted by statutory regulation or exceeds the permitted use, you will need to obtain permission directly from the copyright holder. To view a copy of this licence, visit http://creativecommons.org/licenses/by/4.0/ .


About this article

Sauer, P.C., Seuring, S. How to conduct systematic literature reviews in management research: a guide in 6 steps and 14 decisions. Rev Manag Sci 17 , 1899–1933 (2023). https://doi.org/10.1007/s11846-023-00668-3


Received : 29 September 2022

Accepted : 17 April 2023

Published : 12 May 2023

Issue Date : July 2023

DOI : https://doi.org/10.1007/s11846-023-00668-3


Keywords

  • Methodology
  • Replicability
  • Research process
  • Structured literature review
  • Systematic literature review


Midwestern University Homepage

Faculty Research Resources: Systematic Review & Evidence Synthesis


Types of Reviews

Evidence synthesis is any type of research methodology where researchers identify, select, and combine results from multiple studies.

This area of research is used to identify gaps in the evidence and establish a base for evidence-based decision-making.

A few common types of evidence synthesis are:

  • Systematic review
  • Literature (narrative) review
  • Scoping review
  • Rapid review
  • Meta-analysis
  • Literature Reviews Explained (LITR-EX)
  • Types of Evidence Synthesis - Evidence Synthesis Methods Interest Group

What is a Systematic Review?

Systematic reviews identify, appraise, and synthesize all available evidence to answer a well-formulated research question using explicit, reproducible methodology.

Key characteristics include: 

  • a systematic search in relevant subject databases
  • results independently screened according to pre-defined inclusion and exclusion criteria
  • critical appraisal of selected research studies
  • thorough presentation of the characteristics and findings of the included studies

Librarian Assistance

Librarians offer various levels of assistance with evidence synthesis. Refer to the library's policy for more information:

  • Policy for Librarian Searches for SRs and Other Complex Searches

Organizations with Evidence Synthesis Information

Cochrane, Scottish Intercollegiate Guidelines Network (SIGN), and Joanna Briggs Institute (JBI) are organizations that support evidence-based health decisions with evidence synthesis resources.

  • Cochrane Handbook
  • JBI Manual for Evidence Synthesis
  • JBI Scoping Review Network

Register a Protocol

It is best practice to register a systematic review with PROSPERO or another registry like Open Science Framework (OSF). 

  • Open Science Framework (OSF)

Protocol Templates & Guidance

  • PROSPERO Guidance
  • PRISMA-P - Protocols
  • Scoping Review Protocol: Guidance & Template - OSF
  • JBI Scoping Review Protocol Template

Systematic Review Databases & Searching

Systematic review searches are distinctive in their comprehensiveness. Studies show that consulting with a librarian improves the quality of systematic review searches.

Librarians can advise on the most appropriate search strategies and databases.

Common systematic review databases include:

  • Cochrane Central Register of Controlled Trials


Grey Literature

Grey literature is literature not controlled by commercial publishers; it includes conference papers, dissertations, theses, and trial registers.

A few repositories of grey literature are:

  • ClinicalTrials.gov

Conducting a Systematic Review: An Overview of the Process from NNLM

Running Time (56:37)

Reporting Guidelines

Preferred Reporting Items for Systematic Reviews and Meta-Analyses (PRISMA)

  • PRISMA: Statement, E&E, Checklist
  • PRISMA-NMA - Network Meta-Analyses

Covidence is a web-based software for screening and data extraction provided by Midwestern University Libraries. 

  • Covidence Registration *You must register for Covidence with a midwestern.edu email address*

Rayyan is an alternative to Covidence. It is a free online tool aiding in the screening and coding of studies for systematic reviews.

EndNote 21 is a reference management software that facilitates documenting record retrieval. 

  • Using EndNote 21 (Desktop) from Midwestern University
  • Using EndNote for Systematic Reviews from UCL

Critical Appraisal Tools

Evaluate studies for quality using assessment tools such as the Modified Downs & Black Checklist. Other examples of critical appraisal tools include:

  • CATevaluation
  • Critical Appraisal Skills Checklists - CASP
  • Critical Appraisal Tools - JBI
  • Critical Appraisal Worksheets - CEBM
  • Risk of Bias 2 (RoB 2) tool
  • ROBINS-I tool
  • SIGN Checklists

Reading List

  • Boland A, Cherry MG, Dickson R. Doing a systematic review: a student's guide . 2nd ed. SAGE; 2017.
  • Foster M J, Jewell S T. Assembling the pieces of a systematic review: guide for librarians . Rowman & Littlefield; 2017.
  • Grant MJ, Booth A. A typology of reviews: an analysis of 14 review types and associated methodologies. Health Info Libr J . 2009;26(2):91-108. doi:10.1111/j.1471-1842.2009.00848.x
  • Haile ZT. Critical Appraisal Tools and Reporting Guidelines. J Hum Lact . 2022;38(1):21-27. doi:10.1177/08903344211058374
  • Mak S, Thomas A. An Introduction to Scoping Reviews.  J Grad Med Educ . 2022;14(5):561-564. doi:10.4300/JGME-D-22-00620.1
  • PLoS Medicine Editors. Best practice in systematic reviews: the importance of protocols and registration. PLoS Med . 2011;8(2):e1001009. doi:10.1371/journal.pmed.1001009
  • Stellrecht E, Samuel A, Maggio LA. A reader’s guide to medical education systematic reviews. J Grad Med Educ . 2022;14(2):176-177. doi:10.4300/jgme-d-22-00114.1
  • Conducting a Systematic Review: An Overview of the Process - NNLM (56:37)
  • Covidence Webinars
  • JBI Systematic Reviews Playlist
  • The Pieces of Systematic Review Series with Margaret Foster - NNLM SCR
  • Rayyan How-To Videos
  • Last Updated: Feb 26, 2024 11:17 AM
  • URL: https://library.midwestern.edu/faculty-research


Doctor of Nursing Practice (DNP): A Guide to McFarlin Library Resources

  • Clinical Resources
  • Instructional Videos from McFarlin
  • What is the DNP Project?
  • Project Tips & Good Habits
  • Step 1: How do you know if you have the right idea?
  • Step 2: Who can I ask what the problem is?
  • Step 3: Who is going to be my DNP Project Champion outside of TU?
  • Step 4: How do I look at examples of previous DNP Projects?
  • Levels of Evidence

What is a systematic literature search?

  • What is a Systematic Review?
  • Creating a Search Strategy
  • Identifying Synonyms & Related Terms
  • Search Terms and Boolean Logic
  • Search Limits
  • Using CINAHL/MeSH Headings
  • PubMed Trainings

  • Grey Literature
  • Critical Appraisal Tools
  • Finding Questionnaires & Tools
  • Nursing Epidemiology & Statistics
  • Theories & Models
  • Data Management
  • Citation Management Tools
  • Tools for Writing and Publishing
  • Authorship & Plagiarism
  • Institutional Review Board (IRB)
  • What's new? RSS feeds from selected nursing sites and blogs
  • Additional Resources

A systematic literature search requires you to organise and tackle the search process in a structured and preplanned manner. It demands careful consideration of your search terms, selection of databases, and choice of search methods, and requires you to reflect on the search results obtained during the process.

With a systematic literature search, you have a greater chance of avoiding disparities and bias, and it enables you to identify gaps in the existing research. In this way you also minimise the risk of reproducing already existing research.

It is important that you document your searches during the process so that your searches are, in principle, reproducible.

Source:  Aarhus University Library

In some fields there is an increasing tendency to require that literature searches be carried out as systematic reviews. There are specific requirements for a systematic review, and it is often a very time-consuming task. The following definition outlines the scope of a systematic review:

“A systematic review is a structured and pre-planned synthesis of original studies involving predefined research question, inclusion criteria, search methods, selection procedure, quality assessment, data extraction, and data analysis. No original study should deliberately be excluded without explanation, and the results from each study should justify the conclusion.”

Source: Lund and Christensen (2016)

A well-constructed search strategy is the core of your systematic review and will be reported in the methods section of your paper. The search strategy retrieves the majority of the studies you will assess for eligibility and inclusion. The quality of the search strategy also affects what items may have been missed. McFarlin Librarians can be partners in this process.

Source:  University of Michigan Library

For a systematic review, it is important to broaden your search to maximize the retrieval of relevant results.

Use keywords:  how might other people describe the topic?

Identify the appropriate index terms  (subject headings) for your topic.

  • Index terms differ by database (MeSH, or  Medical Subject Headings ;  Emtree terms ; subject headings) and are assigned by experts based on the article's content.
  • Check the indexing of sentinel articles (3-6 articles that are fundamental to your topic).  Sentinel articles can also be used to  test your search results.

Include spelling variations  (e.g., behavior, behaviour). 

Boolean Operators (Using AND, OR, NOT)

There are three basic Boolean search commands:  AND ,  OR  and  NOT .

  • Example: (diabetes  AND  children  AND  processed foods) returns only results that contain all three search terms.
  • Example: (diabetes  OR  children  OR  processed foods) returns all items that contain any of the three search terms.
  • Example: (diabetes  NOT  obesity) returns items that are about diabetes, but will specifically  NOT  return items that contain the word obesity. 

Using Boolean Search with Exact Phrases

If you're searching for a phrase rather than just a single word, you can group the words together with quotation marks. Searching on "childhood diabetes" will return only items with that exact phrase.  
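These operators can be combined into a line-by-line strategy: group synonyms and spelling variations with OR, then join the resulting concept sets with AND. The following is a hypothetical sketch using this section's example terms; the numbered-line syntax and exact field behaviour vary by database:

```text
#1  diabetes OR "diabetes mellitus"
#2  children OR child OR pediatric OR paediatric
#3  "processed foods" OR "processed food"
#4  #1 AND #2 AND #3
#5  #4 NOT obesity
```

Note that line #2 captures both spelling variations, as recommended above, and that line #5 illustrates how NOT can silently discard relevant records (an article about both diabetes and obesity would be excluded), so use NOT with care in a systematic review search.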

A typical database search  limit  allows you to narrow results so that you retrieve articles that are most relevant to your research question. Limit types vary by database & include:

  • Article/publication type
  • Publication dates

In a systematic review search, you should use care when applying limits, as you may lose articles inadvertently.

PubMed offers a wide range of online trainings aimed at the novice as well as the experienced researcher. 

To see a full list of their trainings visit their online training hub .

Suggested trainings: 

  • PubMed: Find articles on a topic  
  • PubMed Subject Search: How It Works  
  • PubMed: Using the Advanced Search Builder  
  • Last Updated: Feb 27, 2024 8:00 AM
  • URL: https://libraries.utulsa.edu/DNP
  • Systematic review
  • Open access
  • Published: 19 February 2024

‘It depends’: what 86 systematic reviews tell us about what strategies to use to support the use of research in clinical practice

  • Annette Boaz   ORCID: orcid.org/0000-0003-0557-1294 1 ,
  • Juan Baeza 2 ,
  • Alec Fraser   ORCID: orcid.org/0000-0003-1121-1551 2 &
  • Erik Persson 3  

Implementation Science volume  19 , Article number:  15 ( 2024 ) Cite this article


The gap between research findings and clinical practice is well documented and a range of strategies have been developed to support the implementation of research into clinical practice. The objective of this study was to update and extend two previous reviews of systematic reviews of strategies designed to implement research evidence into clinical practice.

We developed a comprehensive systematic literature search strategy based on the terms used in the previous reviews to identify studies that looked explicitly at interventions designed to turn research evidence into practice. The search was performed in June 2022 in four electronic databases: Medline, Embase, Cochrane and Epistemonikos. We searched from January 2010 up to June 2022 and applied no language restrictions. Two independent reviewers appraised the quality of included studies using a quality assessment checklist. To reduce the risk of bias, papers were excluded following discussion between all members of the team. Data were synthesised using descriptive and narrative techniques to identify themes and patterns linked to intervention strategies, targeted behaviours, study settings and study outcomes.

We identified 32 reviews conducted between 2010 and 2022. The reviews are mainly of multi-faceted interventions ( n  = 20) although there are reviews focusing on single strategies (ICT, educational, reminders, local opinion leaders, audit and feedback, social media and toolkits). The majority of reviews report strategies achieving small impacts (normally on processes of care). There is much less evidence that these strategies have shifted patient outcomes. Furthermore, a lot of nuance lies behind these headline findings, and this is increasingly commented upon in the reviews themselves.

Combined with the two previous reviews, 86 systematic reviews of strategies to increase the implementation of research into clinical practice have been identified. We need to shift the emphasis away from isolating individual and multi-faceted interventions to better understanding and building more situated, relational and organisational capability to support the use of research in clinical practice. This will involve drawing on a wider range of research perspectives (including social science) in primary studies and diversifying the types of synthesis undertaken to include approaches such as realist synthesis which facilitate exploration of the context in which strategies are employed.


Contribution to the literature

Considerable time and money are invested in implementing and evaluating strategies to increase the implementation of research into clinical practice.

The growing body of evidence is not providing the anticipated clear lessons to support improved implementation.

Instead, what is needed is to better understand and build more situated, relational and organisational capability to support the use of research in clinical practice.

This would involve a more central role in implementation science for a wider range of perspectives, especially from the social, economic, political and behavioural sciences and for greater use of different types of synthesis, such as realist synthesis.

Introduction

The gap between research findings and clinical practice is well documented and a range of interventions has been developed to increase the implementation of research into clinical practice [ 1 , 2 ]. In recent years researchers have worked to improve the consistency in the ways in which these interventions (often called strategies) are described to support their evaluation. One notable development has been the emergence of Implementation Science as a field focusing explicitly on “the scientific study of methods to promote the systematic uptake of research findings and other evidence-based practices into routine practice” ([ 3 ] p. 1). The work of implementation science focuses on closing, or at least narrowing, the gap between research and practice. One contribution has been to map existing interventions, identifying 73 discrete strategies to support research implementation [ 4 ] which have been grouped into 9 clusters [ 5 ]. The authors note that they have not considered the evidence of effectiveness of the individual strategies and that a next step is to understand better which strategies perform best in which combinations and for what purposes [ 4 ]. Other authors have noted that there is also scope to learn more from other related fields of study such as policy implementation [ 6 ] and to draw on methods designed to support the evaluation of complex interventions [ 7 ].

The increase in activity designed to support the implementation of research into practice and improvements in reporting provided the impetus for an update of a review of systematic reviews of the effectiveness of interventions designed to support the use of research in clinical practice [ 8 ] which was itself an update of the review conducted by Grimshaw and colleagues in 2001. The 2001 review [ 9 ] identified 41 reviews considering a range of strategies including educational interventions, audit and feedback, computerised decision support to financial incentives and combined interventions. The authors concluded that all the interventions had the potential to promote the uptake of evidence in practice, although no one intervention seemed to be more effective than the others in all settings. They concluded that combined interventions were more likely to be effective than single interventions. The 2011 review identified a further 13 systematic reviews containing 313 discrete primary studies. Consistent with the previous review, four main strategy types were identified: audit and feedback; computerised decision support; opinion leaders; and multi-faceted interventions (MFIs). Nine of the reviews reported on MFIs. The review highlighted the small effects of single interventions such as audit and feedback, computerised decision support and opinion leaders. MFIs claimed an improvement in effectiveness over single interventions, although effect sizes remained small to moderate and this improvement in effectiveness relating to MFIs has been questioned in a subsequent review [ 10 ]. In updating the review, we anticipated a larger pool of reviews and an opportunity to consolidate learning from more recent systematic reviews of interventions.

This review updates and extends our previous review of systematic reviews of interventions designed to implement research evidence into clinical practice. To identify potentially relevant peer-reviewed research papers, we developed a comprehensive systematic literature search strategy based on the terms used in the Grimshaw et al. [ 9 ] and Boaz, Baeza and Fraser [ 8 ] overview articles. To ensure optimal retrieval, our search strategy was refined with support from an expert university librarian, considering the ongoing improvements in the development of search filters for systematic reviews since our first review [ 11 ]. We also wanted to include technology-related terms (e.g. apps, algorithms, machine learning, artificial intelligence) to find studies that explored interventions based on the use of technological innovations as mechanistic tools for increasing the use of evidence into practice (see Additional file 1 : Appendix A for full search strategy).

The search was performed in June 2022 in the following electronic databases: Medline, Embase, Cochrane and Epistemonikos. We searched for articles published since the 2011 review. We searched from January 2010 up to June 2022 and applied no language restrictions. Reference lists of relevant papers were also examined.

We uploaded the results using EPPI-Reviewer, a web-based tool that facilitated semi-automation of the screening process and removal of duplicate studies. We made particular use of a priority screening function to reduce screening workload and avoid ‘data deluge’ [ 12 ]. Through machine learning, one reviewer screened a smaller number of records ( n  = 1200) to train the software to predict whether a given record was more likely to be relevant or irrelevant, thus pulling the relevant studies towards the beginning of the screening process. This automation did not replace manual work but helped the reviewer to identify eligible studies more quickly. During the selection process, we included studies that looked explicitly at interventions designed to turn research evidence into practice. Studies were included if they met the following pre-determined inclusion criteria:

The study was a systematic review

Search terms were included

Focused on the implementation of research evidence into practice

The methodological quality of the included studies was assessed as part of the review
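The priority screening step described in the methods above can be illustrated in outline: a classifier trained on an initial batch of manually screened records ranks the remaining records so that likely-relevant ones are screened first. The sketch below is a minimal, hypothetical illustration using scikit-learn and toy record titles; it is not the EPPI-Reviewer implementation used in this review, nor the review's actual data.

```python
# Minimal sketch of 'priority screening': a classifier trained on an
# initial batch of manually screened records ranks the remaining records
# so likely-relevant ones surface first. Toy, hypothetical data only.
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.linear_model import LogisticRegression

# (title, label) pairs from an initial manual screen: 1 = relevant, 0 = irrelevant
screened = [
    ("audit and feedback to implement guidelines", 1),
    ("opinion leaders promoting evidence-based practice", 1),
    ("reminder systems for clinical recommendations", 1),
    ("surgical technique for knee replacement", 0),
    ("drug pharmacokinetics in healthy volunteers", 0),
    ("genome sequencing of bacterial isolates", 0),
]
unscreened = [
    "crystal structure of a membrane protein",
    "computerised decision support and evidence uptake",
]

titles, labels = zip(*screened)
vectorizer = TfidfVectorizer()
X = vectorizer.fit_transform(titles)
model = LogisticRegression().fit(X, list(labels))

# Rank the unscreened records by predicted probability of relevance,
# so a human reviewer sees the likely-relevant records first.
probabilities = model.predict_proba(vectorizer.transform(unscreened))[:, 1]
ranked = sorted(zip(unscreened, probabilities), key=lambda pair: -pair[1])
for title, p in ranked:
    print(f"{p:.2f}  {title}")
```

In practice the model is retrained as more records are screened, pulling relevant studies towards the beginning of the screening queue; the human reviewer still makes every include/exclude decision.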

Study populations included healthcare providers and patients. The EPOC taxonomy [ 13 ] was used to categorise the strategies. The EPOC taxonomy has four domains: delivery arrangements, financial arrangements, governance arrangements and implementation strategies. The implementation strategies domain includes 20 strategies targeted at healthcare workers. Numerous EPOC strategies were assessed in the review including educational strategies, local opinion leaders, reminders, ICT-focused approaches and audit and feedback. Some strategies that did not fit easily within the EPOC categories were also included. These were social media strategies and toolkits, and multi-faceted interventions (MFIs) (see Table  2 ). Some systematic reviews included comparisons of different interventions while other reviews compared one type of intervention against a control group. Outcomes related to improvements in health care processes or patient well-being. Numerous individual study types (RCT, CCT, BA, ITS) were included within the systematic reviews.

We excluded papers that:

Focused on changing patient rather than provider behaviour

Had no demonstrable outcomes

Made unclear or no reference to research evidence

The last of these criteria was sometimes difficult to judge, and there was considerable discussion amongst the research team as to whether the link between research evidence and practice was sufficiently explicit in the interventions analysed. As we discussed in the previous review [ 8 ] in the field of healthcare, the principle of evidence-based practice is widely acknowledged and tools to change behaviour such as guidelines are often seen to be an implicit codification of evidence, despite the fact that this is not always the case.

Reviewers employed a two-stage process to select papers for inclusion. First, all titles and abstracts were screened by one reviewer to determine whether the study met the inclusion criteria. Two papers [ 14 , 15 ] were identified that fell just before the 2010 cut-off. As they were not identified in the searches for the first review [ 8 ] they were included and progressed to assessment. Each paper was rated as include, exclude or maybe. The full texts of 111 relevant papers were assessed independently by at least two authors. To reduce the risk of bias, papers were excluded following discussion between all members of the team. Thirty-two papers met the inclusion criteria and proceeded to data extraction. The study selection procedure is documented in a PRISMA literature flow diagram (see Fig.  1 ). We were able to include French, Spanish and Portuguese papers in the selection reflecting the language skills in the study team, but none of the papers identified met the inclusion criteria. Other non-English language papers were excluded.

Fig. 1 PRISMA flow diagram. Source: authors

One reviewer extracted data on strategy type, number of included studies, location, target population, effectiveness and scope of impact from the included studies. Two reviewers then independently read each paper and noted key findings and broad themes of interest which were then discussed amongst the wider authorial team. Two independent reviewers appraised the quality of included studies using a Quality Assessment Checklist based on Oxman and Guyatt [ 16 ] and Francke et al. [ 17 ]. Each study was assigned a quality score ranging from 1 (extensive flaws) to 7 (minimal flaws) (see Additional file 2 : Appendix B). All disagreements were resolved through discussion. Studies were not excluded in this updated overview based on methodological quality as we aimed to reflect the full extent of current research into this topic.

The extracted data were synthesised using descriptive and narrative techniques to identify themes and patterns in the data linked to intervention strategies, targeted behaviours, study settings and study outcomes.

Thirty-two studies were included in the systematic review. Table 1 provides a detailed overview of the included systematic reviews comprising reference, strategy type, quality score, number of included studies, location, target population, effectiveness and scope of impact (see Table 1 at the end of the manuscript). Overall, the quality of the studies was high. Twenty-three studies scored 7, six studies scored 6, one study scored 5, one study scored 4 and one study scored 3. The primary focus of the review was on reviews of effectiveness studies, but a small number of reviews did include data from a wider range of methods including qualitative studies which added to the analysis in the papers [ 18 , 19 , 20 , 21 ]. The majority of reviews report strategies achieving small impacts (normally on processes of care). There is much less evidence that these strategies have shifted patient outcomes. In this section, we discuss the different EPOC-defined implementation strategies in turn. Interestingly, we found only two ‘new’ approaches in this review that did not fit into the existing EPOC approaches. These are a review focused on the use of social media and a review considering toolkits. In addition to single interventions, we also discuss multi-faceted interventions. These were the most common intervention approach overall. A summary is provided in Table  2 .

Educational strategies

The overview identified three systematic reviews focusing on educational strategies. Grudniewicz et al. [ 22 ] explored the effectiveness of printed educational materials on primary care physician knowledge, behaviour and patient outcomes and concluded they were not effective in any of these aspects. Koota, Kääriäinen and Melender [ 23 ] focused on educational interventions promoting evidence-based practice among emergency room/accident and emergency nurses and found that interventions involving face-to-face contact led to significant or highly significant effects on patient benefits and emergency nurses’ knowledge, skills and behaviour. Interventions using written self-directed learning materials also led to significant improvements in nurses’ knowledge of evidence-based practice. Although the quality of the studies was high, the review primarily included small studies with low response rates, and many of them relied on self-assessed outcomes; consequently, the strength of the evidence for these outcomes is modest. Wu et al. [ 20 ] questioned if educational interventions aimed at nurses to support the implementation of evidence-based practice improve patient outcomes. Although based on evaluation projects and qualitative data, their results also suggest that positive changes on patient outcomes can be made following the implementation of specific evidence-based approaches (or projects). The differing positive outcomes for educational strategies aimed at nurses might indicate that the target audience is important.

Local opinion leaders

Flodgren et al. [ 24 ] was the only systematic review focusing solely on opinion leaders. The review found that local opinion leaders alone, or in combination with other interventions, can be effective in promoting evidence‐based practice, but this varies both within and between studies and the effect on patient outcomes is uncertain. The review found that, overall, any intervention involving opinion leaders probably improves healthcare professionals’ compliance with evidence-based practice but varies within and across studies. However, how opinion leaders had an impact could not be determined because insufficient detail was provided, illustrating that reporting specific details in published studies is important if diffusion of effective methods of increasing evidence-based practice is to be spread across a system. The usefulness of this review is questionable because it cannot provide evidence of what is an effective opinion leader, whether teams of opinion leaders or a single opinion leader are most effective, or the most effective methods used by opinion leaders.

Reminders

Pantoja et al. [ 26 ] was the only systematic review included in the overview focusing solely on manually generated reminders delivered on paper. The review explored how these affected professional practice and patient outcomes. The review concluded that manually generated reminders delivered on paper as a single intervention probably led to small to moderate increases in adherence to clinical recommendations, and they could be used as a single quality improvement intervention. However, the authors indicated that this intervention would make little or no difference to patient outcomes. The authors state that such a low-tech intervention may be useful in low- and middle-income countries where paper records are more likely to be the norm.

ICT-focused approaches

The three ICT-focused reviews [ 14 , 27 , 28 ] showed mixed results. Jamal, McKenzie and Clark [ 14 ] explored the impact of health information technology on the quality of medical and health care. They examined the impact of electronic health records, computerised provider order entry, and decision support systems. This showed a positive improvement in adherence to evidence-based guidelines but not to patient outcomes. The number of studies included in the review was low and so a conclusive recommendation could not be reached based on this review. Similarly, Brown et al. [ 28 ] found that technology-enabled knowledge translation interventions may improve knowledge of health professionals, but all eight studies raised concerns of bias. The De Angelis et al. [ 27 ] review was more promising, reporting that ICT can be a good way of disseminating clinical practice guidelines but conclude that it is unclear which type of ICT method is the most effective.

Audit and feedback

Sykes, McAnuff and Kolehmainen [ 29 ] examined whether audit and feedback were effective in dementia care and concluded that it remains unclear which ingredients of audit and feedback are successful as the reviewed papers illustrated large variations in the effectiveness of interventions using audit and feedback.

Non-EPOC listed strategies: social media, toolkits

There were two new (non-EPOC listed) intervention types identified in this review compared to the 2011 review — fewer than anticipated. We categorised a third — ‘care bundles’ [ 36 ] as a multi-faceted intervention due to its description in practice and a fourth — ‘Technology Enhanced Knowledge Transfer’ [ 28 ] was classified as an ICT-focused approach. The first new strategy was identified in Bhatt et al.’s [ 30 ] systematic review of the use of social media for the dissemination of clinical practice guidelines. They reported that the use of social media resulted in a significant improvement in knowledge and compliance with evidence-based guidelines compared with more traditional methods. They noted that a wide selection of different healthcare professionals and patients engaged with this type of social media and its global reach may be significant for low- and middle-income countries. This review was also noteworthy for developing a simple stepwise method for using social media for the dissemination of clinical practice guidelines. However, it is debatable whether social media can be classified as an intervention or just a different way of delivering an intervention. For example, the review discussed involving opinion leaders and patient advocates through social media. However, this was a small review that included only five studies, so further research in this new area is needed. Yamada et al. [ 31 ] draw on 39 studies to explore the application of toolkits, 18 of which had toolkits embedded within larger KT interventions, and 21 of which evaluated toolkits as standalone interventions. The individual component strategies of the toolkits were highly variable though the authors suggest that they align most closely with educational strategies. 
The authors conclude that toolkits as either standalone strategies or as part of MFIs hold some promise for facilitating evidence use in practice but caution that the quality of many of the primary studies included is considered weak limiting these findings.

Multi-faceted interventions

The majority of the systematic reviews (n = 20) reported on more than one intervention type. Some of these systematic reviews focus exclusively on multi-faceted interventions, whilst others compare different single or combined interventions aimed at achieving similar outcomes in particular settings. While these two approaches are often described in a similar way, they are quite distinct: the former report how multiple strategies may be strategically combined in pursuit of an agreed goal, whilst the latter report how different strategies may be used incidentally, in sometimes contrasting settings, in pursuit of similar goals. Ariyo et al. [ 35 ] helpfully summarise five key elements often found in effective MFI strategies in LMICs, which may also be transferable to HICs. First, effective MFIs encourage a multi-disciplinary approach, acknowledging the roles played by different professional groups in collectively incorporating evidence-informed practice. Second, they utilise leadership, drawing on a wide set of clinical and non-clinical actors including managers and even government officials. Third, multiple types of educational practices are utilised, including input from patients as stakeholders in some cases. Fourth, protocols, checklists and bundles are used, most effectively when local ownership is encouraged. Finally, most MFIs include an emphasis on monitoring and evaluation [ 35 ]. In contrast, other studies offer little information about the nature of the different MFI components of included studies, which makes it difficult to extrapolate much learning from them in relation to why or how MFIs might affect practice (e.g. [ 28 , 38 ]). Ultimately, context matters, which some review authors argue makes it difficult to say with real certainty whether single interventions or MFIs are superior (e.g. [ 21 , 27 ]). Taking all the systematic reviews together, we may conclude that MFIs appear more likely to generate positive results than single interventions (e.g. [ 34 , 45 ]), though other reviews should make us cautious (e.g. [ 32 , 43 ]).

While multi-faceted interventions still seem to be more effective than single-strategy interventions, there were important distinctions between how the results of reviews of MFIs are interpreted in this review as compared to the previous reviews [ 8 , 9 ], reflecting greater nuance and debate in the literature. This was particularly noticeable where the effectiveness of MFIs was compared to single strategies, reflecting developments widely discussed in previous studies [ 10 ]. We found that most systematic reviews are bounded by their clinical, professional, spatial, system, or setting criteria and often seek to draw out implications for the implementation of evidence in their areas of specific interest (such as nursing or acute care). Frequently this means combining all relevant studies to explore the respective foci of each systematic review. Therefore, most reviews we categorised as MFIs actually include highly variable numbers and combinations of intervention strategies and highly heterogeneous original study designs. This makes statistical analyses of the type used by Squires et al. [ 10 ] on the three reviews in their paper impossible. It also makes extrapolating findings and commenting on broad themes complex and difficult. This may suggest that future research should shift its focus from merely examining ‘what works’ to ‘what works where and what works for whom’, perhaps pointing to the value of realist approaches to these complex review topics [ 48 , 49 ] and other more theory-informed approaches [ 50 ].

Some reviews have a relatively small number of studies (i.e. fewer than 10) and the authors are often understandably reluctant to engage with wider debates about the implications of their findings. Other, larger studies do engage in deeper discussions about internal comparisons of findings across included studies and also contextualise these in wider debates. Some of the most informative studies (e.g. [ 35 , 40 ]) move beyond EPOC categories and contextualise MFIs within wider systems thinking and implementation theory. This distinction between MFIs and single interventions can actually be very useful as it offers lessons about the contexts in which individual interventions might have bounded effectiveness (e.g. educational interventions for individual change). Taken as a whole, this may also help in terms of how and when to conjoin single interventions into effective MFIs.

In the two previous reviews, a consistent finding was that MFIs were more effective than single interventions [ 8 , 9 ]. However, like Squires et al. [ 10 ], this overview is more equivocal on this important issue. There are four points which may help account for the differences in findings in this regard. Firstly, the diversity of the systematic reviews in terms of clinical topic or setting is an important factor. Secondly, there is heterogeneity of the studies within the included systematic reviews themselves. Thirdly, there is a lack of consistency with regard to how MFIs are defined and which strategies they include. Finally, there are epistemological differences across the papers and the reviews. This means that the results that are presented depend on the methods used to measure, report, and synthesise them. For instance, some reviews highlight that education strategies can be useful to improve provider understanding — but without wider organisational or system-level change, they may struggle to deliver sustained transformation [ 19 , 44 ].

It is also worth highlighting the importance of the theory of change underlying the different interventions. Where authors of the systematic reviews draw on theory, there is space to discuss and explain findings. We note a distinction between theoretical and atheoretical systematic review discussion sections. Atheoretical reviews tend to present acontextual findings (for instance, one study found very positive results for one intervention, and this gets highlighted in the abstract), whilst theoretically informed reviews attempt to contextualise and explain patterns within the included studies. Theory-informed systematic reviews seem more likely to offer profound and useful insights (see [ 19 , 35 , 40 , 43 , 45 ]). We find that the most insightful systematic reviews of MFIs engage in theoretical generalisation: they attempt to go beyond the data of individual studies and discuss the wider implications of the findings of the studies within their reviews, drawing on implementation theory. At the same time, they highlight the active role of context and the wider relational and system-wide issues linked to implementation. It is these types of investigations that can help providers further develop evidence-based practice.

This overview has identified a small, but insightful set of papers that interrogate and help theorise why, how, for whom, and in which circumstances it might be the case that MFIs are superior (see [ 19 , 35 , 40 ] once more). At the level of this overview — and in most of the systematic reviews included — it appears to be the case that MFIs struggle with the question of attribution. In addition, there are other important elements that are often unmeasured, or unreported (e.g. costs of the intervention — see [ 40 ]). Finally, the stronger systematic reviews [ 19 , 35 , 40 , 43 , 45 ] engage with systems issues, human agency and context [ 18 ] in a way that was not evident in the systematic reviews identified in the previous reviews [ 8 , 9 ]. The earlier reviews lacked any theory of change that might explain why MFIs might be more effective than single ones — whereas now some systematic reviews do this, which enables them to conclude that sometimes single interventions can still be more effective.

As Nilsen et al. ([ 6 ] p. 7) note ‘Study findings concerning the effectiveness of various approaches are continuously synthesized and assembled in systematic reviews’. We may have gone as far as we can in understanding the implementation of evidence through systematic reviews of single and multi-faceted interventions and the next step would be to conduct more research exploring the complex and situated nature of evidence used in clinical practice and by particular professional groups. This would further build on the nuanced discussion and conclusion sections in a subset of the papers we reviewed. This might also support the field to move away from isolating individual implementation strategies [ 6 ] to explore the complex processes involving a range of actors with differing capacities [ 51 ] working in diverse organisational cultures. Taxonomies of implementation strategies do not fully account for the complex process of implementation, which involves a range of different actors with different capacities and skills across multiple system levels. There is plenty of work to build on, particularly in the social sciences, which currently sits at the margins of debates about evidence implementation (see for example, Normalisation Process Theory [ 52 ]).

There are several changes that we have identified in this overview of systematic reviews in comparison to the review we published in 2011 [ 8 ]. A consistent and welcome finding is that the overall quality of the systematic reviews themselves appears to have improved between the two reviews, although this is not reflected upon in the papers. This is exhibited through better, clearer reporting mechanisms in relation to the mechanics of the reviews, alongside a greater attention to, and deeper description of, how potential biases in included papers are discussed. Additionally, there is an increased, but still limited, inclusion of original studies conducted in low- and middle-income countries as opposed to just high-income countries. Importantly, we found that many of these systematic reviews are attuned to, and comment upon, the contextual distinctions of pursuing evidence-informed interventions in health care settings in different economic contexts. Furthermore, systematic reviews included in this updated article cover a wider set of clinical specialities (both within and beyond hospital settings) and focus on a wider set of healthcare professions, discussing similarities, differences and inter-professional challenges faced therein, compared to the earlier reviews. These wider ranges of studies highlight that a particular intervention or group of interventions may work well for one professional group but be ineffective for another. This diversity of study settings allows us to consider the important role context (in its many forms) plays in implementing evidence into practice. Examining the complex and varied context of health care will help us address what Nilsen et al. ([ 6 ] p. 1) described as, ‘society’s health problems [that] require research-based knowledge acted on by healthcare practitioners together with implementation of political measures from governmental agencies’. This will help shift implementation science, ‘beyond a success or failure perspective towards improved analysis of variables that could explain the impact of the implementation process’ ([ 6 ] p. 2).

This review brings together 32 papers considering individual and multi-faceted interventions designed to support the use of evidence in clinical practice. The majority of reviews report strategies achieving small impacts (normally on processes of care). There is much less evidence that these strategies have shifted patient outcomes. Combined with the two previous reviews, this means that 86 systematic reviews of strategies to increase the implementation of research into clinical practice have now been considered. As a whole, this substantial body of knowledge struggles to tell us more about the use of individual and multi-faceted interventions than: ‘it depends’. To really move forwards in addressing the gap between research evidence and practice, we may need to shift the emphasis away from isolating individual and multi-faceted interventions to better understanding and building more situated, relational and organisational capability to support the use of research in clinical practice. This will involve drawing on a wider range of perspectives, especially from the social, economic, political and behavioural sciences, in primary studies, and diversifying the types of synthesis undertaken to include approaches such as realist synthesis, which facilitate exploration of the context in which strategies are employed. Harvey et al. [ 53 ] suggest that when context is likely to be critical to implementation success there are a range of primary research approaches (participatory research, realist evaluation, developmental evaluation, ethnography, quality/rapid-cycle improvement) that are likely to be appropriate and insightful. While these approaches often form part of implementation studies in the form of process evaluations, they are usually relatively small scale in relation to implementation research as a whole. As a result, the findings often do not make it into the subsequent systematic reviews. This review provides further evidence that we need to bring qualitative approaches in from the periphery to play a central role in many implementation studies and subsequent evidence syntheses. It would be helpful for systematic reviews, at the very least, to include more detail about the interventions and their implementation in terms of how and why they worked.

Availability of data and materials

The datasets used and/or analysed during the current study are available from the corresponding author on reasonable request.

Abbreviations

BA: Before and after study

CCT: Controlled clinical trial

EPOC: Effective Practice and Organisation of Care

HICs: High-income countries

ICT: Information and Communications Technology

ITS: Interrupted time series

KT: Knowledge translation

LMICs: Low- and middle-income countries

RCT: Randomised controlled trial

References

Grol R, Grimshaw J. From best evidence to best practice: effective implementation of change in patients’ care. Lancet. 2003;362:1225–30. https://doi.org/10.1016/S0140-6736(03)14546-1 .

Green LA, Seifert CM. Translation of research into practice: why we can’t “just do it.” J Am Board Fam Pract. 2005;18:541–5. https://doi.org/10.3122/jabfm.18.6.541 .

Eccles MP, Mittman BS. Welcome to Implementation Science. Implement Sci. 2006;1:1–3. https://doi.org/10.1186/1748-5908-1-1 .

Powell BJ, Waltz TJ, Chinman MJ, Damschroder LJ, Smith JL, Matthieu MM, et al. A refined compilation of implementation strategies: results from the Expert Recommendations for Implementing Change (ERIC) project. Implement Sci. 2015;10:2–14. https://doi.org/10.1186/s13012-015-0209-1 .

Waltz TJ, Powell BJ, Matthieu MM, Damschroder LJ, et al. Use of concept mapping to characterize relationships among implementation strategies and assess their feasibility and importance: results from the Expert Recommendations for Implementing Change (ERIC) study. Implement Sci. 2015;10:1–8. https://doi.org/10.1186/s13012-015-0295-0 .

Nilsen P, Ståhl C, Roback K, et al. Never the twain shall meet? - a comparison of implementation science and policy implementation research. Implementation Sci. 2013;8:2–12. https://doi.org/10.1186/1748-5908-8-63 .

Rycroft-Malone J, Seers K, Eldh AC, et al. A realist process evaluation within the Facilitating Implementation of Research Evidence (FIRE) cluster randomised controlled international trial: an exemplar. Implementation Sci. 2018;13:1–15. https://doi.org/10.1186/s13012-018-0811-0 .

Boaz A, Baeza J, Fraser A, European Implementation Score Collaborative Group (EIS). Effective implementation of research into practice: an overview of systematic reviews of the health literature. BMC Res Notes. 2011;4:212. https://doi.org/10.1186/1756-0500-4-212 .

Grimshaw JM, Shirran L, Thomas R, Mowatt G, Fraser C, Bero L, et al. Changing provider behavior – an overview of systematic reviews of interventions. Med Care. 2001;39(8 Suppl 2):II2–45.

Squires JE, Sullivan K, Eccles MP, et al. Are multifaceted interventions more effective than single-component interventions in changing health-care professionals’ behaviours? An overview of systematic reviews. Implement Sci. 2014;9:1–22. https://doi.org/10.1186/s13012-014-0152-6 .

Salvador-Oliván JA, Marco-Cuenca G, Arquero-Avilés R. Development of an efficient search filter to retrieve systematic reviews from PubMed. J Med Libr Assoc. 2021;109:561–74. https://doi.org/10.5195/jmla.2021.1223 .

Thomas JM. Diffusion of innovation in systematic review methodology: why is study selection not yet assisted by automation? OA Evid Based Med. 2013;1:1–6.

Effective Practice and Organisation of Care (EPOC). The EPOC taxonomy of health systems interventions. EPOC Resources for review authors. Oslo: Norwegian Knowledge Centre for the Health Services; 2016. epoc.cochrane.org/epoc-taxonomy . Accessed 9 Oct 2023.

Jamal A, McKenzie K, Clark M. The impact of health information technology on the quality of medical and health care: a systematic review. Health Inf Manag. 2009;38:26–37. https://doi.org/10.1177/183335830903800305 .

Menon A, Korner-Bitensky N, Kastner M, et al. Strategies for rehabilitation professionals to move evidence-based knowledge into practice: a systematic review. J Rehabil Med. 2009;41:1024–32. https://doi.org/10.2340/16501977-0451 .

Oxman AD, Guyatt GH. Validation of an index of the quality of review articles. J Clin Epidemiol. 1991;44:1271–8. https://doi.org/10.1016/0895-4356(91)90160-b .

Francke AL, Smit MC, de Veer AJ, et al. Factors influencing the implementation of clinical guidelines for health care professionals: a systematic meta-review. BMC Med Inform Decis Mak. 2008;8:1–11. https://doi.org/10.1186/1472-6947-8-38 .

Jones CA, Roop SC, Pohar SL, et al. Translating knowledge in rehabilitation: systematic review. Phys Ther. 2015;95:663–77. https://doi.org/10.2522/ptj.20130512 .

Scott D, Albrecht L, O’Leary K, Ball GDC, et al. Systematic review of knowledge translation strategies in the allied health professions. Implement Sci. 2012;7:1–17. https://doi.org/10.1186/1748-5908-7-70 .

Wu Y, Brettle A, Zhou C, Ou J, et al. Do educational interventions aimed at nurses to support the implementation of evidence-based practice improve patient outcomes? A systematic review. Nurse Educ Today. 2018;70:109–14. https://doi.org/10.1016/j.nedt.2018.08.026 .

Yost J, Ganann R, Thompson D, Aloweni F, et al. The effectiveness of knowledge translation interventions for promoting evidence-informed decision-making among nurses in tertiary care: a systematic review and meta-analysis. Implement Sci. 2015;10:1–15. https://doi.org/10.1186/s13012-015-0286-1 .

Grudniewicz A, Kealy R, Rodseth RN, Hamid J, et al. What is the effectiveness of printed educational materials on primary care physician knowledge, behaviour, and patient outcomes: a systematic review and meta-analyses. Implement Sci. 2015;10:2–12. https://doi.org/10.1186/s13012-015-0347-5 .

Koota E, Kääriäinen M, Melender HL. Educational interventions promoting evidence-based practice among emergency nurses: a systematic review. Int Emerg Nurs. 2018;41:51–8. https://doi.org/10.1016/j.ienj.2018.06.004 .

Flodgren G, O’Brien MA, Parmelli E, et al. Local opinion leaders: effects on professional practice and healthcare outcomes. Cochrane Database Syst Rev. 2019. https://doi.org/10.1002/14651858.CD000125.pub5 .

Arditi C, Rège-Walther M, Durieux P, et al. Computer-generated reminders delivered on paper to healthcare professionals: effects on professional practice and healthcare outcomes. Cochrane Database Syst Rev. 2017. https://doi.org/10.1002/14651858.CD001175.pub4 .

Pantoja T, Grimshaw JM, Colomer N, et al. Manually-generated reminders delivered on paper: effects on professional practice and patient outcomes. Cochrane Database Syst Rev. 2019. https://doi.org/10.1002/14651858.CD001174.pub4 .

De Angelis G, Davies B, King J, McEwan J, et al. Information and communication technologies for the dissemination of clinical practice guidelines to health professionals: a systematic review. JMIR Med Educ. 2016;2:e16. https://doi.org/10.2196/mededu.6288 .

Brown A, Barnes C, Byaruhanga J, McLaughlin M, et al. Effectiveness of technology-enabled knowledge translation strategies in improving the use of research in public health: systematic review. J Med Internet Res. 2020;22:e17274. https://doi.org/10.2196/17274 .

Sykes MJ, McAnuff J, Kolehmainen N. When is audit and feedback effective in dementia care? A systematic review. Int J Nurs Stud. 2018;79:27–35. https://doi.org/10.1016/j.ijnurstu.2017.10.013 .

Bhatt NR, Czarniecki SW, Borgmann H, et al. A systematic review of the use of social media for dissemination of clinical practice guidelines. Eur Urol Focus. 2021;7:1195–204. https://doi.org/10.1016/j.euf.2020.10.008 .

Yamada J, Shorkey A, Barwick M, Widger K, et al. The effectiveness of toolkits as knowledge translation strategies for integrating evidence into clinical care: a systematic review. BMJ Open. 2015;5:e006808. https://doi.org/10.1136/bmjopen-2014-006808 .

Afari-Asiedu S, Abdulai MA, Tostmann A, et al. Interventions to improve dispensing of antibiotics at the community level in low and middle income countries: a systematic review. J Glob Antimicrob Resist. 2022;29:259–74. https://doi.org/10.1016/j.jgar.2022.03.009 .

Boonacker CW, Hoes AW, Dikhoff MJ, Schilder AG, et al. Interventions in health care professionals to improve treatment in children with upper respiratory tract infections. Int J Pediatr Otorhinolaryngol. 2010;74:1113–21. https://doi.org/10.1016/j.ijporl.2010.07.008 .

Al Zoubi FM, Menon A, Mayo NE, et al. The effectiveness of interventions designed to increase the uptake of clinical practice guidelines and best practices among musculoskeletal professionals: a systematic review. BMC Health Serv Res. 2018;18:2–11. https://doi.org/10.1186/s12913-018-3253-0 .

Ariyo P, Zayed B, Riese V, Anton B, et al. Implementation strategies to reduce surgical site infections: a systematic review. Infect Control Hosp Epidemiol. 2019;3:287–300. https://doi.org/10.1017/ice.2018.355 .

Borgert MJ, Goossens A, Dongelmans DA. What are effective strategies for the implementation of care bundles on ICUs: a systematic review. Implement Sci. 2015;10:1–11. https://doi.org/10.1186/s13012-015-0306-1 .

Cahill LS, Carey LM, Lannin NA, et al. Implementation interventions to promote the uptake of evidence-based practices in stroke rehabilitation. Cochrane Database Syst Rev. 2020. https://doi.org/10.1002/14651858.CD012575.pub2 .

Pedersen ER, Rubenstein L, Kandrack R, Danz M, et al. Elusive search for effective provider interventions: a systematic review of provider interventions to increase adherence to evidence-based treatment for depression. Implement Sci. 2018;13:1–30. https://doi.org/10.1186/s13012-018-0788-8 .

Jenkins HJ, Hancock MJ, French SD, Maher CG, et al. Effectiveness of interventions designed to reduce the use of imaging for low-back pain: a systematic review. CMAJ. 2015;187:401–8. https://doi.org/10.1503/cmaj.141183 .

Bennett S, Laver K, MacAndrew M, Beattie E, et al. Implementation of evidence-based, non-pharmacological interventions addressing behavior and psychological symptoms of dementia: a systematic review focused on implementation strategies. Int Psychogeriatr. 2021;33:947–75. https://doi.org/10.1017/S1041610220001702 .

Noonan VK, Wolfe DL, Thorogood NP, et al. Knowledge translation and implementation in spinal cord injury: a systematic review. Spinal Cord. 2014;52:578–87. https://doi.org/10.1038/sc.2014.62 .

Albrecht L, Archibald M, Snelgrove-Clarke E, et al. Systematic review of knowledge translation strategies to promote research uptake in child health settings. J Pediatr Nurs. 2016;31:235–54. https://doi.org/10.1016/j.pedn.2015.12.002 .

Campbell A, Louie-Poon S, Slater L, et al. Knowledge translation strategies used by healthcare professionals in child health settings: an updated systematic review. J Pediatr Nurs. 2019;47:114–20. https://doi.org/10.1016/j.pedn.2019.04.026 .

Bird ML, Miller T, Connell LA, et al. Moving stroke rehabilitation evidence into practice: a systematic review of randomized controlled trials. Clin Rehabil. 2019;33:1586–95. https://doi.org/10.1177/0269215519847253 .

Goorts K, Dizon J, Milanese S. The effectiveness of implementation strategies for promoting evidence informed interventions in allied healthcare: a systematic review. BMC Health Serv Res. 2021;21:1–11. https://doi.org/10.1186/s12913-021-06190-0 .

Zadro JR, O’Keeffe M, Allison JL, Lembke KA, et al. Effectiveness of implementation strategies to improve adherence of physical therapist treatment choices to clinical practice guidelines for musculoskeletal conditions: systematic review. Phys Ther. 2020;100:1516–41. https://doi.org/10.1093/ptj/pzaa101 .

Van der Veer SN, Jager KJ, Nache AM, et al. Translating knowledge on best practice into improving quality of RRT care: a systematic review of implementation strategies. Kidney Int. 2011;80:1021–34. https://doi.org/10.1038/ki.2011.222 .

Pawson R, Greenhalgh T, Harvey G, et al. Realist review – a new method of systematic review designed for complex policy interventions. J Health Serv Res Policy. 2005;10(Suppl 1):21–34. https://doi.org/10.1258/1355819054308530 .

Rycroft-Malone J, McCormack B, Hutchinson AM, et al. Realist synthesis: illustrating the method for implementation research. Implementation Sci. 2012;7:1–10. https://doi.org/10.1186/1748-5908-7-33 .

Johnson MJ, May CR. Promoting professional behaviour change in healthcare: what interventions work, and why? A theory-led overview of systematic reviews. BMJ Open. 2015;5:e008592. https://doi.org/10.1136/bmjopen-2015-008592 .

Metz A, Jensen T, Farley A, Boaz A, et al. Is implementation research out of step with implementation practice? Pathways to effective implementation support over the last decade. Implement Res Pract. 2022;3:1–11. https://doi.org/10.1177/26334895221105585 .

May CR, Finch TL, Cornford J, Exley C, et al. Integrating telecare for chronic disease management in the community: What needs to be done? BMC Health Serv Res. 2011;11:1–11. https://doi.org/10.1186/1472-6963-11-131 .

Harvey G, Rycroft-Malone J, Seers K, Wilson P, et al. Connecting the science and practice of implementation – applying the lens of context to inform study design in implementation research. Front Health Serv. 2023;3:1–15. https://doi.org/10.3389/frhs.2023.1162762 .

Acknowledgements

The authors would like to thank Professor Kathryn Oliver for her support in the planning the review, Professor Steve Hanney for reading and commenting on the final manuscript and the staff at LSHTM library for their support in planning and conducting the literature search.

Funding

This study was supported by LSHTM’s Research England QR strategic priorities funding allocation and the National Institute for Health and Care Research (NIHR) Applied Research Collaboration South London (NIHR ARC South London) at King’s College Hospital NHS Foundation Trust. Grant number NIHR200152. The views expressed are those of the author(s) and not necessarily those of the NIHR, the Department of Health and Social Care or Research England.

Author information

Authors and Affiliations

Health and Social Care Workforce Research Unit, The Policy Institute, King’s College London, Virginia Woolf Building, 22 Kingsway, London, WC2B 6LE, UK

Annette Boaz

King’s Business School, King’s College London, 30 Aldwych, London, WC2B 4BG, UK

Juan Baeza & Alec Fraser

Federal University of Santa Catarina (UFSC), Campus Universitário Reitor João Davi Ferreira Lima, Florianópolis, SC, 88.040-900, Brazil

Erik Persson

Contributions

AB led the conceptual development and structure of the manuscript. EP conducted the searches and data extraction. All authors contributed to screening and quality appraisal. EP and AF wrote the first draft of the methods section. AB, JB and AF performed result synthesis and contributed to the analyses. AB wrote the first draft of the manuscript and incorporated feedback and revisions from all other authors. All authors revised and approved the final manuscript.

Corresponding author

Correspondence to Annette Boaz.

Ethics declarations

Ethics approval and consent to participate

Not applicable.

Consent for publication

Not applicable.

Competing interests

The authors declare that they have no competing interests.

Additional information

Publisher’s Note

Springer Nature remains neutral with regard to jurisdictional claims in published maps and institutional affiliations.

Supplementary Information

Additional file 1: Appendix A.

Additional file 2: Appendix B.

Rights and permissions

Open Access This article is licensed under a Creative Commons Attribution 4.0 International License, which permits use, sharing, adaptation, distribution and reproduction in any medium or format, as long as you give appropriate credit to the original author(s) and the source, provide a link to the Creative Commons licence, and indicate if changes were made. The images or other third party material in this article are included in the article's Creative Commons licence, unless indicated otherwise in a credit line to the material. If material is not included in the article's Creative Commons licence and your intended use is not permitted by statutory regulation or exceeds the permitted use, you will need to obtain permission directly from the copyright holder. To view a copy of this licence, visit http://creativecommons.org/licenses/by/4.0/ . The Creative Commons Public Domain Dedication waiver ( http://creativecommons.org/publicdomain/zero/1.0/ ) applies to the data made available in this article, unless otherwise stated in a credit line to the data.

About this article

Cite this article

Boaz, A., Baeza, J., Fraser, A. et al. ‘It depends’: what 86 systematic reviews tell us about what strategies to use to support the use of research in clinical practice. Implementation Sci 19, 15 (2024). https://doi.org/10.1186/s13012-024-01337-z

Received : 01 November 2023

Accepted : 05 January 2024

Published : 19 February 2024


Keywords

  • Implementation
  • Interventions
  • Clinical practice
  • Research evidence
  • Multi-faceted

Implementation Science

ISSN: 1748-5908


systematic literature review guide

IMAGES

  1. systematic literature review use cases

    systematic literature review guide

  2. (PDF) A guide to systematic literature reviews

    systematic literature review guide

  3. how to start writing a systematic review

    systematic literature review guide

  4. How to conduct a Systematic Literature Review

    systematic literature review guide

  5. [PDF] How to Write a Systematic Review : A Step-by-Step Guide

    systematic literature review guide

  6. Systematic Literature Review Methodology

    systematic literature review guide

VIDEO

  1. Systematic literature review

  2. Introduction to Systematic Literature Review by Dr. K. G. Priyashantha

  3. Workshop Systematic Literature Review (SLR) & Bibliometric Analysis

  4. SYSTEMATIC AND LITERATURE REVIEWS

  5. Writing Systematic Literature Review papers

  6. Systematic Literature Review, by Prof. Ranjit Singh, IIIT Allahabad

COMMENTS

  1. Guidance on Conducting a Systematic Literature Review

    In this article, through a systematic search on the methodology of literature review, we categorize a typology of literature reviews, discuss steps in conducting a systematic literature review, and provide suggestions on how to enhance rigor in literature reviews in planning education and research. Introduction

  2. Steps of a Systematic Review

    Steps to conducting a systematic review Quick overview of the process: Steps and resources from the UMB HSHSL Guide. YouTube video (26 min) Another detailed guide on how to conduct and write a systematic review from RMIT University A roadmap for searching literature in PubMed from the VU Amsterdam Alexander, P. A. (2020).

  3. How-to conduct a systematic literature review: A quick guide for

    A Systematic Literature Review (SLR) is a research methodology to collect, identify, and critically analyze the available research studies (e.g., articles, conference proceedings, books, dissertations) through a systematic procedure . An SLR updates the reader with current literature about a subject .

  4. Systematic Review

    Revised on November 20, 2023. A systematic review is a type of review that uses repeatable methods to find, select, and synthesize all available evidence. It answers a clearly formulated research question and explicitly states the methods used to arrive at the answer. Example: Systematic review

  5. How to Do a Systematic Review: A Best Practice Guide for Conducting and

    This guide describes how to plan, conduct, organize, and present a systematic review of quantitative (meta-analysis) or qualitative (narrative review, meta-synthesis) information. We outline core standards and principles and describe commonly encountered problems.

  7. A young researcher's guide to a systematic review

    Systematic reviews are regarded as the best source of research evidence. A systematic review is a rigorous review of existing literature that addresses a clearly formulated question. This article aims to guide you on the different kinds of systematic review, the standard procedures to be followed, and the best approach to conducting and writing a systematic review.

  8. Home

    A systematic review is a literature review that gathers all of the available evidence matching pre-specified eligibility criteria to answer a specific research question. It uses explicit, systematic methods, documented in a protocol, to minimize bias, provide reliable findings, and inform decision-making.

  9. Five steps to conducting a systematic review

    Step 1 is framing the question: the research question may initially be stated as a query in free form, but reviewers prefer to pose it in a structured and explicit way. The relations between various components of the question and the structure of the research design are shown in Figure 1.

  10. How to write a systematic literature review [9 steps]

    The nine steps: 1) decide on your team; 2) formulate your question; 3) plan your research protocol; 4) search for the literature; 5) screen the literature; 6) assess the quality of the studies; 7) extract the data; 8) analyze the results; 9) interpret and present the results. Also covers registering your systematic literature review.

  11. PDF How to write a systematic literature review: a guide for medical students

    This guide aims to serve as a practical introduction to the rationale for conducting a systematic review of the literature, how to search the literature, qualitative and quantitative interpretation, how to structure a systematic review manuscript, and generating a hypothesis.

  12. How to Perform a Systematic Literature Review: A Guide for Healthcare

    A textbook by Edward Purssell and Niall McCrae (How to Perform a Systematic Literature Review: A Guide for Healthcare Researchers, Practitioners and Students) that presents a logical approach to systematic literature reviewing and offers a corrective to flawed guidance in existing books.

  13. Research Guides: Systematic Reviews: Types of Literature Reviews

    Rapid review: an assessment of what is already known about a policy or practice issue, using systematic review methods to search and critically appraise existing research. Completeness of searching is determined by time constraints, quality assessment is time-limited but formal, and the output is typically narrative and tabular.

  14. Systematic Reviews and Meta Analysis

    A systematic review is guided filtering and synthesis of all available evidence addressing a specific, focused research question, generally about a specific intervention or exposure. The use of standardized, systematic methods and pre-selected eligibility criteria reduce the risk of bias in identifying, selecting and analyzing relevant studies.

  15. A guide to systematic literature reviews

    The first stage in conducting a systematic review is to develop a protocol that clearly defines: 1) the aims and objectives of the review; 2) the inclusion and exclusion criteria for studies; 3) the way in which studies will be identified; and 4) the plan of analysis.

  16. Home

    A Systematic Review is a very specialized and in-depth literature review. It is a research method in its own right, where information from research studies is aggregated, analyzed, and synthesized in an unbiased and reproducible manner to answer a research question. The SR can provide evidence for practice and policy-making as well as gaps in ...

  17. Guidance to best tools and practices for systematic reviews

    The gray literature and a search of trials may also reveal important details about topics that would otherwise ...

  18. Overview

    A systematic review is a comprehensive literature search and synthesis project that tries to answer a well-defined question using existing primary research as evidence. A protocol is used to plan the systematic review methods prior to the project, including what is and is not included in the search. Systematic reviews are often used as the foundation for a meta-analysis (a statistical process ...

  19. How-to conduct a systematic literature review: A quick guide for computer science research

    A systematic literature review is a method which sets out a series of steps to methodically organize the review. In this paper, we present a guide designed for researchers, and in particular early-stage researchers, in the computer-science field.

  20. How-to conduct a systematic literature review: A quick guide for computer science research

    Published in MethodsX (2022 Nov 4;9:101895; doi: 10.1016/j.mex.2022.101895) by Angela Carrera-Rivera, William Ochoa, and Felix Larrinaga (Faculty of Engineering, Mondragon University), and Ganix Lasa.

  21. How to Undertake an Impactful Literature Review: Understanding Review

    The systematic literature review (SLR) is one of the important review methodologies which is increasingly becoming popular to synthesize literature in any discipline in general and management in particular. ... Okoli C., & Schabram K. (2010). A guide to conducting a systematic literature review of information systems research. Sprouts: Working ...

  22. Cochrane Handbook for Systematic Reviews of Interventions

    About the Handbook. The Cochrane Handbook for Systematic Reviews of Interventions is the official guide that describes in detail the process of preparing and maintaining Cochrane systematic reviews on the effects of healthcare interventions. All authors should consult the Handbook for guidance on the methods used in Cochrane systematic reviews. The Handbook includes guidance on the standard ...

  23. How to conduct systematic literature reviews in management ...

    The application of systematic or structured literature reviews (SLRs) has developed into an established approach in the management domain (Kraus et al. 2020), with 90% of management-related SLRs published within the last 10 years (Clark et al. 2021). Such reviews help to condense knowledge in the field and point to future research directions, thereby enabling theory development (Fink 2010 ...

  24. Systematic Review & Evidence Synthesis

    Boland A, Cherry MG, Dickson R. Doing a systematic review: a student's guide. 2nd ed. SAGE; 2017. Foster MJ, Jewell ST. Assembling the pieces of a systematic review: a guide for librarians. Rowman & Littlefield; 2017. Grant MJ, Booth A. A typology of reviews: an analysis of 14 review types and associated methodologies.

  25. (PDF) A guide to systematic literature reviews

    In this guide to systematic literature reviews, the methods of conducting systematic reviews are discussed in relation to minimizing bias, searching the literature and investigating...

  26. Systematic Literature Searching & Reviews

    The following definition outlines the scope of a systematic review: "A systematic review is a structured and pre-planned synthesis of original studies involving predefined research question, inclusion criteria, search methods, selection procedure, quality assessment, data extraction, and data analysis. No original study should deliberately be ...

  27. 'It depends': what 86 systematic reviews tell us about what strategies

    This review updates and extends our previous review of systematic reviews of interventions designed to implement research evidence into clinical practice. To identify potentially relevant peer-reviewed research papers, we developed a comprehensive systematic literature search strategy based on the terms used in the Grimshaw et al. [9] and ...

  28. Remote Sensing Applications in Almond Orchards: A Comprehensive ...

    Almond cultivation is of great socio-economic importance worldwide. With the demand for almonds steadily increasing due to their nutritional value and versatility, optimizing the management of almond orchards becomes crucial to promote sustainable agriculture and ensure food security. The present systematic literature review, conducted according to the PRISMA protocol, is devoted to the ...
