How to conduct a meta-analysis in eight steps: a practical guide

  • Open access
  • Published: 30 November 2021
  • Volume 72, pages 1–19 (2022)


  • Christopher Hansen,
  • Holger Steinmetz &
  • Jörn Block


1 Introduction

“Scientists have known for centuries that a single study will not resolve a major issue. Indeed, a small sample study will not even resolve a minor issue. Thus, the foundation of science is the cumulation of knowledge from the results of many studies.” (Hunter et al. 1982, p. 10)

Meta-analysis is a central method for knowledge accumulation in many scientific fields (Aguinis et al. 2011c; Kepes et al. 2013). Similar to a narrative review, it serves as a synopsis of a research question or field. However, going beyond a narrative summary of key findings, a meta-analysis adds value in providing a quantitative assessment of the relationship between two target variables or the effectiveness of an intervention (Gurevitch et al. 2018). Also, it can be used to test competing theoretical assumptions against each other or to identify important moderators where the results of different primary studies differ from each other (Aguinis et al. 2011b; Bergh et al. 2016). Rooted in the synthesis of the effectiveness of medical and psychological interventions in the 1970s (Glass 2015; Gurevitch et al. 2018), meta-analysis is nowadays also an established method in management research and related fields.

The increasing importance of meta-analysis in management research has resulted in the publication of guidelines in recent years that discuss the merits and best practices in various fields, such as general management (Bergh et al. 2016; Combs et al. 2019; Gonzalez-Mulé and Aguinis 2018), international business (Steel et al. 2021), economics and finance (Geyer-Klingeberg et al. 2020; Havranek et al. 2020), marketing (Eisend 2017; Grewal et al. 2018), and organizational studies (DeSimone et al. 2020; Rudolph et al. 2020). These articles discuss existing and trending methods and propose solutions for frequently experienced problems. This editorial briefly summarizes the insights of these papers; provides a workflow of the essential steps in conducting a meta-analysis; suggests state-of-the-art methodological procedures; and points to other articles for in-depth investigation. Thus, this article has two goals: (1) based on the findings of previous editorials and methodological articles, it defines methodological recommendations for meta-analyses submitted to Management Review Quarterly (MRQ); and (2) it serves as a practical guide for researchers who have little experience with meta-analysis as a method but plan to conduct one in the future.

2 Eight steps in conducting a meta-analysis

2.1 Step 1: defining the research question

The first step in conducting a meta-analysis, as with any other empirical study, is the definition of the research question. Most importantly, the research question determines the realm of constructs to be considered or the type of interventions whose effects shall be analyzed. When defining the research question, two hurdles might arise. First, when defining an adequate study scope, researchers must consider that the number of publications has grown exponentially in many fields of research in recent decades (Fortunato et al. 2018). On the one hand, a larger number of studies increases the potentially relevant literature basis and enables researchers to conduct meta-analyses. On the other hand, scanning a large number of studies that could be potentially relevant for the meta-analysis results in a perhaps unmanageable workload. Thus, Steel et al. (2021) highlight the importance of balancing manageability and relevance when defining the research question. Second, just as the number of primary studies has grown, the number of meta-analyses in management research has also grown strongly in recent years (Geyer-Klingeberg et al. 2020; Rauch 2020; Schwab 2015). Therefore, it is likely that one or several meta-analyses already exist for many topics of high scholarly interest. However, this should not deter researchers from investigating their research questions. One possibility is to consider moderators or mediators of a relationship that have previously been ignored. For example, a meta-analysis about startup performance could investigate the impact of different ways to measure the performance construct (e.g., growth vs. profitability vs. survival time) or certain characteristics of the founders as moderators. Another possibility is to replicate previous meta-analyses and test whether their findings can be confirmed with an updated sample of primary studies or newly developed methods.
Frequent replications and updates of meta-analyses are important contributions to cumulative science and are increasingly called for by the research community (Anderson and Kichkha 2017; Steel et al. 2021). Consistent with its focus on replication studies (Block and Kuckertz 2018), MRQ therefore also invites authors to submit replication meta-analyses.

2.2 Step 2: literature search

2.2.1 Search strategies

Similar to conducting a literature review, the search process of a meta-analysis should be systematic, reproducible, and transparent, resulting in a sample that includes all relevant studies (Fisch and Block 2018; Gusenbauer and Haddaway 2020). There are several identification strategies for relevant primary studies when compiling meta-analytical datasets (Harari et al. 2020). First, previous meta-analyses on the same or a related topic may provide lists of included studies that offer a good starting point to identify and become familiar with the relevant literature. This practice is also applicable to topic-related literature reviews, which often summarize the central findings of the reviewed articles in systematic tables. Both article types likely include the most prominent studies of a research field. The most common and important search strategy, however, is a keyword search in electronic databases (Harari et al. 2020). This strategy will probably yield the largest number of relevant studies, particularly so-called ‘grey literature’, which may not be considered by literature reviews. Gusenbauer and Haddaway (2020) provide a detailed overview of 34 scientific databases, of which 18 are multidisciplinary or have a focus on management sciences, along with their suitability for literature synthesis. To prevent biased results due to the scope or journal coverage of a single database, researchers should use at least two different databases (DeSimone et al. 2020; Martín-Martín et al. 2021; Mongeon and Paul-Hus 2016). However, a database search can easily lead to an overload of potentially relevant studies. For example, key term searches in Google Scholar for “entrepreneurial intention” and “firm diversification” resulted in more than 660,000 and 810,000 hits, respectively. Therefore, a precise research question and precise search terms using Boolean operators are advisable (Gusenbauer and Haddaway 2020).
Addressing the challenge of identifying relevant articles among the growing number of publications in databases, (semi)automated approaches using text mining and machine learning (Bosco et al. 2017; O’Mara-Eves et al. 2015; Ouzzani et al. 2016; Thomas et al. 2017) can also be promising and time-saving search tools in the future. Also, some electronic databases offer the possibility to track forward citations of influential studies and thereby identify further relevant articles. Finally, collecting unpublished or undetected studies through conferences, personal contact with (leading) scholars, or listservs can be strategies to increase the study sample size (Grewal et al. 2018; Harari et al. 2020; Pigott and Polanin 2020).

2.2.2 Study inclusion criteria and sample composition

Next, researchers must decide which studies to include in the meta-analysis. Some guidelines for literature reviews recommend limiting the sample to studies published in renowned academic journals to ensure the quality of findings (e.g., Kraus et al. 2020). For meta-analysis, however, Steel et al. (2021) advocate the inclusion of all available studies, including grey literature, to prevent selection biases based on availability, cost, familiarity, and language (Rothstein et al. 2005), or the “Matthew effect”, which denotes the phenomenon that highly cited articles are found faster than less cited articles (Merton 1968). Harrison et al. (2017) find that the effects of published studies in management are inflated on average by 30% compared to unpublished studies. This so-called publication bias or “file drawer problem” (Rosenthal 1979) results from academia’s preference for publishing statistically significant rather than insignificant study results. Owen and Li (2020) show that publication bias is particularly severe when variables of interest are used as key variables rather than control variables. To estimate the true effect size of a target variable or relationship, the inclusion of all types of research outputs is therefore recommended (Polanin et al. 2016). Different test procedures to identify publication bias are discussed subsequently in Step 7.

In addition to the decision of whether to include certain study types (i.e., published vs. unpublished studies), there can be other reasons to exclude studies identified in the search process. These reasons can be manifold and are primarily related to the specific research question and methodological peculiarities. For example, studies identified by keyword search might not qualify thematically after all, may use unsuitable variable measurements, or may not report usable effect sizes. Furthermore, there might be multiple studies by the same authors using similar datasets. If they do not differ sufficiently in terms of their sample characteristics or variables used, only one of these studies should be included to prevent bias from duplicates (Wood 2008; see that article for a detection heuristic).

In general, the screening process should be conducted stepwise, beginning with the removal of duplicate citations from different databases, followed by abstract screening to exclude clearly unsuitable studies and a final full-text screening of the remaining articles (Pigott and Polanin 2020). A graphical tool to systematically document the sample selection process is the PRISMA flow diagram (Moher et al. 2009). Page et al. (2021) recently presented an updated version of the PRISMA statement, including an extended item checklist and flow diagram to report the study process and findings.

2.3 Step 3: choice of the effect size measure

2.3.1 Types of effect sizes

The two most common meta-analytical effect size measures in management studies are (z-transformed) correlation coefficients and standardized mean differences (Aguinis et al. 2011a; Geyskens et al. 2009). However, meta-analyses in management science and related fields are not limited to these two effect size measures; the appropriate measure depends on the subfield of investigation (Borenstein 2009; Stanley and Doucouliagos 2012). In economics and finance, researchers are more interested in the examination of elasticities and marginal effects extracted from regression models than in pure bivariate correlations (Stanley and Doucouliagos 2012). Regression coefficients can also be converted to partial correlation coefficients based on their t-statistics to make regression results comparable across studies (Stanley and Doucouliagos 2012). Although some meta-analyses in management research have combined bivariate and partial correlations in their study samples, Aloe (2015) and Combs et al. (2019) advise against this practice. Most importantly, they argue that the strength of a partial correlation depends on the other variables included in the regression model and is therefore incomparable to bivariate correlations (Schmidt and Hunter 2015), resulting in a possible bias of the meta-analytic results (Roth et al. 2018). We endorse this opinion; if both measure types are coded, we recommend separate analyses for each. In addition to these measures, survival rates, risk ratios, or odds ratios, which are common measures in medical research (Borenstein 2009), can be suitable effect sizes for specific management research questions, such as understanding the determinants of the survival of startup companies. In summary, the choice of a suitable effect size is often taken out of the researcher’s hands because it typically depends on the investigated research question as well as the conventions of the specific research field (Cheung and Vijayakumar 2016).
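The t-statistic conversion mentioned above has a simple closed form (Stanley and Doucouliagos 2012): the partial correlation equals the coefficient’s t-value divided by the square root of t squared plus the residual degrees of freedom. The sketch below illustrates it in Python; the function name and the example values are our own, purely hypothetical choices:

```python
import math

def partial_r_from_t(t: float, df: int) -> float:
    """Partial correlation implied by a regression coefficient's t-statistic:
    r_p = t / sqrt(t^2 + df), where df is the residual degrees of freedom."""
    return t / math.sqrt(t * t + df)

# Hypothetical primary study: the coefficient of interest has t = 2.5
# with 100 residual degrees of freedom.
r_p = partial_r_from_t(2.5, 100)   # roughly 0.24
```

The sign of the partial correlation follows the sign of the t-statistic, so the direction of the regression effect is preserved in the meta-analytic dataset.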

2.3.2 Conversion of effect sizes to a common measure

After having defined the primary effect size measure for the meta-analysis, it might become necessary in the later coding process to convert study findings that are reported in effect sizes different from the chosen primary effect size. For example, a study might report only descriptive statistics for two study groups but no correlation coefficient, which is used as the primary effect size measure in the meta-analysis. Different effect size measures can be harmonized using conversion formulae, which are provided by standard method books such as Borenstein et al. (2009) or Lipsey and Wilson (2001). Online effect size calculators for meta-analysis also exist.
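For instance, when a study reports only group descriptives, a standardized mean difference can be computed and then converted to a correlation. The following Python sketch applies two standard conversion formulae of the kind collected in Borenstein et al. (2009); all input values are hypothetical:

```python
import math

def cohens_d(m1, sd1, n1, m2, sd2, n2):
    """Standardized mean difference using the pooled standard deviation."""
    sd_pooled = math.sqrt(((n1 - 1) * sd1**2 + (n2 - 1) * sd2**2)
                          / (n1 + n2 - 2))
    return (m1 - m2) / sd_pooled

def d_to_r(d, n1, n2):
    """Convert d to a (point-biserial) correlation: r = d / sqrt(d^2 + a),
    with the correction factor a = (n1 + n2)^2 / (n1 * n2)."""
    a = (n1 + n2) ** 2 / (n1 * n2)
    return d / math.sqrt(d**2 + a)

# Hypothetical study reporting only group descriptives:
d = cohens_d(m1=5.0, sd1=1.2, n1=40, m2=4.4, sd2=1.0, n2=50)
r = d_to_r(d, 40, 50)
```

Converted effect sizes can then be coded alongside directly reported correlations in the same sheet.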

2.4 Step 4: choice of the analytical method used

Choosing which meta-analytical method to use is directly connected to the research question of the meta-analysis. Research questions in meta-analyses can address a relationship between constructs or an effect of an intervention in a general manner, or they can focus on moderating or mediating effects. Four meta-analytical methods are primarily used in contemporary management research (Combs et al. 2019; Geyer-Klingeberg et al. 2020), which allow the investigation of these different types of research questions: traditional univariate meta-analysis, meta-regression, meta-analytic structural equation modeling, and qualitative meta-analysis (Hoon 2013). While the first three are quantitative, the latter summarizes qualitative findings. Table 1 summarizes the key characteristics of the three quantitative methods.

2.4.1 Univariate meta-analysis

In its traditional form, a meta-analysis reports a weighted mean effect size for the relationship or intervention under investigation and provides information on the magnitude of variance among primary studies (Aguinis et al. 2011c; Borenstein et al. 2009). Accordingly, it serves as a quantitative synthesis of a research field (Borenstein et al. 2009; Geyskens et al. 2009). Prominent traditional approaches have been developed, for example, by Hedges and Olkin (1985) or Hunter and Schmidt (1990, 2004). However, going beyond its simple summary function, the traditional approach has limitations in explaining the observed variance among findings (Gonzalez-Mulé and Aguinis 2018). To identify moderators (or boundary conditions) of the relationship of interest, meta-analysts can create subgroups and investigate differences between those groups (Borenstein and Higgins 2013; Hunter and Schmidt 2004). Potential moderators can be study characteristics (e.g., whether a study is published vs. unpublished), sample characteristics (e.g., study country, industry focus, or type of survey/experiment participants), or measurement artifacts (e.g., different types of variable measurements). The univariate approach is thus suitable to identify the overall direction of a relationship and can serve as a good starting point for additional analyses. However, due to its limitations in examining boundary conditions and developing theory, the univariate approach on its own is nowadays often viewed as insufficient (Rauch 2020; Shaw and Ertug 2017).
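The inverse-variance logic behind such a weighted mean effect size can be sketched in a few lines. The example below pools correlations under a fixed-effect model after Fisher’s z-transformation; the study values are hypothetical, and an actual analysis would rely on dedicated software such as the metafor package discussed in Step 5:

```python
import math

def pooled_correlation(effects):
    """Inverse-variance weighted mean of correlations via Fisher's z.
    `effects` is a list of (r, n) pairs coded from primary studies."""
    num = den = 0.0
    for r, n in effects:
        z = math.atanh(r)        # Fisher z-transformation of r
        w = n - 3                # weight = inverse of Var(z) = 1 / (n - 3)
        num += w * z
        den += w
    return math.tanh(num / den)  # back-transform the pooled z to r

# Hypothetical coded sample of three primary studies (r, n):
studies = [(0.30, 120), (0.22, 250), (0.41, 80)]
print(pooled_correlation(studies))
```

Larger studies receive proportionally more weight, which is why the pooled value sits closest to the estimate of the biggest sample.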

2.4.2 Meta-regression analysis

Meta-regression analysis (Hedges and Olkin 1985; Lipsey and Wilson 2001; Stanley and Jarrell 1989) aims to investigate the heterogeneity among observed effect sizes by testing multiple potential moderators simultaneously. In meta-regression, the coded effect size is used as the dependent variable and is regressed on a list of moderator variables. These moderator variables can be categorical variables, as described previously for the traditional univariate approach, or (semi)continuous variables such as country scores that are merged with the meta-analytical data. Thus, meta-regression analysis overcomes a disadvantage of the traditional approach, which allows moderators to be investigated only one at a time using dichotomized subgroups (Combs et al. 2019; Gonzalez-Mulé and Aguinis 2018). These possibilities allow a more fine-grained analysis of research questions related to moderating effects. However, Schmidt (2017) critically notes that the number of effect sizes in the meta-analytical sample must be sufficiently large to produce reliable results when investigating multiple moderators simultaneously in a meta-regression. For further reading, Tipton et al. (2019) outline the technical, conceptual, and practical developments of meta-regression over the last decades. Gonzalez-Mulé and Aguinis (2018) provide an overview of methodological choices and develop evidence-based best practices for future meta-analyses in management using meta-regression.
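To make the mechanics concrete, the sketch below fits a fixed-effects meta-regression with a single moderator by weighted least squares in closed form. The data are hypothetical, and real applications would use a mixed-effects implementation (e.g., the rma function of R’s metafor package) rather than this simplified Python version:

```python
import math

def meta_regression(effects, moderator):
    """Fixed-effects meta-regression of Fisher-z effect sizes on one
    moderator via weighted least squares. `effects` is a list of (r, n)
    pairs; `moderator` holds one coded value per effect size.
    Returns (intercept, slope) on the Fisher-z scale."""
    w = [n - 3 for _, n in effects]            # inverse sampling variances
    z = [math.atanh(r) for r, _ in effects]
    sw = sum(w)
    xbar = sum(wi * xi for wi, xi in zip(w, moderator)) / sw
    zbar = sum(wi * zi for wi, zi in zip(w, z)) / sw
    sxz = sum(wi * (xi - xbar) * (zi - zbar)
              for wi, xi, zi in zip(w, moderator, z))
    sxx = sum(wi * (xi - xbar) ** 2 for wi, xi in zip(w, moderator))
    slope = sxz / sxx
    return zbar - slope * xbar, slope

# Hypothetical moderator: 1 if the study used employee samples, else 0.
effects = [(0.30, 120), (0.22, 250), (0.41, 80), (0.15, 200)]
employee = [1, 0, 1, 0]
b0, b1 = meta_regression(effects, employee)
```

With a binary moderator, the slope is simply the weighted difference between the two subgroup means, which connects this method back to the subgroup analysis of the univariate approach.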

2.4.3 Meta-analytic structural equation modeling (MASEM)

MASEM combines meta-analysis and structural equation modeling and allows researchers to simultaneously investigate the relationships among several constructs in a path model. Researchers can use MASEM to test several competing theoretical models against each other or to identify mediation mechanisms in a chain of relationships (Bergh et al. 2016). This method is typically performed in two steps (Cheung and Chan 2005): in Step 1, a pooled correlation matrix is derived, which includes the meta-analytical mean effect sizes for all variable combinations; Step 2 then uses this matrix to fit the path model. While MASEM was based primarily on traditional univariate meta-analysis to derive the pooled correlation matrix in its early years (Viswesvaran and Ones 1995), more advanced methods, such as the GLS approach (Becker 1992, 1995) or the TSSEM approach (Cheung and Chan 2005), have subsequently been developed. Cheung (2015a) and Jak (2015) provide an overview of these approaches in their books with exemplary code. For datasets with more complex data structures, Wilson et al. (2016) also developed a multilevel approach that is related to the TSSEM approach in the second step. Bergh et al. (2016) discuss nine decision points and develop best practices for MASEM studies.

2.4.4 Qualitative meta-analysis

While the approaches explained above focus on quantitative outcomes of empirical studies, qualitative meta-analysis aims to synthesize qualitative findings from case studies (Hoon 2013; Rauch et al. 2014). The distinctive feature of qualitative case studies is their potential to provide in-depth information about specific contextual factors or to shed light on reasons for certain phenomena that cannot usually be investigated by quantitative studies (Rauch 2020; Rauch et al. 2014). In a qualitative meta-analysis, the identified case studies are systematically coded in a meta-synthesis protocol, which is then used to identify influential variables or patterns and to derive a meta-causal network (Hoon 2013). Thus, the insights of contextualized and typically nongeneralizable single studies are aggregated into a larger, more generalizable picture (Habersang et al. 2019). Although still the exception, this method can thus provide important contributions for academics in terms of theory development (Combs et al. 2019; Hoon 2013) and for practitioners in terms of evidence-based management or entrepreneurship (Rauch et al. 2014). Levitt (2018) provides a guide and discusses conceptual issues for conducting qualitative meta-analysis in psychology, which is also useful for management researchers.

2.5 Step 5: choice of software

Software solutions to perform meta-analyses range from built-in functions or additional packages of statistical software to programs purely focused on meta-analyses, and from commercial to open-source solutions. However, in addition to personal preferences, the choice of the most suitable software depends on the complexity of the methods used and the dataset itself (Cheung and Vijayakumar 2016). Meta-analysts therefore must carefully check whether their preferred software is capable of performing the intended analysis.

Among commercial software providers, Stata (from version 16 onward) offers built-in functions to perform various meta-analytical analyses or to produce various plots (Palmer and Sterne 2016). For SPSS and SAS, several macros for meta-analyses are provided by scholars such as David B. Wilson or Andy P. Field and Raphael Gillet (Field and Gillett 2010). For researchers using the open-source software R (R Core Team 2021), Polanin et al. (2017) provide an overview of 63 meta-analysis packages and their functionalities. For new users, they recommend the package metafor (Viechtbauer 2010), which includes most necessary functions and for which the author Wolfgang Viechtbauer provides tutorials on his project website. In addition to packages and macros for statistical software, templates for Microsoft Excel have also been developed to conduct simple meta-analyses, such as Meta-Essentials by Suurmond et al. (2017). Finally, programs purely dedicated to meta-analysis also exist, such as Comprehensive Meta-Analysis (Borenstein et al. 2013) or RevMan by The Cochrane Collaboration (2020).

2.6 Step 6: coding of effect sizes

2.6.1 Coding sheet

The first step in the coding process is the design of the coding sheet. A universal template does not exist because the design of the coding sheet depends on the methods used, the respective software, and the complexity of the research design. For univariate meta-analysis or meta-regression, data are typically coded in wide format. In its simplest form, when investigating a correlational relationship between two variables using the univariate approach, the coding sheet would contain a column for the study name or identifier, the effect size coded from the primary study, and the study sample size. However, such simple relationships are unlikely in management research because the included studies are typically not identical but differ in several respects. With more complex data structures or moderator variables being investigated, additional columns are added to the coding sheet to reflect the data characteristics. These variables can be coded as dummy, factor, or (semi)continuous variables and later used to perform a subgroup analysis or meta-regression. For MASEM, the required data input format can deviate depending on the method used (e.g., TSSEM requires a list of correlation matrices as data input). For qualitative meta-analysis, the coding scheme typically summarizes the key qualitative findings and important contextual and conceptual information (see Hoon (2013) for a coding scheme for qualitative meta-analysis). Figure 1 shows an exemplary coding scheme for a quantitative meta-analysis on the correlational relationship between top-management team diversity and profitability. In addition to effect and sample sizes, information about the study country, firm type, and variable operationalizations is coded. The list could be extended by further study and sample characteristics.

Figure 1: Exemplary coding sheet for a meta-analysis on the relationship (correlation) between top-management team diversity and profitability
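A wide-format coding sheet of this kind can be maintained as a plain CSV file. The Python snippet below builds a miniature version with the column types described above; every study row is invented for illustration and does not come from real primary studies:

```python
import csv
import io

# Columns mirror the exemplary scheme described above; all rows are
# hypothetical illustrations, not coded from real primary studies.
header = ["study_id", "r", "n", "country", "firm_type",
          "diversity_measure", "profitability_measure"]
rows = [
    ["Author2015", 0.12, 154, "US", "listed", "gender", "ROA"],
    ["Author2018", -0.05, 89, "DE", "family firm", "age", "ROE"],
    ["Author2020", 0.21, 310, "CN", "startup", "nationality", "ROA"],
]

buf = io.StringIO()          # stands in for a file on disk
writer = csv.writer(buf)
writer.writerow(header)
writer.writerows(rows)
print(buf.getvalue())
```

Categorical columns such as `firm_type` can later be recoded as dummy or factor variables for subgroup analysis or meta-regression.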

2.6.2 Inclusion of moderator or control variables

It is generally important to consider the intended research model and relevant nontarget variables before coding a meta-analytic dataset. For example, study characteristics can be important moderators or function as control variables in a meta-regression model. Similarly, control variables may be relevant in a MASEM approach to reduce confounding bias. Coding additional variables or constructs after the fact can be arduous if the sample of primary studies is large. However, the decision to include respective moderator or control variables, as in any empirical analysis, should always be based on strong (theoretical) rationales about how these variables can impact the investigated effect (Bernerth and Aguinis 2016; Bernerth et al. 2018; Thompson and Higgins 2002). While substantive moderators refer to theoretical constructs that act as buffers or enhancers of a supposed causal process, methodological moderators are features of the respective research designs that denote the methodological context of the observations and are important to control for systematic statistical particularities (Rudolph et al. 2020). Havranek et al. (2020) provide a list of recommended variables to code as potential moderators. While researchers may have clear expectations about the effects of some of these moderators, expectations for other moderators may be tentative, and moderator analysis may be approached in a rather exploratory fashion. Thus, we argue that researchers should make full use of the meta-analytical design to obtain insights about potential context dependence that a primary study cannot achieve.

2.6.3 Treatment of multiple effect sizes in a study

A long-debated issue in conducting meta-analyses is whether to use only one or all available effect sizes for the same construct within a single primary study. For meta-analyses in management research, this question is fundamental because many empirical studies, particularly those relying on company databases, use multiple variables for the same construct to perform sensitivity analyses, resulting in multiple relevant effect sizes. In this case, researchers can either (randomly) select a single value, calculate a study average, or use the complete set of effect sizes (Bijmolt and Pieters 2001; López-López et al. 2018). Multiple effect sizes from the same study enrich the meta-analytic dataset and allow us to investigate the heterogeneity of the relationship of interest, such as different variable operationalizations (López-López et al. 2018; Moeyaert et al. 2017). However, including more than one effect size from the same study violates the independence assumption of observations (Cheung 2019; López-López et al. 2018), which can lead to biased results and erroneous conclusions (Gooty et al. 2021). We follow the recommendation of current best practice guides to take advantage of all available effect size observations but to carefully account for their interdependencies using appropriate methods such as multilevel models, panel regression models, or robust variance estimation (Cheung 2019; Geyer-Klingeberg et al. 2020; Gooty et al. 2021; López-López et al. 2018; Moeyaert et al. 2017).
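Of the simpler options mentioned above, averaging the dependent effect sizes within each study is easy to implement. The Python sketch below does so on the Fisher-z scale (study labels and values are hypothetical); note that the multilevel or robust-variance approaches cited above are generally preferable because they retain, rather than discard, the within-study heterogeneity:

```python
import math
from collections import defaultdict

def study_averages(records):
    """Average all effect sizes reported within the same study on the
    Fisher-z scale, then back-transform, so each study contributes one
    independent observation. `records` is a list of (study_id, r) pairs."""
    by_study = defaultdict(list)
    for study, r in records:
        by_study[study].append(math.atanh(r))
    return {s: math.tanh(sum(zs) / len(zs)) for s, zs in by_study.items()}

# Hypothetical study "A" reports two operationalizations of the construct:
records = [("A", 0.30), ("A", 0.20), ("B", 0.10)]
avg = study_averages(records)
```

The resulting dictionary holds one pooled correlation per study, ready for the univariate analysis of Step 7.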

2.7 Step 7: analysis

2.7.1 Outlier analysis and tests for publication bias

Before conducting the primary analysis, some preliminary sensitivity analyses might be necessary to ensure the robustness of the meta-analytical findings (Rudolph et al. 2020). First, influential outlier observations could potentially bias the observed results, particularly if the number of total effect sizes is small. Several statistical methods can be used to identify outliers in meta-analytical datasets (Aguinis et al. 2013; Viechtbauer and Cheung 2010). However, there is a debate about whether to keep or omit these observations. In any case, the studies concerned should be closely inspected to find an explanation for their deviating results. As in any primary study, outliers can be a valid representation, albeit of a different population, measure, construct, design, or procedure. Thus, inferences about outliers can provide the basis for identifying potential moderators (Aguinis et al. 2013; Steel et al. 2021). On the other hand, outliers can indicate invalid research, for instance, when unrealistically strong correlations are due to construct overlap (i.e., lack of a clear demarcation between independent and dependent variables), invalid measures, or simply typing errors when coding effect sizes. An advisable step is therefore to compare the results both with and without outliers and to base the decision of whether to exclude outlier observations on careful consideration (Geyskens et al. 2009; Grewal et al. 2018; Kepes et al. 2013). Moreover, instead of focusing only on the size of an outlier, its leverage should be considered. Thus, Viechtbauer and Cheung (2010) propose considering a combination of standardized deviation and a study’s leverage.

Second, as mentioned in the context of the literature search, potential publication bias may be an issue. Publication bias can be examined in multiple ways (Rothstein et al. 2005). First, the funnel plot is a simple graphical tool that can provide an overview of the effect size distribution and help to detect publication bias (Stanley and Doucouliagos 2010). A funnel plot can also help to identify potential outliers. As mentioned above, a graphical display of deviation (e.g., studentized residuals) and leverage (Cook’s distance) can help detect the presence of outliers and evaluate their influence (Viechtbauer and Cheung 2010). Moreover, several statistical procedures can be used to test for publication bias (Harrison et al. 2017; Kepes et al. 2012), including subgroup comparisons between published and unpublished studies, Begg and Mazumdar’s (1994) rank correlation test, cumulative meta-analysis (Borenstein et al. 2009), the trim-and-fill method (Duval and Tweedie 2000a, b), Egger et al.’s (1997) regression test, failsafe N (Rosenthal 1979), and selection models (Hedges and Vevea 2005; Vevea and Woods 2005). In examining potential publication bias, Kepes et al. (2012) and Harrison et al. (2017) both recommend not relying on a single test but rather using multiple conceptually different test procedures (the so-called “triangulation approach”).
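Among the listed procedures, Egger et al.’s (1997) regression test is easy to state: regress the standardized effect size on its precision and examine whether the intercept differs from zero. The minimal Python sketch below implements this by ordinary least squares; the input numbers in the example are hypothetical, and the returned t-statistic would be compared against a t-distribution with n - 2 degrees of freedom:

```python
import math

def egger_test(effects, variances):
    """Egger et al.'s (1997) test for funnel-plot asymmetry: OLS of the
    standardized effect (effect / SE) on precision (1 / SE). An intercept
    far from zero hints at small-study effects such as publication bias.
    Returns (intercept, t-statistic of the intercept)."""
    y = [e / math.sqrt(v) for e, v in zip(effects, variances)]
    x = [1.0 / math.sqrt(v) for v in variances]
    n = len(x)
    xbar, ybar = sum(x) / n, sum(y) / n
    sxx = sum((xi - xbar) ** 2 for xi in x)
    slope = sum((xi - xbar) * (yi - ybar) for xi, yi in zip(x, y)) / sxx
    intercept = ybar - slope * xbar
    resid = [yi - intercept - slope * xi for xi, yi in zip(x, y)]
    s2 = sum(r * r for r in resid) / (n - 2)       # residual variance
    se_intercept = math.sqrt(s2 * (1.0 / n + xbar ** 2 / sxx))
    return intercept, intercept / se_intercept

# Hypothetical Fisher-z effects with sampling variances 1/(n - 3):
z = [0.42, 0.29, 0.18, 0.12, 0.10]
v = [1 / 27, 1 / 57, 1 / 117, 1 / 297, 1 / 497]
b0, t0 = egger_test(z, v)
```

In line with the triangulation approach, such a regression-based result should be read alongside graphical and rank-based checks, not in isolation.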

2.7.2 Model choice

After controlling and correcting for the potential presence of impactful outliers or publication bias, the next step in a meta-analysis is the primary analysis, where meta-analysts must decide between two types of models based on different assumptions: fixed-effects and random-effects models (Borenstein et al. 2010). Fixed-effects models assume that all observations share a common mean effect size, meaning that differences are only due to sampling error, while random-effects models assume heterogeneity and allow the true effect sizes to vary across studies (Borenstein et al. 2010; Cheung and Vijayakumar 2016; Hunter and Schmidt 2004). Both models are explained in detail in standard textbooks (e.g., Borenstein et al. 2009; Hunter and Schmidt 2004; Lipsey and Wilson 2001).

In general, the presence of heterogeneity is likely in management meta-analyses because most studies do not have identical empirical settings, which can yield different effect size strengths or directions for the same investigated phenomenon. For example, the identified studies may have been conducted in different countries with different institutional settings, or the type of study participants may vary (e.g., students vs. employees, blue-collar vs. white-collar workers, or manufacturing vs. service firms). Thus, the vast majority of meta-analyses in management research and related fields use random-effects models (Aguinis et al. 2011a). In a meta-regression, the random-effects model turns into a so-called mixed-effects model because moderator variables are added as fixed effects to explain the impact of observed study characteristics on effect size variations (Raudenbush 2009).
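The random-effects computation itself is short. The sketch below uses the widely applied DerSimonian-Laird estimator of the between-study variance τ²; this specific estimator choice and the input numbers are ours for illustration (several alternative estimators exist, and textbook treatments are found in Borenstein et al. 2009):

```python
import math

def dersimonian_laird(effects, variances):
    """Random-effects pooling with the DerSimonian-Laird estimator of the
    between-study variance tau^2. Returns (pooled effect, tau2)."""
    w = [1.0 / v for v in variances]
    sw = sum(w)
    fixed = sum(wi * yi for wi, yi in zip(w, effects)) / sw
    # Cochran's Q measures excess variability beyond sampling error:
    q = sum(wi * (yi - fixed) ** 2 for wi, yi in zip(w, effects))
    df = len(effects) - 1
    c = sw - sum(wi ** 2 for wi in w) / sw
    tau2 = max(0.0, (q - df) / c)                  # truncate at zero
    w_re = [1.0 / (v + tau2) for v in variances]   # re-weight the studies
    pooled = sum(wi * yi for wi, yi in zip(w_re, effects)) / sum(w_re)
    return pooled, tau2

# Hypothetical Fisher-z effects with sampling variances 1/(n - 3):
z = [0.31, 0.22, 0.44, 0.05]
v = [1 / 117, 1 / 247, 1 / 77, 1 / 197]
pooled, tau2 = dersimonian_laird(z, v)
```

When the observed effects are homogeneous, τ² truncates to zero and the random-effects estimate coincides with the fixed-effect one, which illustrates why the model choice matters most under heterogeneity.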

2.8 Step 8: reporting results

2.8.1 Reporting in the article

The final step in performing a meta-analysis is reporting its results. Most importantly, all steps and methodological decisions should be comprehensible to the reader. DeSimone et al. (2020) provide an extensive checklist for journal reviewers of meta-analytical studies. This checklist can also be used by authors when performing their analyses and reporting their results to ensure that all important aspects have been addressed. Alternative checklists are provided, for example, by Appelbaum et al. (2018) and Page et al. (2021). Similarly, Levitt et al. (2018) provide a detailed guide for qualitative meta-analysis reporting standards.

For quantitative meta-analyses, tables reporting results should include all important information and test statistics, including mean effect sizes; standard errors and confidence intervals; the number of observations and study samples included; and heterogeneity measures. If the meta-analytic sample is rather small, a forest plot provides a good overview of the different findings and their precision. However, such a figure becomes impractical for meta-analyses with several hundred effect sizes. Also, results displayed in tables and figures must be explained verbally in the results and discussion sections. Most importantly, authors must answer the primary research question, i.e., whether there is a positive, negative, or no relationship between the variables of interest, or whether the examined intervention has a certain effect. These results should be interpreted with regard to their magnitude and significance, both statistically and economically. When discussing meta-analytical results, authors must also convey the complexity of the findings, including the identified heterogeneity and important moderators, future research directions, and theoretical relevance (DeSimone et al. 2019). In particular, the discussion of identified heterogeneity and underlying moderator effects is critical; omitting this information can lead readers to false conclusions, as they may interpret the reported mean effect size as universal for all included primary studies and ignore the variability of findings when citing the meta-analytic results in their research (Aytug et al. 2012; DeSimone et al. 2019).
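For small samples, such a summary can even be sketched as a plain-text table with a rough forest-style marker per study; the study names and numbers below are hypothetical, and 1.96 is the normal-approximation multiplier for a 95% confidence interval:

```python
def result_row(name, effect, se, lo_axis=-1.0, hi_axis=1.0, width=21):
    """One line of a results table: effect, 95% CI, and a crude text
    'forest' marker placed on an axis running from lo_axis to hi_axis."""
    lo, hi = effect - 1.96 * se, effect + 1.96 * se
    pos = round((effect - lo_axis) / (hi_axis - lo_axis) * (width - 1))
    pos = min(max(pos, 0), width - 1)
    axis = "".join("*" if i == pos else "-" for i in range(width))
    return f"{name:<10} {effect:6.2f} [{lo:6.2f}; {hi:6.2f}]  {axis}"

print(f"{'Study':<10} {'g':>6} {'95% CI':>16}")
for name, g, se in [("Study A", 0.31, 0.10), ("Study B", -0.05, 0.18),
                    ("Study C", 0.52, 0.07)]:
    print(result_row(name, g, se))
```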

2.8.2 Open-science practices

Another increasingly important topic is the public provision of meta-analytical datasets and statistical code via open-source repositories. Open-science practices allow for the validation of results and the reuse of coded data in subsequent meta-analyses (Polanin et al. 2020), contributing to the development of cumulative science. Steel et al. (2021) refer to open-science meta-analyses as a step towards "living systematic reviews" (Elliott et al. 2017) with continuous updates in real time. MRQ supports this development and encourages authors to make their datasets publicly available. Moreau and Gamble (2020), for example, provide various templates and video tutorials for conducting open-science meta-analyses. Several open-science repositories exist, such as the Open Science Framework (OSF; for a tutorial, see Soderberg 2018), for preregistering protocols and making documents publicly available. Furthermore, several initiatives in the social sciences have been established to develop dynamic meta-analyses, such as metaBUS (Bosco et al. 2015, 2017), MetaLab (Bergmann et al. 2018), or PsychOpen CAMA (Burgard et al. 2021).

3 Conclusion

This editorial provides a comprehensive overview of the essential steps in conducting and reporting a meta-analysis with references to more in-depth methodological articles. It also serves as a guide for meta-analyses submitted to MRQ and other management journals. MRQ welcomes all types of meta-analyses from all subfields and disciplines of management research.

Gusenbauer and Haddaway (2020), however, point out that Google Scholar is not appropriate as a primary search engine due to a lack of reproducibility of search results.

One effect size calculator by David B. Wilson is accessible via: https://www.campbellcollaboration.org/escalc/html/EffectSizeCalculator-Home.php .

The macros of David B. Wilson can be downloaded from: http://mason.gmu.edu/~dwilsonb/ .

The macros of Field and Gillett (2010) can be downloaded from: https://www.discoveringstatistics.com/repository/fieldgillett/how_to_do_a_meta_analysis.html .

The tutorials can be found via: https://www.metafor-project.org/doku.php .

metafor does not currently provide functions to conduct MASEM. For MASEM, users can, for instance, use the metaSEM package (Cheung 2015b).

The workbooks can be downloaded from: https://www.erim.eur.nl/research-support/meta-essentials/ .

Aguinis H, Dalton DR, Bosco FA, Pierce CA, Dalton CM (2011a) Meta-analytic choices and judgment calls: Implications for theory building and testing, obtained effect sizes, and scholarly impact. J Manag 37(1):5–38

Aguinis H, Gottfredson RK, Joo H (2013) Best-practice recommendations for defining, identifying, and handling outliers. Organ Res Methods 16(2):270–301

Aguinis H, Gottfredson RK, Wright TA (2011b) Best-practice recommendations for estimating interaction effects using meta-analysis. J Organ Behav 32(8):1033–1043

Aguinis H, Pierce CA, Bosco FA, Dalton DR, Dalton CM (2011c) Debunking myths and urban legends about meta-analysis. Organ Res Methods 14(2):306–331

Aloe AM (2015) Inaccuracy of regression results in replacing bivariate correlations. Res Synth Methods 6(1):21–27

Anderson RG, Kichkha A (2017) Replication, meta-analysis, and research synthesis in economics. Am Econ Rev 107(5):56–59

Appelbaum M, Cooper H, Kline RB, Mayo-Wilson E, Nezu AM, Rao SM (2018) Journal article reporting standards for quantitative research in psychology: the APA publications and communications BOARD task force report. Am Psychol 73(1):3–25

Aytug ZG, Rothstein HR, Zhou W, Kern MC (2012) Revealed or concealed? Transparency of procedures, decisions, and judgment calls in meta-analyses. Organ Res Methods 15(1):103–133

Begg CB, Mazumdar M (1994) Operating characteristics of a rank correlation test for publication bias. Biometrics 50(4):1088–1101. https://doi.org/10.2307/2533446

Bergh DD, Aguinis H, Heavey C, Ketchen DJ, Boyd BK, Su P, Lau CLL, Joo H (2016) Using meta-analytic structural equation modeling to advance strategic management research: Guidelines and an empirical illustration via the strategic leadership-performance relationship. Strateg Manag J 37(3):477–497

Becker BJ (1992) Using results from replicated studies to estimate linear models. J Educ Stat 17(4):341–362

Becker BJ (1995) Corrections to “Using results from replicated studies to estimate linear models.” J Edu Behav Stat 20(1):100–102

Bergmann C, Tsuji S, Piccinini PE, Lewis ML, Braginsky M, Frank MC, Cristia A (2018) Promoting replicability in developmental research through meta-analyses: Insights from language acquisition research. Child Dev 89(6):1996–2009

Bernerth JB, Aguinis H (2016) A critical review and best-practice recommendations for control variable usage. Pers Psychol 69(1):229–283

Bernerth JB, Cole MS, Taylor EC, Walker HJ (2018) Control variables in leadership research: A qualitative and quantitative review. J Manag 44(1):131–160

Bijmolt TH, Pieters RG (2001) Meta-analysis in marketing when studies contain multiple measurements. Mark Lett 12(2):157–169

Block J, Kuckertz A (2018) Seven principles of effective replication studies: Strengthening the evidence base of management research. Manag Rev Quart 68:355–359

Borenstein M (2009) Effect sizes for continuous data. In: Cooper H, Hedges LV, Valentine JC (eds) The handbook of research synthesis and meta-analysis. Russell Sage Foundation, pp 221–235

Borenstein M, Hedges LV, Higgins JPT, Rothstein HR (2009) Introduction to meta-analysis. John Wiley, Chichester

Borenstein M, Hedges LV, Higgins JPT, Rothstein HR (2010) A basic introduction to fixed-effect and random-effects models for meta-analysis. Res Synth Methods 1(2):97–111

Borenstein M, Hedges L, Higgins J, Rothstein H (2013) Comprehensive meta-analysis (version 3). Biostat, Englewood, NJ

Borenstein M, Higgins JP (2013) Meta-analysis and subgroups. Prev Sci 14(2):134–143

Bosco FA, Steel P, Oswald FL, Uggerslev K, Field JG (2015) Cloud-based meta-analysis to bridge science and practice: Welcome to metaBUS. Person Assess Decis 1(1):3–17

Bosco FA, Uggerslev KL, Steel P (2017) MetaBUS as a vehicle for facilitating meta-analysis. Hum Resour Manag Rev 27(1):237–254

Burgard T, Bošnjak M, Studtrucker R (2021) Community-augmented meta-analyses (CAMAs) in psychology: potentials and current systems. Zeitschrift Für Psychologie 229(1):15–23

Cheung MWL (2015a) Meta-analysis: A structural equation modeling approach. John Wiley & Sons, Chichester

Cheung MWL (2015b) metaSEM: An R package for meta-analysis using structural equation modeling. Front Psychol 5:1521

Cheung MWL (2019) A guide to conducting a meta-analysis with non-independent effect sizes. Neuropsychol Rev 29(4):387–396

Cheung MWL, Chan W (2005) Meta-analytic structural equation modeling: a two-stage approach. Psychol Methods 10(1):40–64

Cheung MWL, Vijayakumar R (2016) A guide to conducting a meta-analysis. Neuropsychol Rev 26(2):121–128

Combs JG, Crook TR, Rauch A (2019) Meta-analytic research in management: contemporary approaches unresolved controversies and rising standards. J Manag Stud 56(1):1–18. https://doi.org/10.1111/joms.12427

DeSimone JA, Köhler T, Schoen JL (2019) If it were only that easy: the use of meta-analytic research by organizational scholars. Organ Res Methods 22(4):867–891. https://doi.org/10.1177/1094428118756743

DeSimone JA, Brannick MT, O’Boyle EH, Ryu JW (2020) Recommendations for reviewing meta-analyses in organizational research. Organ Res Methods 56:455–463

Duval S, Tweedie R (2000a) Trim and fill: a simple funnel-plot–based method of testing and adjusting for publication bias in meta-analysis. Biometrics 56(2):455–463

Duval S, Tweedie R (2000b) A nonparametric “trim and fill” method of accounting for publication bias in meta-analysis. J Am Stat Assoc 95(449):89–98

Egger M, Smith GD, Schneider M, Minder C (1997) Bias in meta-analysis detected by a simple, graphical test. BMJ 315(7109):629–634

Eisend M (2017) Meta-Analysis in advertising research. J Advert 46(1):21–35

Elliott JH, Synnot A, Turner T, Simmons M, Akl EA, McDonald S, Salanti G, Meerpohl J, MacLehose H, Hilton J, Tovey D, Shemilt I, Thomas J (2017) Living systematic review: 1. Introduction—the why, what, when, and how. J Clin Epidemiol 91:23–30. https://doi.org/10.1016/j.jclinepi.2017.08.010

Field AP, Gillett R (2010) How to do a meta-analysis. Br J Math Stat Psychol 63(3):665–694

Fisch C, Block J (2018) Six tips for your (systematic) literature review in business and management research. Manag Rev Quart 68:103–106

Fortunato S, Bergstrom CT, Börner K, Evans JA, Helbing D, Milojević S, Petersen AM, Radicchi F, Sinatra R, Uzzi B, Vespignani A (2018) Science of science. Science 359(6379). https://doi.org/10.1126/science.aao0185

Geyer-Klingeberg J, Hang M, Rathgeber A (2020) Meta-analysis in finance research: Opportunities, challenges, and contemporary applications. Int Rev Finan Anal 71:101524

Geyskens I, Krishnan R, Steenkamp JBE, Cunha PV (2009) A review and evaluation of meta-analysis practices in management research. J Manag 35(2):393–419

Glass GV (2015) Meta-analysis at middle age: a personal history. Res Synth Methods 6(3):221–231

Gonzalez-Mulé E, Aguinis H (2018) Advancing theory by assessing boundary conditions with metaregression: a critical review and best-practice recommendations. J Manag 44(6):2246–2273

Gooty J, Banks GC, Loignon AC, Tonidandel S, Williams CE (2021) Meta-analyses as a multi-level model. Organ Res Methods 24(2):389–411. https://doi.org/10.1177/1094428119857471

Grewal D, Puccinelli N, Monroe KB (2018) Meta-analysis: integrating accumulated knowledge. J Acad Mark Sci 46(1):9–30

Gurevitch J, Koricheva J, Nakagawa S, Stewart G (2018) Meta-analysis and the science of research synthesis. Nature 555(7695):175–182

Gusenbauer M, Haddaway NR (2020) Which academic search systems are suitable for systematic reviews or meta-analyses? Evaluating retrieval qualities of Google Scholar, PubMed, and 26 other resources. Res Synth Methods 11(2):181–217

Habersang S, Küberling-Jost J, Reihlen M, Seckler C (2019) A process perspective on organizational failure: a qualitative meta-analysis. J Manage Stud 56(1):19–56

Harari MB, Parola HR, Hartwell CJ, Riegelman A (2020) Literature searches in systematic reviews and meta-analyses: A review, evaluation, and recommendations. J Vocat Behav 118:103377

Harrison JS, Banks GC, Pollack JM, O’Boyle EH, Short J (2017) Publication bias in strategic management research. J Manag 43(2):400–425

Havránek T, Stanley TD, Doucouliagos H, Bom P, Geyer-Klingeberg J, Iwasaki I, Reed WR, Rost K, Van Aert RCM (2020) Reporting guidelines for meta-analysis in economics. J Econ Surveys 34(3):469–475

Hedges LV, Olkin I (1985) Statistical methods for meta-analysis. Academic Press, Orlando

Hedges LV, Vevea JL (2005) Selection methods approaches. In: Rothstein HR, Sutton A, Borenstein M (eds) Publication bias in meta-analysis: prevention, assessment, and adjustments. Wiley, Chichester, pp 145–174

Hoon C (2013) Meta-synthesis of qualitative case studies: an approach to theory building. Organ Res Methods 16(4):522–556

Hunter JE, Schmidt FL (1990) Methods of meta-analysis: correcting error and bias in research findings. Sage, Newbury Park

Hunter JE, Schmidt FL (2004) Methods of meta-analysis: correcting error and bias in research findings, 2nd edn. Sage, Thousand Oaks

Hunter JE, Schmidt FL, Jackson GB (1982) Meta-analysis: cumulating research findings across studies. Sage Publications, Beverly Hills

Jak S (2015) Meta-analytic structural equation modelling. Springer, New York, NY

Kepes S, Banks GC, McDaniel M, Whetzel DL (2012) Publication bias in the organizational sciences. Organ Res Methods 15(4):624–662

Kepes S, McDaniel MA, Brannick MT, Banks GC (2013) Meta-analytic reviews in the organizational sciences: Two meta-analytic schools on the way to MARS (the Meta-Analytic Reporting Standards). J Bus Psychol 28(2):123–143

Kraus S, Breier M, Dasí-Rodríguez S (2020) The art of crafting a systematic literature review in entrepreneurship research. Int Entrepreneur Manag J 16(3):1023–1042

Levitt HM (2018) How to conduct a qualitative meta-analysis: tailoring methods to enhance methodological integrity. Psychother Res 28(3):367–378

Levitt HM, Bamberg M, Creswell JW, Frost DM, Josselson R, Suárez-Orozco C (2018) Journal article reporting standards for qualitative primary, qualitative meta-analytic, and mixed methods research in psychology: the APA publications and communications board task force report. Am Psychol 73(1):26

Lipsey MW, Wilson DB (2001) Practical meta-analysis. Sage Publications, Inc.

López-López JA, Page MJ, Lipsey MW, Higgins JP (2018) Dealing with effect size multiplicity in systematic reviews and meta-analyses. Res Synth Methods 9(3):336–351

Martín-Martín A, Thelwall M, Orduna-Malea E, López-Cózar ED (2021) Google Scholar, Microsoft Academic, Scopus, Dimensions, Web of Science, and OpenCitations’ COCI: a multidisciplinary comparison of coverage via citations. Scientometrics 126(1):871–906

Merton RK (1968) The Matthew effect in science: the reward and communication systems of science are considered. Science 159(3810):56–63

Moeyaert M, Ugille M, Natasha Beretvas S, Ferron J, Bunuan R, Van den Noortgate W (2017) Methods for dealing with multiple outcomes in meta-analysis: a comparison between averaging effect sizes, robust variance estimation and multilevel meta-analysis. Int J Soc Res Methodol 20(6):559–572

Moher D, Liberati A, Tetzlaff J, Altman DG, PRISMA Group (2009) Preferred reporting items for systematic reviews and meta-analyses: the PRISMA statement. PLoS Med 6(7):e1000097

Mongeon P, Paul-Hus A (2016) The journal coverage of Web of Science and Scopus: a comparative analysis. Scientometrics 106(1):213–228

Moreau D, Gamble B (2020) Conducting a meta-analysis in the age of open science: Tools, tips, and practical recommendations. Psychol Methods. https://doi.org/10.1037/met0000351

O’Mara-Eves A, Thomas J, McNaught J, Miwa M, Ananiadou S (2015) Using text mining for study identification in systematic reviews: a systematic review of current approaches. Syst Rev 4(1):1–22

Ouzzani M, Hammady H, Fedorowicz Z, Elmagarmid A (2016) Rayyan—a web and mobile app for systematic reviews. Syst Rev 5(1):1–10

Owen E, Li Q (2021) The conditional nature of publication bias: a meta-regression analysis. Polit Sci Res Methods 9(4):867–877

Page MJ, McKenzie JE, Bossuyt PM, Boutron I, Hoffmann TC, Mulrow CD, Shamseer L, Tetzlaff JM, Akl EA, Brennan SE, Chou R, Glanville J, Grimshaw JM, Hróbjartsson A, Lalu MM, Li T, Loder EW, Mayo-Wilson E, McDonald S, McGuinness LA, Stewart LA, Thomas J, Tricco AC, Welch VA, Whiting P, Moher D (2021) The PRISMA 2020 statement: an updated guideline for reporting systematic reviews. BMJ 372. https://doi.org/10.1136/bmj.n71

Palmer TM, Sterne JAC (eds) (2016) Meta-analysis in Stata: an updated collection from the Stata Journal, 2nd edn. Stata Press, College Station, TX

Pigott TD, Polanin JR (2020) Methodological guidance paper: High-quality meta-analysis in a systematic review. Rev Educ Res 90(1):24–46

Polanin JR, Tanner-Smith EE, Hennessy EA (2016) Estimating the difference between published and unpublished effect sizes: a meta-review. Rev Educ Res 86(1):207–236

Polanin JR, Hennessy EA, Tanner-Smith EE (2017) A review of meta-analysis packages in R. J Edu Behav Stat 42(2):206–242

Polanin JR, Hennessy EA, Tsuji S (2020) Transparency and reproducibility of meta-analyses in psychology: a meta-review. Perspect Psychol Sci 15(4):1026–1041. https://doi.org/10.1177/17456916209064

R Core Team (2021) R: a language and environment for statistical computing. R Foundation for Statistical Computing, Vienna, Austria. https://www.R-project.org/

Rauch A (2020) Opportunities and threats in reviewing entrepreneurship theory and practice. Entrep Theory Pract 44(5):847–860

Rauch A, van Doorn R, Hulsink W (2014) A qualitative approach to evidence–based entrepreneurship: theoretical considerations and an example involving business clusters. Entrep Theory Pract 38(2):333–368

Raudenbush SW (2009) Analyzing effect sizes: Random-effects models. In: Cooper H, Hedges LV, Valentine JC (eds) The handbook of research synthesis and meta-analysis, 2nd edn. Russell Sage Foundation, New York, NY, pp 295–315

Rosenthal R (1979) The file drawer problem and tolerance for null results. Psychol Bull 86(3):638

Rothstein HR, Sutton AJ, Borenstein M (2005) Publication bias in meta-analysis: prevention, assessment and adjustments. Wiley, Chichester

Roth PL, Le H, Oh I-S, Van Iddekinge CH, Bobko P (2018) Using beta coefficients to impute missing correlations in meta-analysis research: Reasons for caution. J Appl Psychol 103(6):644–658. https://doi.org/10.1037/apl0000293

Rudolph CW, Chang CK, Rauvola RS, Zacher H (2020) Meta-analysis in vocational behavior: a systematic review and recommendations for best practices. J Vocat Behav 118:103397

Schmidt FL (2017) Statistical and measurement pitfalls in the use of meta-regression in meta-analysis. Career Dev Int 22(5):469–476

Schmidt FL, Hunter JE (2015) Methods of meta-analysis: correcting error and bias in research findings. Sage, Thousand Oaks

Schwab A (2015) Why all researchers should report effect sizes and their confidence intervals: Paving the way for meta–analysis and evidence–based management practices. Entrepreneurship Theory Pract 39(4):719–725. https://doi.org/10.1111/etap.12158

Shaw JD, Ertug G (2017) The suitability of simulations and meta-analyses for submissions to Academy of Management Journal. Acad Manag J 60(6):2045–2049

Soderberg CK (2018) Using OSF to share data: A step-by-step guide. Adv Methods Pract Psychol Sci 1(1):115–120

Stanley TD, Doucouliagos H (2010) Picture this: a simple graph that reveals much ado about research. J Econ Surveys 24(1):170–191

Stanley TD, Doucouliagos H (2012) Meta-regression analysis in economics and business. Routledge, London

Stanley TD, Jarrell SB (1989) Meta-regression analysis: a quantitative method of literature surveys. J Econ Surveys 3:54–67

Steel P, Beugelsdijk S, Aguinis H (2021) The anatomy of an award-winning meta-analysis: Recommendations for authors, reviewers, and readers of meta-analytic reviews. J Int Bus Stud 52(1):23–44

Suurmond R, van Rhee H, Hak T (2017) Introduction, comparison, and validation of Meta-Essentials: a free and simple tool for meta-analysis. Res Synth Methods 8(4):537–553

The Cochrane Collaboration (2020). Review Manager (RevMan) [Computer program] (Version 5.4).

Thomas J, Noel-Storr A, Marshall I, Wallace B, McDonald S, Mavergames C, Glasziou P, Shemilt I, Synnot A, Turner T, Elliot J (2017) Living systematic reviews: 2. Combining human and machine effort. J Clin Epidemiol 91:31–37

Thompson SG, Higgins JP (2002) How should meta-regression analyses be undertaken and interpreted? Stat Med 21(11):1559–1573

Tipton E, Pustejovsky JE, Ahmadi H (2019) A history of meta-regression: technical, conceptual, and practical developments between 1974 and 2018. Res Synth Methods 10(2):161–179

Vevea JL, Woods CM (2005) Publication bias in research synthesis: Sensitivity analysis using a priori weight functions. Psychol Methods 10(4):428–443

Viechtbauer W (2010) Conducting meta-analyses in R with the metafor package. J Stat Softw 36(3):1–48

Viechtbauer W, Cheung MWL (2010) Outlier and influence diagnostics for meta-analysis. Res Synth Methods 1(2):112–125

Viswesvaran C, Ones DS (1995) Theory testing: combining psychometric meta-analysis and structural equations modeling. Pers Psychol 48(4):865–885

Wilson SJ, Polanin JR, Lipsey MW (2016) Fitting meta-analytic structural equation models with complex datasets. Res Synth Methods 7(2):121–139. https://doi.org/10.1002/jrsm.1199

Wood JA (2008) Methodology for dealing with duplicate study effects in a meta-analysis. Organ Res Methods 11(1):79–95

Open Access funding enabled and organized by Projekt DEAL. No funding was received to assist with the preparation of this manuscript.

Author information

Authors and affiliations

Christopher Hansen: University of Luxembourg, Luxembourg, Luxembourg

Holger Steinmetz: Leibniz Institute for Psychology (ZPID), Trier, Germany

Jörn Block: Trier University, Trier, Germany; Erasmus University Rotterdam, Rotterdam, The Netherlands; Wittener Institut für Familienunternehmen, Universität Witten/Herdecke, Witten, Germany

Corresponding author

Correspondence to Jörn Block .

Ethics declarations

Conflict of interest

The authors have no relevant financial or non-financial interests to disclose.

Additional information

Publisher's note

Springer Nature remains neutral with regard to jurisdictional claims in published maps and institutional affiliations.

See Table 1 .

Rights and permissions

Open Access This article is licensed under a Creative Commons Attribution 4.0 International License, which permits use, sharing, adaptation, distribution and reproduction in any medium or format, as long as you give appropriate credit to the original author(s) and the source, provide a link to the Creative Commons licence, and indicate if changes were made. The images or other third party material in this article are included in the article's Creative Commons licence, unless indicated otherwise in a credit line to the material. If material is not included in the article's Creative Commons licence and your intended use is not permitted by statutory regulation or exceeds the permitted use, you will need to obtain permission directly from the copyright holder. To view a copy of this licence, visit http://creativecommons.org/licenses/by/4.0/ .

About this article

Hansen, C., Steinmetz, H. & Block, J. How to conduct a meta-analysis in eight steps: a practical guide. Manag Rev Q 72 , 1–19 (2022). https://doi.org/10.1007/s11301-021-00247-4

Published : 30 November 2021

Issue Date : February 2022

DOI : https://doi.org/10.1007/s11301-021-00247-4


PLoS Comput Biol, 15(5), May 2019

Ten simple rules for carrying out and writing meta-analyses

Diego A. Forero

1 Laboratory of NeuroPsychiatric Genetics, Biomedical Sciences Research Group, School of Medicine, Universidad Antonio Nariño, Bogotá, Colombia

2 PhD Program in Health Sciences, School of Medicine, Universidad Antonio Nariño, Bogotá, Colombia

Sandra Lopez-Leon

3 Novartis Pharmaceuticals Corporation, East Hanover, New Jersey, United States of America

Yeimy González-Giraldo

4 Departamento de Nutrición y Bioquímica, Facultad de Ciencias, Pontificia Universidad Javeriana, Bogotá, Colombia

Pantelis G. Bagos

5 Department of Computer Science and Biomedical Informatics, University of Thessaly, Lamia, Greece

Introduction

In the context of evidence-based medicine, meta-analyses provide novel and useful information [1], as they sit at the top of the pyramid of evidence and consolidate evidence published in multiple previous reports [2]. Meta-analysis is a powerful tool for cumulating and summarizing the knowledge in a research field [3]. Because of the significant increase in the published scientific literature in recent years, there has also been substantial growth in the number of meta-analyses on a large number of topics [4]. Meta-analyses are among the types of publications that usually receive the largest numbers of citations in the biomedical sciences [5, 6]. The methods and standards for carrying out meta-analyses have evolved accordingly in recent years [7–9].

Although there are several published articles describing comprehensive guidelines for specific types of meta-analyses, there is still the need for an abridged article with general and updated recommendations for researchers interested in the development of meta-analyses. We present here ten simple rules for carrying out and writing meta-analyses.

Rule 1: Specify the topic and type of the meta-analysis

Considering that a systematic review [10] is fundamental to a meta-analysis, you can use the Population, Intervention, Comparison, Outcome (PICO) model to formulate the research question. It is important to verify that there are no published meta-analyses on the specific topic in order to avoid duplication of efforts [11]; in some cases, however, an updated meta-analysis is warranted when additional data become available. Meta-analyses can be carried out for multiple types of studies, such as case-control studies, cohort studies, and randomized clinical trials. As observational studies are more prone to various biases, meta-analyses of such designs should take this into account. In addition, meta-analyses can be performed for genetic association studies, gene expression studies, genome-wide association studies (GWASs), or data from animal experiments. It is advisable to preregister the systematic review protocol in the International Prospective Register of Systematic Reviews (PROSPERO; https://www.crd.york.ac.uk/Prospero ) database [12]. Keep in mind that an increasing number of journals require registration prior to publication.

Rule 2: Follow available guidelines for different types of meta-analyses

There are several available general guidelines. The first of such efforts were the Quality of Reports of Meta-analyses of Randomized Controlled Trials (QUORUM) [ 13 ] and the Meta-analysis of Observational Studies in Epidemiology (MOOSE) statements [ 14 ], but currently, the Preferred Reporting Items for Systematic reviews and Meta-analyses (PRISMA) [ 15 ] has been broadly cited and used. In addition, there have been efforts to develop specific guidelines regarding meta-analyses for clinical studies (Cochrane Handbook; https://training.cochrane.org/handbook ), genetic association studies [ 16 ], genome-wide expression studies [ 17 ], GWASs [ 18 ], and animal studies [ 19 ].

Rule 3: Establish inclusion criteria and define key variables

You should establish the inclusion criteria (such as type of study or language of publication) and exclusion criteria (such as a minimal sample size) in advance. Keep in mind that the current consensus advises against overly strict criteria concerning language or sample size. You should also clearly define the variables that will be extracted from each primary article. Broad inclusion criteria increase heterogeneity between studies, whereas narrow inclusion criteria can make it difficult to find enough studies; a compromise should therefore be found. Prospective meta-analyses, which are usually carried out by international consortia, have the advantage of allowing the inclusion of individual-level data [20].

Rule 4: Carry out a systematic search in different databases and extract key data

You can carry out your systematic search in several bibliographic databases, such as PubMed, Embase, the Cochrane Central Register of Controlled Trials, Scopus, Web of Science, and Google Scholar [ 21 ]. Searching several databases helps to minimize the risk of failing to identify relevant published studies [ 22 ]. In some areas, searching specialized databases is also worthwhile (such as BIOSIS, the Cumulative Index to Nursing and Allied Health Literature (CINAHL), PsycINFO, Sociological Abstracts, and EconLit); in other cases, a direct search for the data is advisable (e.g., the Gene Expression Omnibus [GEO] database for gene expression studies) [ 23 ]. The bibliographies of review articles can help to identify additional articles, as well as data from other types of documents (such as theses or conference proceedings) that might be included in your meta-analysis. The Web of Science database can also be used to identify publications that have cited key articles.

Adequate extraction and recording of key data from the primary articles are fundamental for carrying out a meta-analysis. Quality assessment of the included studies is also an important issue; it can inform inclusion criteria, sensitivity analyses, or differential weighting of the studies. For example, the Jadad scale [ 24 ] is frequently used for randomized clinical trials, the Newcastle–Ottawa scale [ 25 ] for nonrandomized studies, and QUADAS-2 for the quality assessment of diagnostic accuracy studies [ 26 ]. It is recommended that these steps be carried out by two researchers in parallel and that discrepancies be resolved by consensus. Nevertheless, the reader must be aware that quality assessment has been criticized, especially when it reduces the studies to a single “quality” score [ 27 , 28 ]. In any case, it is important not to confuse guidelines for the reporting of primary studies with scales for assessing the quality of included articles [ 29 , 30 ].
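As a rough illustration of the record-handling step, the sketch below merges hypothetical search results from two databases and removes duplicates before screening. The record fields and matching keys (DOI, or a normalized title–year pair) are assumptions for the example, not a prescribed scheme.

```python
def deduplicate(records):
    """Keep the first occurrence of each record, matching on DOI when
    available and otherwise on a normalized (title, year) pair."""
    seen = set()
    unique = []
    for rec in records:
        key = rec.get("doi") or (rec["title"].casefold().strip(), rec["year"])
        if key not in seen:
            seen.add(key)
            unique.append(rec)
    return unique

# Hypothetical search exports from two databases
pubmed = [{"doi": "10.1000/a1", "title": "Trial A", "year": 2019},
          {"doi": None, "title": "Study B", "year": 2020}]
scopus = [{"doi": "10.1000/a1", "title": "Trial A", "year": 2019},
          {"doi": None, "title": "Study C", "year": 2021}]

merged = deduplicate(pubmed + scopus)
print(len(merged))  # 3 unique records out of 4 retrieved
```

In practice, reference managers and dedicated screening tools perform this matching more robustly (e.g., with fuzzy title matching), but recording the counts before and after deduplication is what matters for the flow diagram.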

Rule 5: Contact authors of primary articles to ask for missing data

It is common that key data are not available in the main text or supplementary files of primary articles [ 31 ], making it necessary to contact the authors to ask for the missing data. However, the rate of response from authors is often lower than expected. There are multiple standards that promote the availability of primary data in published articles, such as the Minimum Information About a Microarray Experiment (MIAME) [ 32 ] and the STrengthening the REporting of Genetic Association Studies (STREGA) [ 33 ]. In some areas, such as genetics, in which it has been shown that an individual can be identified from the aggregated statistics of a particular study [ 34 ], strict criteria are imposed for data sharing, and specialized permissions might be needed.

Rule 6: Select the best statistical models for your question

For cases in which there is enough primary data of adequate quality for a quantitative summary, there is the option to carry out a meta-analysis. The potential analyst must be warned that in many cases the data are reported in noncompatible forms, so one must be ready to perform various types of transformations. Thankfully, there are methods available for extracting and transforming data regarding continuous variables [ 35 – 37 ], 2 × 2 tables [ 38 , 39 ], or survival data [ 40 ]. Frequently, meta-analyses are based on fixed-effects or random-effects statistical models [ 20 ]. In addition, models based on combining ranks or p -values are also available and can be used in specific cases [ 41 – 44 ]. For more complex data, multivariate methods for meta-analysis have been proposed [ 45 , 46 ]. Additional statistical examinations involve sensitivity analyses, metaregressions, subgroup analyses, and calculation of heterogeneity metrics, such as Q or I² [ 20 ]. It is fundamental to assess and, if present, explain the possible sources of heterogeneity. Although random-effects models are suitable for cases of between-studies heterogeneity, the sources of between-studies variation should be identified, and their impact on effect size should be quantified using statistical tests, such as subgroup analyses or metaregression. Publication bias is an important aspect to consider [ 47 ], since in many cases negative findings have less probability of being published. Other types of bias, such as the so-called “Proteus phenomenon” [ 48 ] or “winner’s curse” [ 49 ], are common in some scientific fields, such as genetics, and the approach of cumulative meta-analysis is suggested in order to identify them.
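The random-effects computation and the Q and I² heterogeneity metrics mentioned above can be sketched in a few lines. The following minimal Python example implements DerSimonian–Laird pooling; the effect sizes and variances are invented for illustration, and a real analysis should rely on an established, validated package.

```python
import math

def random_effects_meta(effects, variances):
    """DerSimonian-Laird random-effects pooling of study effect sizes.
    Returns the pooled effect, its 95% CI, Cochran's Q, and I^2 (%)."""
    w = [1.0 / v for v in variances]                      # fixed-effect weights
    fixed = sum(wi * yi for wi, yi in zip(w, effects)) / sum(w)
    q = sum(wi * (yi - fixed) ** 2 for wi, yi in zip(w, effects))
    df = len(effects) - 1
    c = sum(w) - sum(wi ** 2 for wi in w) / sum(w)
    tau2 = max(0.0, (q - df) / c)                         # between-study variance
    w_star = [1.0 / (v + tau2) for v in variances]        # random-effects weights
    pooled = sum(wi * yi for wi, yi in zip(w_star, effects)) / sum(w_star)
    se = math.sqrt(1.0 / sum(w_star))
    i2 = max(0.0, (q - df) / q) * 100 if q > 0 else 0.0
    return pooled, (pooled - 1.96 * se, pooled + 1.96 * se), q, i2

# Illustrative effect sizes (e.g., log odds ratios) and their variances
effects = [0.10, 0.30, 0.35, 0.65]
variances = [0.03, 0.04, 0.05, 0.04]
pooled, ci, q, i2 = random_effects_meta(effects, variances)
```

When tau² shrinks to zero, the random-effects weights collapse to the fixed-effect weights, which is why the two models agree in the absence of between-studies heterogeneity.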

Rule 7: Use available software to carry out the meta-statistics

There are several very user-friendly and freely available programs for carrying out meta-analyses [ 43 , 44 ], either within the framework of a statistical package such as Stata or R or as stand-alone applications. Stata and R [ 50 – 52 ] have dozens of routines, mostly user-written, that can handle most meta-analysis tasks, even complex analyses such as network meta-analysis and meta-analyses of GWASs and gene expression studies ( https://cran.r-project.org/web/views/MetaAnalysis.html ; https://www.stata.com/support/faqs/statistics/meta-analysis ). There are also stand-alone packages and web services that can be useful for general applications or for specific areas, such as OpenMetaAnalyst [ 53 ], NetworkAnalyst [ 54 ], JASP [ 55 ], MetaGenyo [ 56 ], Cochrane RevMan ( https://community.cochrane.org/help/tools-and-software/revman-5 ), EpiSheet (krothman.org/episheet.xls), GWAR [ 57 ], GWAMA [ 58 ], and METAL [ 59 ]. Note that some programs can present issues when run because of their dependencies on other packages.

Rule 8: The records and study report must be complete and transparent

Following published guidelines for meta-analyses guarantees that the manuscript will describe the different steps and methods used, facilitating their transparency and replicability [ 15 ]. Data such as search and inclusion criteria, numbers of abstracts screened, and included studies are quite useful, in addition to details of meta-analytical strategies used. An assessment of quality of included studies is also useful [ 60 ]. A spreadsheet can be constructed in which every step in the selection criteria is recorded; this will be helpful to construct flow charts. In this context, a flow diagram describing the progression between the different steps is quite useful and might enhance the quality of the meta-analysis [ 61 ]. Records will be also useful if, in the future, the meta-analysis needs to be updated. Stating the limitations of the analysis is also important [ 62 ].
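The spreadsheet of selection steps can be as simple as an ordered list of stage counts, from which the numbers excluded at each transition (as reported in a flow diagram) follow directly. A minimal sketch, with invented counts and PRISMA-style stage names:

```python
# Illustrative screening counts; stage names follow a PRISMA-style flow diagram.
flow = [
    ("records identified", 1240),
    ("after duplicates removed", 980),
    ("after title/abstract screening", 150),
    ("after full-text assessment", 32),
    ("included in meta-analysis", 27),
]

def excluded_per_stage(flow):
    """Number of records dropped at each transition between stages."""
    return [(flow[i + 1][0], flow[i][1] - flow[i + 1][1])
            for i in range(len(flow) - 1)]

for stage, n_excluded in excluded_per_stage(flow):
    print(f"{stage}: {n_excluded} excluded")
```

Keeping these counts (with the reasons for each full-text exclusion) makes constructing the flow chart, and any future update of the meta-analysis, straightforward.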

Rule 9: Provide enough data in your manuscript

A table with complete information about the included studies (such as author, year, details of included subjects, and DOIs or PubMed IDs) is quite useful in an article reporting a meta-analysis; it can be included in the main text of the manuscript or as a supplementary file. The software used to carry out the meta-analysis and to generate key graphs, such as forest plots, should be referenced. Summary effect measures, such as pooled odds ratios or the counts used to generate them, should always be reported, including confidence intervals. It is also possible to generate figures with information from multiple forest plots [ 63 ]. In the case of positive findings, plots from sensitivity analyses are quite informative. In more complex analyses, it is advisable to include the scripts used to generate the results in the supplementary files [ 64 ].
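As an illustration of reporting a summary effect with its confidence interval, the sketch below computes an odds ratio and a 95% CI (Woolf/logit method) from a single hypothetical 2 × 2 table; in a meta-analysis, such study-level estimates would then be pooled across studies.

```python
import math

def odds_ratio_ci(a, b, c, d, z=1.96):
    """Odds ratio and 95% CI (Woolf/logit method) from a 2x2 table:
    a, b = events / non-events in the treatment group,
    c, d = events / non-events in the control group."""
    or_ = (a * d) / (b * c)
    se_log = math.sqrt(1 / a + 1 / b + 1 / c + 1 / d)
    lo = math.exp(math.log(or_) - z * se_log)
    hi = math.exp(math.log(or_) + z * se_log)
    return or_, lo, hi

# Hypothetical single study: 20/80 events in treatment, 40/60 in control
or_, lo, hi = odds_ratio_ci(20, 80, 40, 60)
```

Reporting the four cell counts alongside the odds ratio and its CI, as recommended above, lets readers reproduce (and re-pool) the estimate.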

Rule 10: Provide context for your findings and suggest future directions

The Discussion section is an important scientific component of a manuscript describing a meta-analysis, as the authors should discuss their findings in the context of the available scientific literature and existing knowledge [ 65 ]. Authors can discuss possible reasons for the positive or negative results of their meta-analysis, provide an interpretation of the findings based on the available biological or epidemiological evidence, and comment on particular features of the individual studies or experimental designs used [ 66 ]. As meta-analyses synthesize the existing evidence from multiple primary studies, which commonly took years and large amounts of funding to produce, authors can also offer key suggestions for conducting and/or reporting future primary studies [ 67 ].

As open science is becoming more important around the globe [ 68 , 69 ], adherence to published standards, in addition to the evolution of methods for different meta-analytical applications, will be even more important to carry out meta-analyses of high quality and impact.

Funding Statement

YG-G is supported by a PhD fellowship from Centro de Estudios Interdisciplinarios Básicos y Aplicados CEIBA (Rodolfo Llinás Program). DAF is supported by research grants from Colciencias and VCTI. PGB is partially supported by ELIXIR-GR, the Greek Research Infrastructure for data management and analysis in the biosciences. The funders had no role in study design, data collection and analysis, decision to publish, or preparation of the manuscript.
