How to Write a Peer Review

When you write a peer review for a manuscript, what should you include in your comments? What should you leave out? And how should the review be formatted?

This guide provides quick tips for writing and organizing your reviewer report.

Review Outline

Use an outline for your reviewer report so it’s easy for the editors and author to follow. This will also help you keep your comments organized.

Think about structuring your review like an inverted pyramid. Put the most important information at the top, followed by details and examples in the center, and any additional points at the very bottom.

Here’s how your outline might look:

1. Summary of the research and your overall impression

In your own words, summarize what the manuscript claims to report. This shows the editor how you interpreted the manuscript and will highlight any major differences in perspective between you and the other reviewers. Give an overview of the manuscript’s strengths and weaknesses. Think about this as your “take-home” message for the editors. End this section with your recommended course of action.

2. Discussion of specific areas for improvement

It’s helpful to divide this section into two parts: one for major issues and one for minor issues. Within each section, you can talk about the biggest issues first or go systematically figure-by-figure or claim-by-claim. Number each item so that your points are easy to follow (this will also make it easier for the authors to respond to each point). Refer to specific lines, pages, sections, or figure and table numbers so the authors (and editors) know exactly what you’re talking about.

Major vs. minor issues

What’s the difference between a major and a minor issue? Major issues are the essential points the authors need to address before the manuscript can proceed. Make sure you focus on what is fundamental for the current study. In other words, it’s not helpful to recommend additional work that would be considered the “next step” in the study. Minor issues are still important but typically will not affect the overall conclusions of the manuscript. Here are some examples of what might go in the “minor” category:

  • Missing references (but depending on what is missing, this could also be a major issue)
  • Technical clarifications (e.g., the authors should clarify how a reagent works)
  • Data presentation (e.g., the authors should present p-values differently)
  • Typos, spelling, grammar, and phrasing issues

3. Any other points

Confidential comments for the editors.

Some journals have a space for reviewers to enter confidential comments about the manuscript. Use this space to mention concerns about the submission that you’d want the editors to consider before sharing your feedback with the authors, such as concerns about ethical guidelines or language quality. Any serious issues should be raised directly and immediately with the journal as well.

This section is also where you will disclose any potentially competing interests, and mention whether you’re willing to look at a revised version of the manuscript.

Do not use this space to critique the manuscript, since comments entered here will not be passed along to the authors. If you’re not sure what should go in the confidential comments, read the reviewer instructions or check with the journal first before submitting your review. If you are reviewing for a journal that does not offer a space for confidential comments, consider writing to the editorial office directly with your concerns.

Get this outline in a template

Giving Feedback

Giving feedback is hard. Giving effective feedback can be even more challenging. Remember that your ultimate goal is to discuss what the authors would need to do in order to qualify for publication. The point is not to nitpick every piece of the manuscript. Your focus should be on providing constructive and critical feedback that the authors can use to improve their study.

If you’ve ever had your own work reviewed, you already know that it’s not always easy to receive feedback. Follow the golden rule: Write the type of review you’d want to receive if you were the author. Even if you decide not to identify yourself in the review, you should write comments that you would be comfortable signing your name to.

In your comments, use phrases like “the authors’ discussion of X” instead of “your discussion of X.” This will depersonalize the feedback and keep the focus on the manuscript instead of the authors.

General guidelines for effective feedback

Do

  • Justify your recommendation with concrete evidence and specific examples.
  • Be specific so the authors know what they need to do to improve.
  • Be thorough. This might be the only time you read the manuscript.
  • Be professional and respectful. The authors will be reading these comments too.
  • Remember to say what you liked about the manuscript!

Don’t

  • Recommend additional experiments or unnecessary elements that are out of scope for the study or for the journal’s criteria.
  • Tell the authors exactly how to revise their manuscript—you don’t need to do their work for them.
  • Use the review to promote your own research or hypotheses.
  • Focus on typos and grammar. If the manuscript needs significant editing for language and writing quality, just mention this in your comments.
  • Submit your review without proofreading it and checking everything one more time.

Before and After: Sample Reviewer Comments

Keeping in mind the guidelines above, how do you put your thoughts into words? Here are some sample “before” and “after” reviewer comments.

✗ Before

“The authors appear to have no idea what they are talking about. I don’t think they have read any of the literature on this topic.”

✓ After

“The study fails to address how the findings relate to previous research in this area. The authors should rewrite their Introduction and Discussion to reference the related literature, especially recently published work such as Darwin et al.”

✗ Before

“The writing is so bad, it is practically unreadable. I could barely bring myself to finish it.”

✓ After

“While the study appears to be sound, the language is unclear, making it difficult to follow. I advise the authors to work with a writing coach or copyeditor to improve the flow and readability of the text.”

✗ Before

“It’s obvious that this type of experiment should have been included. I have no idea why the authors didn’t use it. This is a big mistake.”

✓ After

“The authors are off to a good start; however, this study requires additional experiments, particularly [type of experiment]. Alternatively, the authors should include more information that clarifies and justifies their choice of methods.”

Suggested Language for Tricky Situations

You might find yourself in a situation where you’re not sure how to explain the problem or provide feedback in a constructive and respectful way. Here is some suggested language for common issues you might experience.

What you think: The manuscript is fatally flawed. What you could say: “The study does not appear to be sound” or “the authors have missed something crucial.”

What you think: You don’t completely understand the manuscript. What you could say: “The authors should clarify the following sections to avoid confusion…”

What you think: The technical details don’t make sense. What you could say: “The technical details should be expanded and clarified to ensure that readers understand exactly what the researchers studied.”

What you think: The writing is terrible. What you could say: “The authors should revise the language to improve readability.”

What you think: The authors have over-interpreted the findings. What you could say: “The authors aim to demonstrate [XYZ]; however, the data does not fully support this conclusion. Specifically…”

What does a good review look like?

Check out the peer review examples at F1000 Research to see how other reviewers write up their reports and give constructive feedback to authors.

Time to Submit the Review!

Be sure you turn in your report on time. Need an extension? Tell the journal so that they know what to expect. If you need a lot of extra time, the journal might need to contact other reviewers or notify the author about the delay.

Tip: Building a relationship with an editor

You’ll be more likely to be asked to review again if you provide high-quality feedback and if you turn in the review on time. Especially if it’s your first review for a journal, it’s important to show that you are reliable. Prove yourself once and you’ll get asked to review again!

  • Getting started as a reviewer
  • Responding to an invitation
  • Reading a manuscript
  • Writing a peer review


How to Write a Peer Review: 12 things you need to know

Joanna Wilkinson

Learning how to peer review is no small feat. You’re responsible for protecting the public from false findings and research flaws, while at the same time helping to uncover legitimate breakthroughs. You’re also asked to constructively critique the research of your peers, some of which has taken blood, sweat, tears and years to put together.

Despite this, peer review doesn’t need to be hard or nerve-wracking, or make you feel like you’re doomed to fail.

We’ve put together 12 tips to help with peer review, and you can learn the entire process with our free peer review training course, the Web of Science Academy. This on-demand, practical course comes with one-to-one support from your own mentor. You’ll have exclusive access to our peer review template and plenty of expert review examples to learn from, and by the end of it, you’ll not only be a certified reviewer, but we’ll also help put you in front of editors in your field.

The peer review process

Journal peer review is a critical tool for ensuring the quality and integrity of the research literature. It is the process by which researchers use their expert knowledge of a topic to assess an article for its accuracy and rigor, and to help make sure it builds on and adds to the current literature.

It’s actually a very structured process; it can be learned and improved the more you do it, and you’ll become faster and more confident as time goes on. Soon enough, you’ll even start benefiting from the process yourself.

Peer review not only helps to maintain the quality and integrity of literature in your field, it’s key to your own development as a researcher. It’s a great way to keep abreast of current research, impress editors at elite journals, and hone your critical analysis skills. It teaches you how to review a manuscript, spot common flaws in research papers, and improve your own chances of being a successful published author.

12-step guide to writing a peer review

To get the most out of the peer review process, you’ll want to keep some best practice tips and techniques in mind from the start. This will help you write a review around two to three pages (four maximum) in length.

We asked an expert panel of researchers what steps they take to ensure a thorough and robust review. We then compiled their advice into 12 easy steps, with links to blog posts for further information:

1)   Make sure you have the right expertise.  Check out our post,  Are you the right reviewer?  for our checklist to assess whether you should take on a certain peer review request.

2)   Visit the journal web page to learn their reviewer-specific instructions.  Check the manuscript fits in the journal format and the references are standardised (if the editor has not already done so).

3)   Skim the paper very quickly to get a general sense of the article.  Underline key words and arguments, and summarise key points. This will help you quickly “tune in” to the paper during the next read.

4)   Sit in a quiet place and read the manuscript critically.  Make sure you have the tables, figures and references visible. Ask yourself key questions, including: Does it have a relevant title and valuable research question? Are key papers referenced? What’s the author’s motivation for the study and the idea behind it? Are the data and tools suitable and correct? What’s new about it? Why does that matter? Are there other considerations? Find out more in our  12-step guide to critically reviewing a manuscript .

5)   Take notes about the major, moderate and minor revisions that need to be made. You need to make sure you can put the paper down and come back to it with fresh eyes later on. Note-taking is essential for this.

6)   Are there any methodological concerns or common research errors?  Check out our guide for  common research flaws to watch out for .

7)   Create a list of things to check.  For example, does the referenced study actually show what is claimed in the paper?

8)   Assess language and grammar, and make sure it’s the right ‘fit’ for the journal.  Does the paper flow? Does it have connectivity? Does it have clarity? Are the words and structure concise and effective?

9)   Is it new research?  Check previous publications of the authors and of other authors in the field to be sure that the results were not published before.

10)   Summarise your notes for the editor.  This can include overview, contribution, strengths & weaknesses, and acceptability. You can also include the manuscript’s contribution/context for the authors (really just to clarify whether you view it similarly, or not), then prioritise and collate the major revisions and minor/specific revisions into feedback. Try to compile this in a logical way, grouping similar things under a common heading where possible, and numbering them for ease of reference.

11)   Give specific recommendations to the authors for changes.  What do you want them to work on? Suggest concrete changes in the manuscript that the authors can realistically make.

12)  Give your recommendation to the editor.

We hope these 12 steps help get you on your way with your first peer review, or improve the structure of your current reviews. And remember, if you’d like to master the skills involved in peer review and get access to our Peer Review Template, sign up for our Web of Science Academy.

Our expert panel of reviewers includes: Ana Marie Florea (Heinrich-Heine-Universität Düsseldorf), James Cotter (University of Otago), and Robert Faff (University of Queensland). These reviewers are all recipients of the Global Peer Review Awards powered by Publons. They also boast hundreds of pre-publication peer reviews for more than 100 different journals and sit on numerous editorial boards.


CAREER COLUMN | 08 October 2018

How to write a thorough peer review

Mathew Stiller-Reeve

Mathew Stiller-Reeve is a climate researcher at NORCE/Bjerknes Centre for Climate Research in Bergen, Norway, the leader of SciSnack.com, and a thematic editor at Geoscience Communication.

Scientists do not receive enough peer-review training. To improve this situation, a small group of editors and I developed a peer-review workflow to guide reviewers in delivering useful and thorough analyses that can really help authors to improve their papers.

doi: https://doi.org/10.1038/d41586-018-06991-0

This is an article from the Nature Careers Community, a place for Nature readers to share their professional experiences and advice. Guest posts are encouraged. You can get in touch with the editor at [email protected].

What Is Peer Review? | Types & Examples

Published on December 17, 2021 by Tegan George. Revised on June 22, 2023.

Peer review, sometimes referred to as refereeing, is the process of evaluating submissions to an academic journal. Using strict criteria, a panel of reviewers in the same subject area decides whether to accept each submission for publication.

Peer-reviewed articles are considered a highly credible source due to the stringent process they go through before publication.

There are various types of peer review. The main difference between them is to what extent the authors, reviewers, and editors know each other’s identities. The most common types are:

  • Single-blind review
  • Double-blind review
  • Triple-blind review
  • Collaborative review
  • Open review

Relatedly, peer assessment is a process where your peers provide you with feedback on something you’ve written, based on a set of criteria or benchmarks from an instructor. They then give constructive feedback, compliments, or guidance to help you improve your draft.

Table of contents

  • What is the purpose of peer review?
  • Types of peer review
  • The peer review process
  • Providing feedback to your peers
  • Peer review example
  • Advantages of peer review
  • Criticisms of peer review
  • Other interesting articles
  • Frequently asked questions about peer reviews

What is the purpose of peer review?

Many academic fields use peer review, largely to determine whether a manuscript is suitable for publication. Peer review enhances the credibility of the manuscript. For this reason, academic journals are among the most credible sources you can refer to.

However, peer review is also common in non-academic settings. The United Nations, the European Union, and many individual nations use peer review to evaluate grant applications. It is also widely used in medical and health-related fields as a teaching or quality-of-care measure.

Peer assessment is often used in the classroom as a pedagogical tool. Both receiving feedback and providing it are thought to enhance the learning process, helping students think critically and collaboratively.

Types of peer review

Depending on the journal, there are several types of peer review.

Single-blind peer review

The most common type of peer review is single-blind (or single anonymized) review. Here, the names of the reviewers are not known by the author.

While this gives the reviewers the ability to give feedback without the possibility of interference from the author, there has been substantial criticism of this method in the last few years. Many argue that single-blind reviewing can lead to poaching or intellectual theft or that anonymized comments cause reviewers to be too harsh.

Double-blind peer review

In double-blind (or double anonymized) review, both the author and the reviewers are anonymous.

Arguments for double-blind review highlight that this mitigates any risk of prejudice on the side of the reviewer, while protecting the nature of the process. In theory, it also leads to manuscripts being published on merit rather than on the reputation of the author.

Triple-blind peer review

While triple-blind (or triple anonymized) review, where the identities of the author, reviewers, and editors are all anonymized, does exist, it is difficult to carry out in practice.

Proponents of adopting triple-blind review for journal submissions argue that it minimizes potential conflicts of interest and biases. However, ensuring anonymity is logistically challenging, and current editing software is not always able to fully anonymize everyone involved in the process.

Collaborative peer review

In collaborative review, authors and reviewers interact with each other directly throughout the process. However, the identity of the reviewer is not known to the author. This gives all parties the opportunity to resolve any inconsistencies or contradictions in real time, and provides them a rich forum for discussion. It can mitigate the need for multiple rounds of editing and minimize back-and-forth.

Collaborative review can be time- and resource-intensive for the journal, however. For these collaborations to occur, there has to be a set system in place, often a technological platform, with staff monitoring and fixing any bugs or glitches.

Open peer review

Lastly, in open review, all parties know each other’s identities throughout the process. Often, open review can also include feedback from a larger audience, such as an online forum, or reviewer feedback included as part of the final published product.

While many argue that greater transparency prevents plagiarism or unnecessary harshness, there is also concern about the quality of future scholarship if reviewers feel they have to censor their comments.

The peer review process

In general, the peer review process includes the following steps:

  • First, the author submits the manuscript to the editor.
  • Then, the editor decides whether to:
  • Reject the manuscript and send it back to the author, or
  • Send it onward to the selected peer reviewer(s)
  • Next, the peer review process occurs. The reviewer provides feedback, addressing any major or minor issues with the manuscript, and gives their advice regarding what edits should be made.
  • Lastly, the edited manuscript is sent back to the author. They input the edits and resubmit it to the editor for publication.

In an effort to be transparent, many journals are now disclosing who reviewed each article in the published product. There are also increasing opportunities for collaboration and feedback, with some journals allowing open communication between reviewers and authors.

Providing feedback to your peers

It can seem daunting at first to conduct a peer review or peer assessment. If you’re not sure where to start, there are several best practices you can use.

Summarize the argument in your own words

Summarizing the main argument helps the author see how their argument is interpreted by readers, and gives you a jumping-off point for providing feedback. If you’re having trouble doing this, it’s a sign that the argument needs to be clearer, more concise, or worded differently.

If the author sees that you’ve interpreted their argument differently than they intended, they have an opportunity to address any misunderstandings when they get the manuscript back.

Separate your feedback into major and minor issues

It can be challenging to keep feedback organized. One strategy is to start out with any major issues and then flow into the more minor points. It’s often helpful to keep your feedback in a numbered list, so the author has concrete points to refer back to.

Major issues typically consist of any problems with the style, flow, or key points of the manuscript. Minor issues include spelling errors, citation errors, or other smaller, easy-to-apply feedback.

Tip: Try not to focus too much on the minor issues. If the manuscript has a lot of typos, consider making a note that the author should address spelling and grammar issues, rather than going through and fixing each one.

The best feedback you can provide is anything that helps them strengthen their argument or resolve major stylistic issues.

Give the type of feedback that you would like to receive

No one likes being criticized, and it can be difficult to give honest feedback without sounding overly harsh or critical. One strategy you can use here is the “compliment sandwich,” where you “sandwich” your constructive criticism between two compliments.

Be sure you are giving concrete, actionable feedback that will help the author submit a successful final draft. While you shouldn’t tell them exactly what they should do, your feedback should help them resolve any issues they may have overlooked.

As a rule of thumb, your feedback should be:

  • Easy to understand
  • Constructive

Peer review example

Below is a brief annotated research example.

Influence of phone use on sleep

Studies show that teens from the US are getting less sleep than they were a decade ago (Johnson, 2019). On average, teens only slept for 6 hours a night in 2021, compared to 8 hours a night in 2011. Johnson mentions several potential causes, such as increased anxiety, changed diets, and increased phone use.

The current study focuses on the effect phone use before bedtime has on the number of hours of sleep teens are getting.

For this study, a sample of 300 teens was recruited using social media, such as Facebook, Instagram, and Snapchat. The first week, all teens were allowed to use their phone the way they normally would, in order to obtain a baseline.

The sample was then divided into 3 groups:

  • Group 1 was not allowed to use their phone before bedtime.
  • Group 2 used their phone for 1 hour before bedtime.
  • Group 3 used their phone for 3 hours before bedtime.

All participants were asked to go to sleep around 10 p.m. to control for variation in bedtime. In the morning, their Fitbit showed the number of hours they’d slept. They kept track of these numbers themselves for 1 week.

Two independent t tests were used in order to compare Group 1 and Group 2, and Group 1 and Group 3. The first t test showed no significant difference (p > .05) between the number of hours for Group 1 (M = 7.8, SD = 0.6) and Group 2 (M = 7.0, SD = 0.8). The second t test showed a significant difference (p < .01) between the number of hours for Group 1 (M = 7.8, SD = 0.6) and Group 3 (M = 6.1, SD = 1.5).

This shows that teens sleep fewer hours a night if they use their phone for over an hour before bedtime, compared to teens who use their phone for 0 to 1 hours.
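
One concrete check a reviewer can run on a study like this is to recompute the reported statistics from the summary values given in the text. Below is a minimal Python sketch of that idea; it assumes the SciPy library is available and that the 300 teens were split evenly into groups of 100 (a detail implied but not stated above), so treat it as an illustration of the verification step, not as part of the study itself.

    from scipy import stats

    # Summary statistics as reported in the example above.
    # Group size is an assumption: 300 teens split evenly into 3 groups.
    n = 100
    g1 = (7.8, 0.6)  # no phone before bedtime: (mean, SD)
    g2 = (7.0, 0.8)  # 1 hour of phone use
    g3 = (6.1, 1.5)  # 3 hours of phone use

    # Recompute both independent t tests from the summary statistics alone.
    t12, p12 = stats.ttest_ind_from_stats(g1[0], g1[1], n, g2[0], g2[1], n)
    t13, p13 = stats.ttest_ind_from_stats(g1[0], g1[1], n, g3[0], g3[1], n)

    print(f"Group 1 vs Group 2: t = {t12:.2f}, p = {p12:.2g}")
    print(f"Group 1 vs Group 3: t = {t13:.2f}, p = {p13:.2g}")

If the p-values you recompute disagree with the ones reported in a manuscript, that discrepancy itself belongs in your review, usually as a major issue.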

Advantages of peer review

Peer review is an established and hallowed process in academia, dating back hundreds of years. It provides various fields of study with metrics, expectations, and guidance to ensure published work is consistent with predetermined standards.

  • Protects the quality of published research

Peer review can stop obviously problematic, falsified, or otherwise untrustworthy research from being published. Any content that raises red flags for reviewers can be closely examined in the review stage, preventing plagiarized or duplicated research from being published.

  • Gives you access to feedback from experts in your field

Peer review represents an excellent opportunity to get feedback from renowned experts in your field and to improve your writing through their feedback and guidance. Experts with knowledge about your subject matter can give you feedback on both style and content, and they may also suggest avenues for further research that you hadn’t yet considered.

  • Helps you identify any weaknesses in your argument

Peer review acts as a first defense, helping you ensure your argument is clear and that there are no gaps, vague terms, or unanswered questions for readers who weren’t involved in the research process. This way, you’ll end up with a more robust, more cohesive article.

Criticisms of peer review

While peer review is a widely accepted metric for credibility, it’s not without its drawbacks.

  • Reviewer bias

Double-blind review is not yet very common, which can lead to bias in reviewing: when reviewers know who the authors are, an excellent paper by a new researcher may be declined, while an objectively lower-quality submission by an established researcher would be accepted.

  • Delays in publication

The thoroughness of the peer review process can lead to significant delays in publishing time. Research that was current at the time of submission may not be as current by the time it’s published. There is also high risk of publication bias , where journals are more likely to publish studies with positive findings than studies with negative findings.

  • Risk of human error

By its very nature, peer review carries a risk of human error. In particular, falsification often cannot be detected, given that reviewers would have to replicate entire experiments to ensure the validity of results.

Other interesting articles

If you want to know more about statistics, methodology, or research bias, make sure to check out some of our other articles with explanations and examples.

  • Normal distribution
  • Measures of central tendency
  • Chi square tests
  • Confidence interval
  • Quartiles & Quantiles
  • Cluster sampling
  • Stratified sampling
  • Thematic analysis
  • Discourse analysis
  • Cohort study
  • Ethnography

Research bias

  • Implicit bias
  • Cognitive bias
  • Conformity bias
  • Hawthorne effect
  • Availability heuristic
  • Attrition bias
  • Social desirability bias

Frequently asked questions about peer reviews

Peer review is a process of evaluating submissions to an academic journal. Utilizing rigorous criteria, a panel of reviewers in the same subject area decides whether to accept each submission for publication. For this reason, academic journals are often considered among the most credible sources you can use in a research project, provided that the journal itself is trustworthy and well-regarded.

In general, the peer review process follows these steps:

  • First, the author submits the manuscript to the editor.
  • Then, the editor decides whether to:
  • Reject the manuscript and send it back to the author, or
  • Send it onward to the selected peer reviewer(s)
  • Next, the peer review process occurs. The reviewer provides feedback, addressing any major or minor issues with the manuscript, and gives their advice regarding what edits should be made.
  • Lastly, the edited manuscript is sent back to the author. They input the edits and resubmit it to the editor for publication.

Peer review can stop obviously problematic, falsified, or otherwise untrustworthy research from being published. It also represents an excellent opportunity to get feedback from renowned experts in your field. It acts as a first defense, helping you ensure your argument is clear and that there are no gaps, vague terms, or unanswered questions for readers who weren’t involved in the research process.

Peer-reviewed articles are considered a highly credible source due to the stringent process they go through before publication.

Many academic fields use peer review, largely to determine whether a manuscript is suitable for publication. Peer review enhances the credibility of the published manuscript.

However, peer review is also common in non-academic settings. The United Nations, the European Union, and many individual nations use peer review to evaluate grant applications. It is also widely used in medical and health-related fields as a teaching or quality-of-care measure. 

A credible source should pass the CRAAP test and follow these guidelines:

  • The information should be up to date and current.
  • The author and publication should be a trusted authority on the subject you are researching.
  • The sources the author cited should be easy to find, clear, and unbiased.
  • For a web source, the URL and layout should signify that it is trustworthy.

Cite this Scribbr article

George, T. (2023, June 22). What Is Peer Review? | Types & Examples. Scribbr. Retrieved February 19, 2024, from https://www.scribbr.com/methodology/peer-review/


How to perform a peer review

You’ve received or accepted an invitation to review an article. Now the work begins. Here are some guidelines and a step-by-step guide to help you conduct your peer review.

General and Ethical Guidelines

Step by Step Guide to Reviewing a Manuscript

Top Tips for Peer Reviewers

Working with Editors

Reviewing Revised Manuscripts

Tips for Reviewing a Clinical Manuscript

Reviewing Registered Reports

Tips for Reviewing Rich Media

Reviewing for Sound Science

How to Write and Publish a Research Paper for a Peer-Reviewed Journal

Affiliations.

  • 1 Department of Maternal and Child Health, University of North Carolina Gillings School of Global Public Health, 135 Dauer Dr, 27599, Chapel Hill, NC, USA.
  • 2 Department of Maternal and Child Health, University of North Carolina Gillings School of Global Public Health, 135 Dauer Dr, 27599, Chapel Hill, NC, USA. [email protected].
  • 3 Department of Epidemiology, University of Michigan School of Public Health, 1415 Washington Heights, Ann Arbor, MI, 48109-2029, USA. [email protected].
  • PMID: 32356250
  • PMCID: PMC8520870
  • DOI: 10.1007/s13187-020-01751-z

Communicating research findings is an essential step in the research process. Often, peer-reviewed journals are the forum for such communication, yet many researchers are never taught how to write a publishable scientific paper. In this article, we explain the basic structure of a scientific paper and describe the information that should be included in each section. We also identify common pitfalls for each section and recommend strategies to avoid them. Further, we give advice about target journal selection and authorship. In the online resource 1, we provide an example of a high-quality scientific paper, with annotations identifying the elements we describe in this article.

Keywords: Manuscripts; Publishing; Scientific writing.

© 2020. The Author(s).


3.3 How to peer review a review article

Matt Pavlovich

About this video

Peer review is an essential part of publishing review articles, just like it is for research articles. Yet, without any experimental procedures to evaluate, data to assess, or conclusions to interpret, it isn’t always clear what you’re being asked to comment on or what standards you might use to recommend a manuscript for publication.

This module will explain what an editor looks for in a review of a review article and what your comments should focus on to be as helpful to the authors and the editor as possible. It will explain what the reviewer selection process looks like for Cell Press’s Trends review journals and why you should accept that invitation to review a review article.

About the presenter

Editor of Trends in Biotechnology, Cell Press

Matt Pavlovich is the editor of Trends in Biotechnology, Cell Press’s home for reviews in applied biology. He earned his BS in chemical engineering from the Georgia Institute of Technology and his PhD in chemical engineering from the University of California, Berkeley, where he studied the biological effects of air plasmas. He studied analytical chemistry as a postdoctoral researcher at Northeastern University, then joined Cell Press at the start of 2016. Matt is a senior manager in the Trends group and a part of the editorial teams for the Cell Press Podcast and Cell Mentor.

How to Write Peer-Reviewed Journal Articles

One of the more valuable, if often overlooked, elements of resident education is the intellectual challenge associated with critically evaluating published research. Typically, peer-reviewed journal articles center on evidence-based clinical ophthalmology that has been screened for errors in method, misuse of statistical analysis and overgeneralization of results. And the more that residents are a part of this process — actively participating in expressing critiques, offering alternative opinions and correcting colleagues — the better they will be equipped to engage in the larger professional conversation and thus advance the field of ophthalmology.

This two-part series will look at how residents can start participating in this process by (1) hosting discussion through a journal club, (2) engaging in the peer-review method and (3) contributing to the discussion by writing. This month, YO Info will explore writing and provide tips on how residents can optimize their chances of having an article accepted for publication in a peer-reviewed journal, with insight from Henry D. Jampel, MD, MHS. The deputy editor in chief of Ophthalmology, Dr. Jampel is also the Odd Fellows Professor of Ophthalmology at the Wilmer Eye Institute; his career encompasses both laboratory and clinical research.

Is publishing for me? Writing peer-reviewed articles and understanding the publication process can be essential for a career in ophthalmology. Conceiving a topic, executing the investigation and synthesizing the results can be an excellent complement to the weekly routine of clinic and surgery. “All young ophthalmologists should consider writing a peer-reviewed article,” said Dr. Jampel. “This variety is important in keeping a professional life interesting. Furthermore, the sense of accomplishment from having produced a piece of work that will be widely viewed, and that potentially can positively influence patient care, is satisfying.”

Do the prep. “The prerequisite for preparing an original article is having an observation or series of observations that, if disseminated, will add to the knowledge base of a community of readers,” said Dr. Jampel. “Therefore, one should not consider starting until the observations have been made.” He suggests looking at two broad categories of sources for observations: a planned prospective trial and an unusual observation made in the course of clinical care. An example of the former is hypothesizing that drug A was more effective than drug B for a particular disease and then developing a controlled trial as a test. An example of the latter might be a unique complication in several patients stemming from an alteration in surgical technique.

But is my idea strong enough? “If one has some objectivity about one’s own work,” said Dr. Jampel, “it can be compared with published manuscripts in your journal of choice.” Therefore, ask yourself some honest questions: Does your topic possess as broad an appeal as the majority of published manuscripts? Is your study design as strong? Will the number of subjects in your study compare well? If you find yourself answering “no” to these questions, there’s a good chance that your manuscript is not suitable for that particular publication. Fear not, though; a second opinion is always helpful. Dr. Jampel recommends contacting other experts in the field who are willing to review your manuscript and provide additional feedback about your work’s potential merit.

How to begin. Consult first with individuals who have experience in clinical research and publishing so that you can determine if your idea is original, interesting and, of course, feasible. Dr. Jampel also suggests consulting with a biostatistician if you anticipate that a prospective trial is in order. And remember to start the process early: almost all clinical research requires evaluation by an institutional review board.

In most cases, manuscripts are written in their entirety prior to submitting; however, some journal editors will review your abstract and advise on whether or not that particular journal is an appropriate place for your work. Writing does not always have to be a solo adventure either. According to Dr. Jampel, coauthors can play an essential role, particularly if you are new to the process: “The presence of a prominent coauthor can lend credibility to a submission, provided that it is clear that the senior author played an important role in the conception, execution and interpretation of the project.”

It’s paramount to give yourself the necessary amount of time to put forward the best product. Because you will be busy with a number of other tasks while pursuing your clinical research, submitting even one publication can be a lengthy process. “Given that most of our fellows can’t complete a project and have it published in the course of a year-long fellowship,” Dr. Jampel said, “I think that a more realistic time line for a young ophthalmologist in residency training is two years.”

Rejection guaranteed. Because most journals receive far more manuscripts than can be published, you should expect that your submission will be rejected. However, in many circumstances, the publication’s reviewers will provide suggestions for improvement, which can help increase the likelihood of the paper being accepted by another publication. “It’s expected that a manuscript rejected by one journal will be submitted to — and eventually accepted by — another journal,” Dr. Jampel noted. “However, an individual manuscript cannot be submitted to more than one journal at a time.”

It’s safe to say that you can’t ever predict how the review process will turn out. “I submitted one manuscript that was almost flat out rejected by a leading journal,” said Dr. Jampel, “and I was asked to revise it extensively, and then it might gain acceptance. Eventually the manuscript was accepted, but I assumed it would be considered near the bottom in quality of the articles in that journal. I was pleasantly surprised when it was selected by the editor as one of the most noteworthy articles of the year in that journal. The mottos are ‘don’t give up’ and ‘you never know.’”

Interested in submitting an article? Each journal has very specific instructions — and can offer unique practice advice — about how to submit your article for publication. Although you might not have a finalized manuscript ready at hand, perusing and comparing a handful of journals’ author instructions can serve as good prep for when you are ready to take the next step.

Here’s a sample:

  • Ophthalmology
  • American Journal of Ophthalmology
  • British Journal of Ophthalmology
  • JAMA Ophthalmology
  • Journal of Refractive Surgery
  • Survey of Ophthalmology

Next month we’ll look at hosting a journal club.

About the author: Mike Mott is a former assistant editor for EyeNet Magazine and contributing writer for YO Info.



PhD2Published has several informative posts about writing journal articles, and more recently has featured a post outlining a potentially revolutionary collaborative peer review process for this kind of publishing. Today’s post offers an alternative perspective: that of the journal article peer reviewer. Doing peer reviews provides important experience for those writing their own papers and may help writers consider what they should include, based on what peer reviewers are looking for.

At some point in your scholarly career, you likely will get asked to review an article for a journal. In this post, I explain how I usually go about doing a peer review. I imagine that each scholar has their own way of doing this, but it might be helpful to talk openly about this task, which we generally complete in isolation.

Step One: Accept the invitation to peer review. The first step in reviewing a journal article is to accept the invitation. When deciding whether or not to accept, take into consideration three things: 1) Do you have time to do the review by the deadline? 2) Is the article within your area of expertise? 3) Are you sure you will complete the review by the deadline? Once you accept the invitation, set aside some time in your schedule to read the article and write the review.

Step Two: Read the article . I usually read the article with a pen in hand so that I can write my thoughts in the margins as I read. As I read, I underline parts of the article that seem important, write down any questions I have, and correct any mistakes I notice.

Step Three: Write a brief summary of the article and its contribution . When I am doing a peer review, I sometimes do it all in one sitting – which will take me about two hours – or I read it one day and write it the next. Often, I prefer to do the latter to give myself some time to think about the article and to process my thoughts. When writing a draft of the review, the first thing I do is summarize the article as best I can in three to four sentences. If I think favorably of the article and believe it should be published, I often will write a longer summary, and highlight the strengths of the article. Remember that even if you don’t have any (or very many) criticisms, you still need to write a review. Your critique and accolades may help convince the editor of the importance of the article. As you write up this summary, take into consideration the suitability of the article for the journal. If you are reviewing for the top journal in your field, for example, an article simply being factually correct and having a sound analysis is not enough for it to be published in that journal. Instead, it would need to change the way we think about some aspect of your field.

Step Four: Write out your major criticisms of the article . When doing a peer review, I usually begin with the larger issues and end with minutiae. Here are some major areas of criticism to consider:

  • Is the article well-organized?
  • Does the article contain all of the components you would expect (Introduction, Methods, Theory, Analysis, etc.)?
  • Are the sections well-developed?
  • Does the author do a good job of synthesizing the literature?
  • Does the author answer the questions he/she sets out to answer?
  • Is the methodology clearly explained?
  • Does the theory connect to the data?
  • Is the article well-written and easy to understand?
  • Are you convinced by the author’s results? Why or why not?

Step Five: Write out any minor criticisms of the article. Once you have laid out the pros and cons of the article, it is perfectly acceptable (and often welcome) for you to point out that the table on page 3 is mislabeled, that the author wrote “compliment” instead of “complement” on page 7, or other minutiae. Correcting those minor errors will make the author’s paper look more professional if it goes out for another peer review, and they certainly will have to be corrected before the paper is accepted for publication.

Step Six: Review . Go over your review and make sure that it makes sense and that you are communicating your critiques and suggestions in as helpful a way as possible.

Finally, I will say that, when writing a review, be mindful that you are critiquing the article in question – not the author. Thus, make sure your critiques are constructive. For example, it is not appropriate to write: “The author clearly has not read any Foucault.” Instead, say: “The analysis of Foucault is not as developed as I would expect to see in an academic journal article.” Also, be careful not to write: “The author is a poor writer.” Instead, you can say: “This article would benefit from a close editing. I found it difficult to follow the author’s argument due to the many stylistic and grammatical errors.” Although you are an anonymous reviewer, the Editor knows who you are, and it never looks good when you make personal attacks on others. So, in addition to being nice, it is in your best interest.

Tanya Golash-Boza is Associate Professor of Sociology and American Studies at the University of Kansas. She tweets as @tanyagolashboza and has her own website.


gym said on August 5, 2012

Excellent post. I was checking constantly this blog and I am impressed! Extremely helpful information particularly the last part : ) I care for such info a lot. I was looking for this certain info for a very long time. Thank you and good luck. Check out my website to get more info about bodybuilding, if you like.


Reviewing review articles

A review article is written to summarize the current state of understanding on a topic, and peer reviewing these articles requires a slightly different set of criteria than empirical articles. Unless it is a systematic review or meta-analysis, methods are typically not reported in detail and are less central to the evaluation. The quality of a review article can be judged on aspects such as timeliness, the breadth and accuracy of the discussion, and whether it indicates the best avenues for future research. A review article should present an unbiased summary of the current understanding of the topic, so the peer reviewer must assess the selection of studies cited by the paper. Because a review article contains a large amount of detailed information, its structure and flow are also important.


Professional & Academic Writing for STEM Graduate Students & Faculty

Introduction

This page has information and resources on authoring journal articles and choosing a journal to submit to. Since journal articles are by far the most common venue for submitting research, this page will be the most detailed.

Note: If you are a graduate student, ask your supervisor/advisor if there are expectations for where you submit. 

Recommended Ebooks

  • Publishing Journal Articles: a scientific guide for new authors worldwide
  • Writing for Peer Reviewed Journals (limited to 3 users at a time)
  • Write It Up: practical strategies for writing and publishing journal articles (limited to 1 user at a time)
  • Publishing in ASCE Journals
  • Writing for Academic Journals

To which journal should I submit?

The short answer is: the best journal for your topic. 

If you need to identify a journal, the easiest approach is to search a library database for your topic and see which journals publish on it most often. Here are some steps for doing this:

  • Scopus or Web of Science are good for most topics, but may miss some niche and some society journals.
  • Subject-specific databases, such as MathSciNet (mathematics and statistics) and GeoRef (earth sciences), will surface more titles in a discipline. Not every subject area has its own database; see our subject guides to find other options.
  • If you're writing on education topics within any discipline, look at ERIC. Be sure to select your targeted education level below the search boxes since ERIC covers education at all levels.
  • If you're after journals specifically, consider applying a journal-article limit before or after searching, since most databases also cover conference papers and books. Also consider language limits.
  • After completing your search, most databases will list the source titles, so you can see which titles publish most heavily on your topic. You can either look more closely at those items in the database or explore a journal further at the publisher's site. In several of the databases mentioned above, these lists appear on the left side of the screen after searching (for a scripted alternative, see the sketch below the screenshot):

[Screenshot: example source-title lists from database search results]
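
If you would rather script this discovery step, many databases expose search APIs. Below is a minimal, illustrative Python sketch against the Elsevier Scopus Search API; the endpoint and `X-ELS-APIKey` header are the documented ones, but the API key, topic, and query shown are placeholder assumptions you would adapt, and your institution's access terms apply.

```python
# Tally which journals publish most heavily on a topic via the
# Scopus Search API. Requires an Elsevier API key (dev.elsevier.com);
# the key and topic below are placeholders.
from collections import Counter

import requests

API_KEY = "YOUR-ELSEVIER-API-KEY"  # placeholder
SCOPUS_URL = "https://api.elsevier.com/content/search/scopus"

def journal_counts(topic: str, pages: int = 4, page_size: int = 25) -> Counter:
    """Count source titles among the first pages * page_size results."""
    counts: Counter = Counter()
    for page in range(pages):
        resp = requests.get(
            SCOPUS_URL,
            headers={"X-ELS-APIKey": API_KEY},
            params={
                "query": f"TITLE-ABS-KEY({topic})",  # topic in title/abstract/keywords
                "start": page * page_size,
                "count": page_size,
            },
            timeout=30,
        )
        resp.raise_for_status()
        entries = resp.json().get("search-results", {}).get("entry", [])
        # "prism:publicationName" holds the source (journal) title.
        counts.update(e.get("prism:publicationName", "Unknown") for e in entries)
    return counts

if __name__ == "__main__":
    for journal, n in journal_counts("river sediment transport").most_common(10):
        print(f"{n:3d}  {journal}")
```

The same tallying idea works with exports from Web of Science or ERIC: export your search results and count the source-title column.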

These are journals that may be good fits for your planned publication. Your next step is to find the homepage for the journal and learn more about its requirements, scope, etc. You can choose to do a simple web search (Google, Bing) or you can look at our  Online Journals List . Note that there may be journals with similar or in some cases, the same name.

Look for a section such as "About," "Scope," "Submit," or "For Authors" to find out info about the journal and requirements.

If you have a specific journal in mind, look for an author template and citation style requirements so you can begin your writing with the correct format and style and not have to make changes later.

What is an impact factor?

Many people want to target journals with high impact factors for their submission. Impact factor is a proprietary number calculated and made available via Journal Citation Reports. This tool lets you see how well a journal is cited and view titles ranked by discipline (a worked form of the calculation appears after the list below). Some caveats about this information:

  • If a journal is not in this tool, it does not have an impact factor as this number is proprietary.
  • If a journal does not have an impact factor, that does not mean it is not a good journal. It may be from a niche publisher, a new journal, or there may be other reasons for this.
  • There are other places to get impact information for journals: Scopus and MathSciNet, for example, collect their own citation data and impact values, and there are also free online options such as Eigenfactor.org.
  • These numbers vary by discipline. For example, a high impact factor in one discipline may be a middle or even low one in another.
  • Don't submit to a journal just because it has great metrics. It may not be the best fit for your topic. Many high-impact-factor titles are also very selective, favoring content they consider "groundbreaking."
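
For context, the standard two-year calculation behind the number is a simple ratio; the sketch below writes it out, with made-up counts purely for illustration:

```latex
% Two-year Journal Impact Factor for year Y
% (1,200 and 400 are made-up illustrative counts, not real journal data):
\mathrm{JIF}_Y
  = \frac{\text{citations in } Y \text{ to items published in } Y-1 \text{ and } Y-2}
         {\text{citable items published in } Y-1 \text{ and } Y-2}
  \qquad \text{e.g., } \frac{1200}{400} = 3.0
```

The caveats above still apply: the ratio says nothing about how well a journal fits your topic.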

What are predatory journals?

Predatory journals exist primarily to collect fees and show little or no concern for quality. They often send out mass emails, not necessarily targeted to a specific audience, to solicit submissions. Beall's List is one of the most popular sites for finding more information about specific titles and publishers. However, the list is not perfect, as new titles appear all the time. When in doubt, feel free to contact me or an appropriate subject librarian.

A few things to consider when looking at a journal:

  • Who is on the editorial board? Are these respected researchers in the field? Are they from appropriate institutions (universities, research labs, etc.)? It can be helpful to see who is publishing in the journal as well.
  • Is the publisher reasonably well known (a major commercial publisher, a college/university, or a society)? Don't discount small publishers, though.
  • Does something look "off" to you? For example, is content missing, are there large fees, is peer review not mentioned, etc.?

Can I get help to pay to make my article open access?

Yes! The University Libraries has options to help with getting journal articles open access.  See information about transformative agreements for details, applicable journal titles for each publisher, and the latest information.

We have options for specific publishers, including:

  • Cambridge University Press
  • Elsevier (Hybrid)
  • Institute of Physics (IOP) 
  • Royal Society of Chemistry FAQ
  • Springer (Hybrid)
  • Taylor & Francis

Journal Lists by Discipline for Major Publishers

Below you will see links to journal lists for select major general and subject-specific publishers. From there, you can search and/or browse the available options. There are many more small publishers and independent journals to choose from.

Large Multi-Subject Publishers

  • Oxford University Press
  • Science (AAAS)

Select Discipline-Specific Publishers

  • AGU (published by Wiley)
  • ASM International
  • ASA  (published by Taylor & Francis)

Interested in open access only journals? Visit DOAJ and search the titles or filter by subject on the list. Be sure to evaluate the journals on this list as they are from many different sources.

Recommended Videos

These YouTube videos from journal publishers cover writing and submitting journal articles.

  • How to Get Published in an Academic Journal  (Sage)
  • What to think about before you start to write a journal article  (Taylor & Francis)
  • Top Ten Tips for writing and submitting a journal article  (IOP)
  • Which APS journal is right for your research?  (APS)
  • Publishing Open Access with Cambridge University Press
  • How to Get Published: Submitting Your Paper  (Sage)
  • ACS Publishing Panel: Publishing advice, tips, and tricks  (ACS)
  • Publishing in SIAM Journals: What and How  (SIAM)
  • Using ACM Word Template: Video Tutorial  (ACM)
  • From the Editor of IEEE Access: How to Get Published in an Open Access Journal  (IEEE)
  • The Journey of an Article at Springer  (Springer)
How to Review a Journal Article


For many kinds of assignments, like a  literature review , you may be asked to offer a critique or review of a journal article. This is an opportunity for you as a scholar to offer your  qualified opinion  and  evaluation  of how another scholar has composed their article, argument, and research. That means you will be expected to go beyond a simple  summary  of the article and evaluate it on a deeper level. As a college student, this might sound intimidating. However, as you engage with the research process, you are becoming immersed in a particular topic, and your insights about the way that topic is presented are valuable and can contribute to the overall conversation surrounding your topic.

IMPORTANT NOTE!!

Some disciplines, like Criminal Justice, may only want you to summarize the article without including your opinion or evaluation. If your assignment is to summarize the article only, please see our literature review handout.

Before getting started on the critique, it is important to review the article thoroughly and critically. To do this, we recommend taking notes, annotating, and reading the article several times before critiquing. As you read, be sure to note important items like the thesis, purpose, research questions, hypotheses, methods, evidence, key findings, major conclusions, tone, and publication information. Depending on your writing context, some of these items may not be applicable.

Questions to Consider

To evaluate a source, consider some of the following questions. They are broken down into categories to help you decide which areas to examine. For each category, we recommend identifying both strengths and weaknesses, since weighing the two is a critical part of evaluation.

Evaluating Purpose and Argument

  • How well is the purpose made clear in the introduction through background/context and thesis?
  • How well does the abstract represent and summarize the article’s major points and argument?
  • How well does the objective of the experiment or of the observation fill a need for the field?
  • How well is the argument/purpose articulated and discussed throughout the body of the text?
  • How well does the discussion maintain cohesion?

Evaluating the Presentation/Organization of Information

  • How appropriate and clear is the title of the article?
  • Where could the author have benefited from expanding, condensing, or omitting ideas?
  • How clear are the author’s statements? Challenge ambiguous statements.
  • What underlying assumptions does the author have, and how does this affect the credibility or clarity of their article?
  • How objective is the author in his or her discussion of the topic?
  • How well does the organization fit the article’s purpose and articulate key goals?

Evaluating Methods

  • How appropriate are the study design and methods for the purposes of the study?
  • How detailed are the methods being described? Is the author leaving out important steps or considerations?
  • Have the procedures been presented in enough detail to enable the reader to duplicate them?

Evaluating Data

  • Scan and spot-check calculations (see the sketch after this list). Are the statistical methods appropriate?
  • Do you find any content repeated or duplicated?
  • How many errors of fact and interpretation does the author include? (You can check on this by looking up the references the author cites).
  • What pertinent literature has the author cited, and have they used this literature appropriately?
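
One concrete way to spot-check reported statistics, in the spirit of tools like statcheck, is to recompute a p-value from the reported test statistic and degrees of freedom. This is a minimal Python sketch assuming SciPy and a two-tailed t-test; the reported values in the example are hypothetical:

```python
# Recompute a two-tailed p-value from a reported t statistic and its
# degrees of freedom, and flag a mismatch with the reported p-value.
from scipy import stats

def check_t_report(t: float, df: int, reported_p: float, tol: float = 0.005) -> bool:
    """Return True if the reported p is consistent with t(df), two-tailed."""
    recomputed_p = 2 * stats.t.sf(abs(t), df)  # survival function = 1 - CDF
    consistent = abs(recomputed_p - reported_p) <= tol
    print(f"t({df}) = {t}: recomputed p = {recomputed_p:.4f}, "
          f"reported p = {reported_p} -> {'OK' if consistent else 'CHECK'}")
    return consistent

# Example: an article reports t(28) = 2.10, p = .04 (hypothetical values).
check_t_report(t=2.10, df=28, reported_p=0.04)
```

A recomputed value that disagrees with the reported one is not proof of error (one-tailed tests and rounding are common), but it tells you where to look more closely.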

Below is an example of a summary and an evaluation of a research article. Note that in most literature review contexts, the summary and evaluation would be much shorter. This extended example shows the different ways a student can critique and write about an article.

Chik, A. (2012). Digital gameplay for autonomous foreign language learning: Gamers’ and language teachers’ perspectives. In H. Reinders (ed.),  Digital games in language learning and teaching  (pp. 95-114). Eastbourne, UK: Palgrave Macmillan.

Be sure to include the full citation either in a reference page or near your evaluation if writing an  annotated bibliography .

In Chik’s article “Digital Gameplay for Autonomous Foreign Language Learning: Gamers’ and Teachers’ Perspectives”, she explores the ways in which “digital gamers manage gaming and gaming-related activities to assume autonomy in their foreign language learning,” (96) which is presented in contrast to how teachers view the “pedagogical potential” of gaming. The research was described as an “umbrella project” consisting of two parts. The first part examined 34 language teachers’ perspectives who had limited experience with gaming (only five stated they played games regularly) (99). Their data was recorded through a survey, class discussion, and a seven-day gaming trial done by six teachers who recorded their reflections through personal blog posts. The second part explored undergraduate gaming habits of ten Hong Kong students who were regular gamers. Their habits were recorded through language learning histories, videotaped gaming sessions, blog entries of gaming practices, group discussion sessions, stimulated recall sessions on gaming videos, interviews with other gamers, and posts from online discussion forums. The research shows that while students recognize the educational potential of games and have seen benefits of it in their lives, the instructors overall do not see the positive impacts of gaming on foreign language learning.

The summary includes the article’s purpose, methods, results, discussion, and citations when necessary.

This article did a good job representing the undergraduate gamers’ voices through extended quotes and stories. Particularly for the data collection of the undergraduate gamers, there were many opportunities for an in-depth examination of their gaming practices and histories. However, the representation of the teachers in this study was very uneven when compared to the students. Not only were teachers labeled as numbers while the students picked out their own pseudonyms, but also when viewing the data collection, the undergraduate students were more closely examined in comparison to the teachers in the study. While the students have fifteen extended quotes describing their experiences in their research section, the teachers only have two of these instances in their section, which shows just how imbalanced the study is when presenting instructor voices.

Some research methods, like the recorded gaming sessions, were only used with students whereas teachers were only asked to blog about their gaming experiences. This creates a richer narrative for the students while also failing to give instructors the chance to have more nuanced perspectives. This lack of nuance also stems from the emphasis of the non-gamer teachers over the gamer teachers. The non-gamer teachers’ perspectives provide a stark contrast to the undergraduate gamer experiences and fits neatly with the narrative of teachers not valuing gaming as an educational tool. However, the study mentioned five teachers that were regular gamers whose perspectives are left to a short section at the end of the presentation of the teachers’ results. This was an opportunity to give the teacher group a more complex story, and the opportunity was entirely missed.

Additionally, the context of this study was not entirely clear. The instructors were recruited through a master’s level course, but the content of the course and the institution’s background is not discussed. Understanding this context helps us understand the course’s purpose(s) and how those purposes may have influenced the ways in which these teachers interpreted and saw games. It was also unclear how Chik was connected to this masters’ class and to the students. Why these particular teachers and students were recruited was not explicitly defined and also has the potential to skew results in a particular direction.

Overall, I was inclined to agree with the idea that students can benefit from language acquisition through gaming while instructors may not see the instructional value, but I believe the way the research was conducted and portrayed in this article made it very difficult to support Chik’s specific findings.

Some professors like you to begin an evaluation with something positive, but this isn't always necessary.

The evaluation is clearly organized and uses transitional phrases when moving to a new topic.

This evaluation includes a summative statement that gives the overall impression of the article at the end, but this can also be placed at the beginning of the evaluation.

This evaluation mainly discusses the representation of data and methods. However, other areas, like organization, are open to critique.

Open access | Published: 16 February 2024

A guide for social science journal editors on easing into open science

  • Priya Silverstein   ORCID: orcid.org/0000-0003-0095-339X 1 , 2 ,
  • Colin Elman   ORCID: orcid.org/0000-0003-1004-4640 3 ,
  • Amanda Montoya   ORCID: orcid.org/0000-0001-9316-8184 4 ,
  • Barbara McGillivray   ORCID: orcid.org/0000-0003-3426-8200 5 ,
  • Charlotte R. Pennington   ORCID: orcid.org/0000-0002-5259-642X 6 ,
  • Chase H. Harrison   ORCID: orcid.org/0009-0009-1849-7397 7 ,
  • Crystal N. Steltenpohl   ORCID: orcid.org/0000-0001-5049-9354 8 ,
  • Jan Philipp Röer   ORCID: orcid.org/0000-0001-7774-3433 9 ,
  • Katherine S. Corker   ORCID: orcid.org/0000-0002-7971-1678 10 ,
  • Lisa M. Charron   ORCID: orcid.org/0000-0002-0719-9695 11 , 12 ,
  • Mahmoud Elsherif   ORCID: orcid.org/0000-0002-0540-3998 13 ,
  • Mario Malicki   ORCID: orcid.org/0000-0003-0698-1930 14 , 15 , 16 ,
  • Rachel Hayes-Harb   ORCID: orcid.org/0000-0003-3115-6289 17 ,
  • Sandra Grinschgl   ORCID: orcid.org/0000-0001-6666-9426 18 ,
  • Tess Neal   ORCID: orcid.org/0000-0002-9528-8638 19 , 20 ,
  • Thomas Rhys Evans   ORCID: orcid.org/0000-0002-6670-0718 21 ,
  • Veli-Matti Karhulahti   ORCID: orcid.org/0000-0003-3709-5341 22 ,
  • William L. D. Krenzer   ORCID: orcid.org/0000-0002-2074-9698 23 ,
  • Anabel Belaus   ORCID: orcid.org/0000-0001-9657-8496 24 ,
  • David Moreau   ORCID: orcid.org/0000-0002-1957-1941 25 ,
  • Debora I. Burin   ORCID: orcid.org/0000-0002-2515-719X 26 , 27 ,
  • Elizabeth Chin   ORCID: orcid.org/0000-0001-8486-6562 28 ,
  • Esther Plomp   ORCID: orcid.org/0000-0003-3625-1357 29 , 30 ,
  • Evan Mayo-Wilson   ORCID: orcid.org/0000-0001-6126-2459 31 ,
  • Jared Lyle   ORCID: orcid.org/0000-0001-8623-7612 32 ,
  • Jonathan M. Adler   ORCID: orcid.org/0000-0001-9050-2116 33 ,
  • Julia G. Bottesini   ORCID: orcid.org/0000-0002-5340-993X 3 ,
  • Katherine M. Lawson   ORCID: orcid.org/0000-0002-4083-9797 34 ,
  • Kathleen Schmidt   ORCID: orcid.org/0000-0002-9946-5953 1 ,
  • Kyrani Reneau   ORCID: orcid.org/0000-0002-6535-3689 32 ,
  • Lars Vilhuber   ORCID: orcid.org/0000-0001-5733-8932 35 ,
  • Ludo Waltman   ORCID: orcid.org/0000-0001-8249-1752 36 ,
  • Morton Ann Gernsbacher   ORCID: orcid.org/0000-0003-0397-3329 37 ,
  • Paul E. Plonski   ORCID: orcid.org/0000-0002-6748-6020 38 ,
  • Sakshi Ghai   ORCID: orcid.org/0000-0002-8488-0273 39 ,
  • Sean Grant   ORCID: orcid.org/0000-0002-7775-3022 40 ,
  • Thu-Mai Christian   ORCID: orcid.org/0000-0002-3658-9692 41 ,
  • William Ngiam   ORCID: orcid.org/0000-0003-3567-3881 42 , 43 &
  • Moin Syed   ORCID: orcid.org/0000-0003-4759-3555 44  

Research Integrity and Peer Review, volume 9, Article number: 2 (2024)


Journal editors have a large amount of power to advance open science in their respective fields by incentivising and mandating open policies and practices at their journals. The Data PASS Journal Editors Discussion Interface (JEDI, an online community for social science journal editors: www.dpjedi.org ) has collated several resources on embedding open science in journal editing ( www.dpjedi.org/resources ). However, it can be overwhelming as an editor new to open science practices to know where to start. For this reason, we created a guide for journal editors on how to get started with open science. The guide outlines steps that editors can take to implement open policies and practices within their journal, and goes through the what, why, how, and worries of each policy and practice. This manuscript introduces and summarizes the guide (full guide: https://doi.org/10.31219/osf.io/hstcx ).


Many research practices that were previously considered acceptable, or even normative, in the social sciences are now widely recognized to work against our collective goal of establishing a cumulative knowledge base rooted in rigorous evidence. Issues with credibility have been documented across different disciplines [ 1 , 2 , 3 , 4 , 5 , 6 , 7 ], and there is increasing awareness that many scientific incentives actively encourage, reward and propagate poor research and statistical methods [ 8 , 9 ]. Indeed, existing scholarly practices have strong roots in a deeply embedded problematic research culture that favors quantity over quality of research products, “positive” results, and flashy findings, all occurring within a hierarchical, status-based system [ 10 ].

To overcome many of these issues, open science (also referred to as “open research” or “open scholarship”) has been advanced as an alternative model for science and one that will better contribute to our collective goals and build a more reproducible scientific knowledge base. Yet, knowledge of open science principles and practices remains uneven across different social scientific constituencies. The purpose of the present article is to focus on a particularly important constituency – journal editors – by providing a guide that will help them to adopt open science practices in their journals.

Open science is a broad term that does not have a single agreed upon definition, with different definitions foregrounding different aspects of the scientific ecosystem. For example, the UNESCO Recommendation on Open Science [ 11 ] adopts a broad definition that highlights the system of knowledge production:

“…open science is defined as an inclusive construct that combines various movements and practices aiming to make multilingual scientific knowledge openly available, accessible and reusable for everyone, to increase scientific collaborations and sharing of information for the benefits of science and society, and to open the processes of scientific knowledge creation, evaluation and communication to societal actors beyond the traditional scientific community. It comprises all scientific disciplines and aspects of scholarly practices, including basic and applied sciences, natural and social sciences and the humanities, and it builds on the following key pillars: open scientific knowledge, open science infrastructures, science communication, open engagement of societal actors and open dialogue with other knowledge systems.” ( https://unesdoc.unesco.org/ark:/48223/pf0000379949 )

In contrast, the Framework for Open and Reproducible Research Training [ 12 ] defines open science more narrowly toward specific behaviors of researchers:

“An umbrella term reflecting the idea that scientific knowledge of all kinds, where appropriate, should be openly accessible, transparent, rigorous, reproducible, replicable, accumulative, and inclusive, all which are considered fundamental features of the scientific endeavor.” [ 13 ].

Underlying these definitional differences are shared values in the conduct and dissemination of science, and the need to move toward the principles and behaviors of open science has been widely recognized across the sciences. Research communities across many disciplines have begun to develop stronger norms inspired by open science, including psychology [ 2 , 14 , 15 , 16 ], genetics [ 17 ], biomedicine [ 18 ], animal behavior [ 4 , 19 ], economics [ 20 , 21 , 22 , 23 , 24 ], education [ 21 , 25 , 26 , 27 , 28 , 29 ], political science [ 30 ], public health [ 31 , 32 ], science and technology studies [ 33 ], scientometrics [ 34 ], and sociology [ 35 , 36 ], among others (see [ 37 ]). Despite some progress, all stakeholders in the system need to do better at adopting and implementing open science practices, and our focus is on how to help editors accomplish this.

Over the last two decades, the operational procedures of scholarly social science have been substantially modified to facilitate the goals of open science [ 10 , 14 , 38 ]. However, the shift toward open science remains a work-in-progress. Recognizing that open science is fundamentally about behaviors, various established theories of behavior change, including the Behaviour Change Wheel [ 39 ] and Theoretical Domains Framework [ 40 ], have been applied to understand how to increase uptake of open science among researchers [ 41 , 42 ] and journal editors [ 43 ]. It is clear that multiple institutional stakeholders – funders, disciplinary associations, data repositories, universities, publishers, preprint servers, and journals – have the capacity to influence how research is conducted by enacting one or more of these strategies [ 44 , 45 ].

Among the different stakeholders, journals are in a strong position to foster open science practices. They are particularly influential institutions in the academic ecosystem because they are a major vehicle for organizing and disseminating academic communications, promoting knowledge, and producing success signals for individual researchers [ 46 ]. This influence has not always been beneficial to science, as journal policies and practices are one source of a problem that open science is meant to address (for example publication bias), but it is precisely this capacity to incentivize and shape scholarly behavior which now offers a broad opportunity to promote transparency and openness.

The degree of power that journal editors have to enact change in policies varies considerably across journals, as publishers and scientific societies often play central roles in setting policies. Nevertheless, journal editors are in a position to be a major influence on policies that can help move disciplines toward more rigorous open science and improve research culture. Most obviously, journals’ mandates to authors can make publication conditional on following open science practices [ 14 ]. Less directly, journals can include processes that endorse, encourage, and reward open science, such as promoting replication, offering the Registered Reports publishing model, and encouraging preprinting. Journals that instantiate open science can also be opinion leaders in their respective disciplines, helping to make the practices more visible and customary.

With this potential in mind, the Transparency and Openness Promotion (TOP) Guidelines [ 14 ] were developed to provide tools (including template policy text) to help journal editors adopt open science policies within their journals. The TOP Guidelines are a resource for editors, covering many areas of open science (data citation; data, materials, and code transparency; design and analysis; preregistration; replication) currently available in English, Spanish, Portuguese, and Finnish ( https://www.cos.io/initiatives/top-guidelines ). In a separate but related initiative, the Center for Open Science (COS) ranks journals on their adherence to these guidelines via the TOP Factor, a metric that has been proposed as an alternative to citation-based metrics such as the Journal Impact Factor [ 47 , 48 ].

Whereas the TOP Guidelines and TOP Factor provide a good deal of information for and about journals, there are at least two gaps that the current paper hopes to fill. First, TOP covers a very limited set of behaviors that, while useful, do not cover the full spectrum of open science practices (for example topics related to open and transparent peer review, open access, and encouraging diversity). Second, although the TOP Guidelines provide information on the different standards (the what, including template policy text), they do not focus on why editors should implement these standards, how editors should implement the procedures and practices that uphold the policies, and the worries (and associated mitigations) they may have about implementing these new procedures and practices. Some recent work has begun to explore these questions (for example [ 43 , 49 ]), but this work has been focused on the limited scope of the TOP Guidelines. Thus, our focus in the present guide is to expand the range of open science considerations for journal editors to the broad spectrum of issues that may be relevant across the social sciences, while still maintaining a connection to TOP where relevant.

Accordingly, the purpose of the present guide is to help editors “ease into open science” by providing information on the what , why , how , and worries associated with adopting a broad range of open science initiatives at their journals (see an example for Registered Reports in Table  1 and a list of key topics covered in Fig.  1 ). This approach was modeled on Kathawalla et al.’s guide [ 50 ] for graduate students and their advisors. We hope that the present article will prove similarly useful for editors given their pivotal role in the scientific ecosystem. Editors are typically overburdened with multiple roles and obligations, including responsibilities as researchers, teachers, managers, and members of their individual scientific communities. Indeed, editorial positions are typically taken on in addition to other “regular” work, often for little or no compensation. Thus, this guide is especially well-catered to the majority of editors who have limited time to dedicate to editing, and even less time for designing and implementing new journal policies and practices.

[Figure 1: Key topics covered in the full guide]

We intend for the present guide to be useful in at least two ways. First, it provides straightforward descriptions of open science policies, procedures, and practices, as well as guidance and signposted resources for how to implement them, with consideration of their potential challenges and costs. When making recommendations, we rely on findings from science studies research where they are available, but not all recommendations are evidence-based. Instead, some of them draw upon our own experiences as authors, peer reviewers, and journal editors. We hope that this guide will help to encourage future empirical studies on the effects of different policy changes (for example randomized controlled trials) where they have not yet been conducted.

Second, we reject an “all or nothing” approach to open science. Different journals will have different needs, resources, audiences, governance structures, and any number of other factors that will determine which open science practices they do or do not want to adopt. The current guide is designed so that editors can follow a “buffet approach” to implementing open science initiatives [ 60 ], whereby editors can pick and choose whatever makes sense for their journal, resources, and field. The number of possible reforms is large and can potentially feel overwhelming, and so we stress the need to “ease in” and adopt reforms as feasible.

Guide development

This guide emerged collaboratively, led by leadership from the Journal Editors Discussion Interface (JEDI; https://dpjedi.org ), a Data Preservation Alliance for the Social Sciences (Data-PASS; http://www.data-pass.org ) initiative. Data-PASS is a voluntary partnership of organizations created to archive, catalog, and preserve data used for social science research. Data-PASS was initially formed with the goal of its members providing a shared catalog of holdings, and serving as alternative venues should a member be unable to continue preserving data. Over time, Data-PASS’s members have also collaboratively developed and implemented additional supporting resources for open science. For example, between 2016 and 2020, Data-PASS held a series of workshops, bringing together social science journal editors and representatives from Data-PASS to discuss issues surrounding open science in journal editing. In 2021, NSF funding facilitated the launch of JEDI – an online forum where social science journal editors can ask and answer questions, share information and expertise, and build a fund of collective knowledge. JEDI is composed of a Google group of several hundred members that functions as a listserv and a collection of resources ( https://dpjedi.org/resources ) compiled from conversations in the group.

While discussion of any editorial function or concern is encouraged, a large focus of conversations on the listserv (and therefore also the resources collection) has been on journal open science initiatives. In May 2022, we held a workshop focused on open science and the future of scholarly publishing that had over 100 registrants (“ May the force be with you: Resources to help journal editors advance their fields ”: https://dpjedi.org/events/may-the-force-be-with-you ). This workshop resulted in many valuable additions to our resource collection. However, due to the bottom-up nature of how the resources had been collected, there was large variation in the amount, quality, and type of content for different topics. Therefore, a gap was identified for a comprehensive guide for social science journal editors on open science initiatives. This guide was originally conceptualized by a small team from JEDI leadership: Priya Silverstein (previous JEDI community manager), Moin Syed (past JEDI steering committee member), and Colin Elman (PI on the NSF grant supporting JEDI, and ex officio steering committee member).

A first draft of this guide was written by Priya Silverstein. This draft formed the starting point for a hackathon (a type of goal-focused participatory workshop; [ 61 ]) at the 2022 annual meeting of the Society for the Improvement of Psychological Science (SIPS) where approximately 20 contributors came together to draft around 30 sections, covering a wide range of journal open science initiatives. This was a largely psychology-focused team, and so following the hackathon, the guide was opened up for contributions from the JEDI steering committee. The JEDI steering committee is composed of 13 invited members: six representatives from the data repositories included in Data-PASS and one editor each from anthropology, criminology, economics, education, political science, psychology, and sociology. After the JEDI steering committee contributed to the guide, it was opened up for contributions from the wider JEDI community of over 400 members, including editors from across the social sciences and “Scholarly Knowledge Builders” (JEDI topic experts in different aspects of open science, metascience of peer review, and publishing). Priya Silverstein took the lead in integrating contributions, comments, and edits until a long-form guide had been finalized (see [ 62 ] for the full guide). The current paper serves as a shortened summary of the full guide, including the initiatives that we believe are relatively easy to implement and/or likely to apply to most (or many) social science journals. JEDI has now received further funding and, as part of its expansion and continuation, the full guide will continue to be expanded and updated regularly.

Summary of the guide for social science journal editors

In this summary, we have grouped the initiatives included in the full guide into three categories: those relating to the principles of Transparency, Credibility, and Accessibility. These are not to be taken as rigid categories. Rather, this categorization scheme is to emphasize that within open science there are different goals that can be achieved through different initiatives. Many of the initiatives will work towards more than one of the principles, as well as other principles that we do not specifically emphasize here (for example Reproducibility). Moreover, some of the initiatives have the potential to have differential impact on the principles, and thus could possibly be in conflict with one another (for example open peer review should increase transparency, but could reduce accessibility if some authors or reviewers are reluctant to engage in the practice). In what follows, we briefly review each of the three principles, and highlight select entries from the full guide to provide further explanations of the initiatives, the benefits of adopting them, the potential concerns that might arise, and their mitigations. Each emboldened initiative has its own dedicated table in the full guide.

Transparency

The principle of transparency pertains to researchers being honest and forthcoming about all aspects of the scientific process [ 13 ]. For researchers, transparency involves disclosing all theoretical, methodological, and analytic decisions made throughout the research cycle. Similarly, transparency for journals and editors involves documentation and availability of all phases of the publication cycle. Behaviors in support of transparency are meant to reduce the knowledge asymmetry between producers and consumers of research, allowing the latter to make more informed judgments about research quality [ 63 ]. Journal editors are well-positioned to advance initiatives that support transparency among researchers and at the journals themselves. The full guide details several specific initiatives, but here we briefly summarize a small number that have seen widespread adoption and are relatively straightforward to adopt.

First, journals can encourage or mandate that authors Share Data (either alongside an empirical manuscript or through publishing a Data Descriptor ), Share Code , and Share Materials associated with their study (i.e. the different measures, stimuli, and procedures that were used to produce the findings reported in the article), and/or require Adherence to Methodological Reporting Guidelines (using transparency standards to specify which elements of study design should be disclosed). These initiatives allow interested parties to reproduce the study findings, reuse research components, fit alternative models, catch errors, and critically evaluate research decisions and outcomes more rigorously.

There are many different options for journals to facilitate authors’ sharing data, code, and materials, ranging from informal incentives to formal mandates. For example, journals can offer open data and open materials badges as a way of incentivising sharing (although note that the efficacy of badges is still debated, see for example [ 64 ]). However, studies have found that data indicated as “being available upon request” rarely are actually available in practice [ 65 ]. Journals could instead implement policies making acceptance conditional upon direct links to open data, code, and materials (although note that even policies mandating data and code sharing do not guarantee that analyses are reproducible [ 66 ]). Journals can then decide whether checking that shared materials are complete is the responsibility of the author or of the journal. For editors, it is obviously more resource intensive to check whether materials are complete, so this step is only possible if the journal has the personnel, time, and money to do so. Alternatively, the guidelines to authors can include information about the sharing of such research components and clear instructions that outline the authors’ responsibility to ensure these are complete.
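
As one low-cost illustration of where a journal might land on this spectrum, an editorial office could automatically verify that links in a data availability statement at least resolve, leaving completeness checks to authors or reviewers. The sketch below is a hypothetical Python example (the statement text and DOI are made up), not a description of any journal's actual workflow:

```python
# Check that URLs/DOIs in a data availability statement resolve.
# This only verifies that links are live, not that deposits are complete;
# the statement text and DOI below are illustrative assumptions.
import re

import requests

def check_availability_links(statement: str) -> dict:
    """Return each URL found in the statement mapped to its HTTP status."""
    urls = re.findall(r"https?://\S+", statement)
    results = {}
    for url in urls:
        url = url.rstrip(".,;)")  # strip trailing punctuation from prose
        try:
            # HEAD with redirects: repositories often redirect DOIs to landing pages.
            r = requests.head(url, allow_redirects=True, timeout=15)
            results[url] = r.status_code
        except requests.RequestException as err:
            results[url] = f"error: {err}"
    return results

statement = ("Data and analysis code are available at "
             "https://doi.org/10.5281/zenodo.0000000 (hypothetical DOI).")
for url, status in check_availability_links(statement).items():
    print(url, "->", status)
```

Anything beyond link liveness (completeness, documentation, licensing) still requires human judgment, which is why the personnel question above matters.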

Sharing data, code, and materials is a useful first step for promoting transparency, but even if fully available, researchers could have engaged in undisclosed data-dependent decision-making (for example questionable research practices such as p-hacking), and thus full transparency is still not realized. Registration , sometimes also called preregistration (see [ 67 ]), can bolster transparency and involves creating an accessible, time-stamped plan that details the study research questions and/or hypotheses, design, and analysis. Journals can incentivise preregistration through offering a preregistration badge to articles that meet set criteria [ 68 ]. Journals can also require a statement specifying whether or not a study has been preregistered, require that, if a study was preregistered, the preregistration protocol is available for peer review, or require preregistration for all empirical work. Editors may worry that preregistration is not relevant to their particular field or methodology, but there are now options for preregistration across a wide variety of disciplines and types of research (for example exploratory and confirmatory research, quantitative and qualitative; [ 69 , 70 ]), though there may be variations in how preregistration is used within different epistemologies.

Taking this practice a step further, journals can adopt the Registered Reports (RR) publishing format [ 57 ]. RRs are a publishing format where initial peer review is performed on a study protocol before data collection and/or analyses are conducted. Accepted Stage 1 manuscripts are given ‘In-Principle Acceptance’ (IPA), moving the focus to the process of research and away from the results [ 13 , 71 ]. The Center for Open Science provides resources for editors wanting to adopt RRs in their journal (see “Resources for Editors” and “FAQ” tabs on the Registered Reports page: https://www.cos.io/initiatives/registered-reports ), and more than 300 journals currently offer this publishing format. These resources include email templates for all key sections, submission templates, and journal policy guidelines. Editors may worry that RRs are not necessary or relevant to their discipline, but they can be conducted in any field that follows a research workflow which begins with study planning and design. Journals can also participate in Peer Community In Registered Reports (PCI-RR), which offers free and transparent pre- and post-study recommendations by overseeing the peer review of RR preprints (see https://rr.peercommunityin.org/ ). PCI-RR accepts submissions proposing either the analysis of new data or pre-existing data using levels of bias control, and is therefore a suitable format for studies at any stage of the research process. The peer review is conducted independently of journals by expert ‘recommenders’ and this is endorsed by a growing list of journals that accept PCI-RR recommendations (known as ‘PCI-RR friendly’ journals). So, journal editors can outsource part, or all, of the peer review process.

Turning to journal-focused rather than author-focused initiatives that support transparency, journals can adopt Transparent Peer Review , in which they make the peer review reports, editorial decisions [ 72 ], and author reply letters openly available alongside published manuscripts. Instituting transparent peer review will require that the online publishing platform allows this logistically, or the materials can be uploaded as supplementary materials. Sometimes a distinction is made between transparent peer review, where the content of the review process is made open, and Open Peer Review , where the reviewers’ identities are made open as well. Open peer review comes with additional concerns; for example, some people are concerned that reviewers will be treated unfairly for giving unfavorable reviews or that open identities will enable bias and retaliation. For a balanced scoping review of the pros and cons of open peer review, see [ 73 ].

Credibility

The principle of credibility refers to the degree of trustworthiness or believability of research findings. Whereas transparency focuses on making the research process and products open to evaluation, credibility relates to the evaluations of these processes and products for their quality. In many ways transparency is a necessary, but not sufficient, condition for credibility [ 74 ]. That is, sharing data and materials, or preregistering a study, do not enhance credibility of findings on their own, but allow for a fuller assessment of it. Behaviors in support of the principle of credibility are aimed at increasing the trustworthiness of a study, or body of research. Here, we highlight three initiatives aimed at encouraging the credibility of published research.

First, journals can explicitly encourage the submission of Replication Studies . Replication studies are a broad class of studies that can vary in motivation and procedure, but generally refer to a study for which any outcome would be considered diagnostic evidence about a claim from prior research [ 75 ]. Publishing replication studies enhances credibility because they are informative about how well previously observed results hold up in new and different settings. Although researchers have to conduct replications in order for journals to publish them, journals have a long and notorious history of discouraging the submission of replication studies in favor of novel findings [ 76 , 77 ]. Instead, journals can make explicit that they encourage replications through clear language in their guidelines, and/or through implementing “the pottery barn rule” [ 78 ], whereby journals agree to publish a direct replication of any study previously published in their journal.

A particularly useful format for receiving replication studies is via the aforementioned Registered Reports publishing format. From the journal side, Registered Reports promote credibility because they signal that the journal evaluates and selects articles for publication based on their conceptualization and methods, and not based on the perceived novelty or potential impact of the findings. This format is well-suited to replication studies because it requires clear statements of the criteria that will substantiate a claim of replication before the results are known, thus combatting interpretative bias. Another solution to combatting interpretive bias is to have Results Masked Review , where the replication has been completed but only the introduction and methods are initially reviewed. Results masked review can be an especially useful route for publishing replication studies that have been completed but previously file-drawered.

Replication studies are seen as one important, even if currently limited, behavior that can increase the cumulativeness of scientific findings, contributing to the “self-correcting” nature of science. Contrary to the meaning inherent in the term “self-correcting,” science does not, in fact, correct itself [ 79 ], and is instead “other-correcting” [ 71 ]. People must actively work to correct the scientific record, and in this regard, journal editors can play a key role. It is incumbent on editors to act swiftly and prudently when Handling Corrections (updating an article to correct something and/or publishing an erratum or corrigendum) and Retractions (indicating that previously-published articles should no longer be relied upon). Retraction best practice can include outlining the specific reasons and timeline/history of retractions, and ensuring that there is a link to an open access version of retraction information on the manuscript webpage [ 80 ]. Beyond those behaviors that are in response to problems that arise, journals can commit to Publishing Scientific Critique [ 81 ], which involves publishing peer-initiated critical discourse related to specific research articles previously published in the same journal. The decision of whether to issue a retraction, publish a correction, or publish scientific critique can be made based on how serious the issue is with the original article (note that this can be difficult to determine in practice as people will disagree on how serious the issue is). Corrections should be reserved for changes that do not unequivocally undermine the findings of the original article (for example the labels of two groups on a graph have been swapped by mistake, but the conclusions still stand), whereas retractions should be reserved for changes that do (for example, the labels of two groups on a graph have been swapped by mistake, but the conclusions were based on the incorrectly labeled graph and so the data actually support the opposite conclusion). Post-publication critique can either stand alone (if there is room for nuance and disagreement), or can be accompanied by a correction or retraction (if the authors of the critique have found an error in the original manuscript). In the eventuality that an article is also retracted, the post-publication critique can still be published to aid transparency and document the article's history.

Accessibility

The principle of accessibility pertains to ensuring that all who are interested are able to consume, evaluate, and otherwise interact with research products and processes. Much of the discussion around accessibility in scientific publishing focuses on Open Access , which refers to articles being made freely available and reusable. Open access can take many forms, including Green Open Access (when the work is openly accessible from a public repository), Gold Open Access (when the work is immediately openly accessible upon publication via a journal website), and Platinum or Diamond Open Access (a subset of Gold OA in which all works in the journal are immediately accessible after publication from the journal website without the authors needing to pay an article processing charge [APC]) (Parsons et al., 2022). The emphasis on open access intersects with the desire to be inclusive to a diverse range of people, especially those from under-resourced groups.

A strong initiative that both journal editors and authors can take to facilitate accessibility is the integration of Preprints , a broad term that refers to versions of manuscripts posted to publicly-available repositories (e.g., arXiv, bioRxiv, PsyArXiv, SocArXiv). The term “preprints” applies to papers that have not (yet) been submitted for publication in a journal or are currently under review at a journal. “Preprints” can also be used to describe author-formatted versions of articles already published in a journal [ 82 ]. This latter category is more aptly labeled as “postprints,” yet such articles are commonly referred to as preprints nevertheless. Even if a journal does not provide its own open access option, allowing authors to post preprints ensures that the research is accessible to everyone. Journals should allow authors to post the final accepted version to allow readers access to the most up-to-date version of a manuscript.

A behavior that is more directly in line with the inclusivity aspect of accessibility is Supporting Language Editing , which involves checking and correcting papers’ grammar, spelling, usage, and punctuation to ensure the meaning is understood by the intended audience. The ability to report on scientific findings via publication should not be overly inhibited due to reasonable limitations in written expression. For example, publishing in English language journals can be a barrier for researchers for whom English is not a first language, and supporting language editing is a way to support researchers if/when they write manuscripts in English. One way of implementing this is to fold editing services into existing publishing fees [ 83 ].

Finally, diversifying the journal editorial team can increase accessibility by providing leadership roles to scholars from under-represented backgrounds and countries. This can be accomplished in at least two ways. An easy, low-effort approach is to issue an Open Call for New Reviewers, in which the journal makes clear that it seeks to diversify the pool of reviewers that the editorial team relies on. This approach can be extended to searching for new members of the editorial team as well, rather than relying on pre-existing networks that are prone to bias. These calls can be as simple as an open form linked on the journal's website and social media accounts (if available). However, the process of becoming an editor at a journal is often not transparent, which can be even more of a barrier for people who come from historically excluded groups or from academic environments with fewer current editors, as they will not have access to this "hidden curriculum." A more intensive approach is to develop a program of Editorial Fellowships/Training, which involves helping to train new associate/action editors from under-represented backgrounds. For example, the American Psychological Association now offers editorial fellowships at several of its journals, whereby fellows act as action/associate editors for a number of manuscripts over the year, with regular mentorship from a more experienced editor and financial compensation for their time.

Considerations for implementing open science initiatives

The purpose of the present guide is to help editors implement a broad spectrum of open science initiatives at their journals. In the full guide, we provide straightforward descriptions of open science policies, procedures, and practices, detailing what each initiative involves, why it should be implemented, how it can be implemented, and potential worries that could arise. We urge editors consulting this guide to endorse the "buffet approach" to open science [60]: do not try to do too much at once, but rather pick and choose the initiatives that make the most sense for the journal and the field. In the present paper we have highlighted a few key initiatives that editors could adopt, but the full guide includes over twenty others. Of course, even the full guide is not exhaustive; other actions by editors may increase openness and transparency at their journals. Moreover, as more metascientific research is conducted, we hope to be better able to evaluate the effectiveness of many of these recommendations, especially across journals with different emphases (methodological, disciplinary, etc.).

We acknowledge that there are potential pitfalls surrounding some of the initiatives we propose, which is why we felt it important to include a discussion of "worries" associated with each initiative in the full guide. For example, it is important to acknowledge that a more open science is not always a more equitable science [84, 85, 86, 87, 88, 89]. In particular, it is possible that peer review with open identities (where the identities of reviewers are disclosed) could lead to reviewers being treated unfairly for giving unfavorable reviews, or could open up the potential for bias or retaliation [90, 91, 92]. Openness is a value, but there are several other scientific values, including but not limited to equity, diversity, speed, and cost-effectiveness. Editors will often have to make tradeoffs when deciding which initiatives to implement, in line with the values of their journal, scientific society (if associated with one), and field [93].

Some of the worries we discuss are field- or methodology-specific. In particular, social science employs a variety of methodologies that are often divided into "quantitative" and "qualitative." It is important to remember, however, that these umbrella groups are largely labels of convenience, and that each, in turn, involves a range of approaches to data generation and analysis. In quantitative social science, for example, articles based on the statistical analysis of administrative data are likely to present different open science challenges than field experiments. Similarly, qualitative social science might involve a range of methods and epistemic commitments, such as ethnography, ordinary-language Boolean process tracing, and Qualitative Comparative Analysis (QCA), which differ in data generation, the role of algorithmic analysis, and whether data are presented in tabular or textual form [94, 95, 96].

While open science needs to acknowledge and accommodate this heterogeneity, it also offers opportunities for different communities to learn from each other [97]. For example, while preregistration was pioneered in experimental research, in some circumstances qualitative researchers might benefit from its use [98, 99]. Similarly, positionality statements – statements used to contextualize the researcher and research environment [13, 100] – have been common in some varieties of qualitative research for several years, but are only just beginning to be considered a useful tool for quantitative research [101]. We also note that there are areas where there is ongoing debate about the possibility or usefulness of adopting certain open science initiatives for qualitative research. For example, some argue that replication should be encouraged in qualitative research [102], whereas others argue that there are still open questions about whether replication is possible, desirable, or even aligned with the epistemic foundations of qualitative research [85, 103]. Regardless of the perceived epistemic value of replications [85, 104], we believe it is non-controversial to suggest that journals should be open to publishing them, and surely should not have a policy that explicitly disallows them.

There is also debate about the advantages and disadvantages of open qualitative data, including the ethical considerations in making such data discoverable [105, 106, 107]. Although there is much to consider, qualitative data should not be automatically excluded from open data requirements [59]. While there will be some cases where data cannot or should not be shared (due to regulatory constraints imposed by law, the specifics of IRB approval, or ethical concerns), even sensitive data can sometimes be shared [108, 109], for example with a reviewer or data editor who agrees not to make the data more widely available and uses them solely for the purposes of review. Much "restricted secondary data" can be accessed by others, just not without restriction, and this can be clearly stated in a Data Availability Statement. The Qualitative Data Repository (https://qdr.syr.edu/) has many resources on qualitative data sharing, including how to manage access as necessary, which helps to balance participant privacy and researcher access. Note that issues around sensitive and restricted data sharing can also apply to quantitative data.

As an editor, it is important to find the balance between adopting "easy" open science initiatives and thinking critically about whether and how these initiatives apply to your journal's particular field and/or methodologies. Guidelines need to "clearly articulate the kinds of research to which they apply" [97]. For full transparency, it can also be helpful to highlight initiatives that your journal has actively decided not to adopt, along with the reasons for this decision. Stakeholder engagement around proposed initiatives is important, especially if editors are worried about how the initiatives will be received or that they may inadvertently burden certain types of authors [45, 110].

More generally, we recognize that many factors will affect the logistics of adopting different initiatives. In particular, the administrative structure of a journal will be paramount in determining its capacity for open science initiatives. Journal staffing may comprise any combination of editor-in-chief, associate editor(s), managing editor, and editorial assistant(s), and even roles like data editor, methodological transparency editor (see https://www.sree.org/research-transparency), and open science advisor. Individuals in these roles will have varying levels of the experience, education, capabilities, expertise, and time needed to create and implement open science initiatives. When developing new open science initiatives, editors should consider whether and how the existing administrative structure of their journal can support their implementation. They may also consider providing training and professional development for current staff, hiring new staff with appropriate qualifications, and/or creating opportunities for interested individuals to contribute (for example, positions on an open science committee).

The institutional structure of the journal (whether it is associated with an academic society, a large publisher, both, or neither) may also affect the creation and implementation of open science initiatives. Independent journals may find it easier to change their policies because they do not need the approval of publishers or sponsoring scientific societies, but they may also have fewer financial resources for implementation. Some academic societies (for example, the American Psychological Association: https://www.apa.org/pubs/journals/resources/open-science) and publishers (for example, University of California Press) have encouraged open science policies and practices, and may offer support such as model policy language, ready-to-go widgets for submission platforms, article templates with dedicated sections for disclosing open science practices, and access to paid platforms for data and code sharing. Journals affiliated with an academic society may face more hurdles in approving new initiatives than non-affiliated journals; however, that affiliation might also make it harder to discontinue such initiatives, which is useful for making long-term change that is not limited to the current editorial cohort. Indeed, editorial turnover can be a major barrier not only to maintaining the open science initiatives that were implemented, but also to ensuring that the initiatives are carried out by people with sufficient background knowledge and motivation to oversee their effectiveness.

Changing funder requirements (for example, Plan S from cOAlition S: https://www.coalition-s.org) and government regulations (for example, the United States Office of Science and Technology Policy memorandum "Ensuring Free, Immediate, and Equitable Access to Federally Funded Research") will affect how editors weigh the importance of adopting some of these initiatives. For example, with many funders requiring that outputs be open access, editors may need to reassess their journal's current open access options if they wish to remain a credible outlet for research in their field. In addition, technological innovations will make certain initiatives easier to adopt. For example, as cloud-based solutions for sharing data and code packages become the norm, the difficulty of reproducing analyses should decrease, which will in turn lessen the time and financial burden for a journal implementing pre-publication verification.
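
To make pre-publication verification concrete, the following is a minimal, hypothetical sketch of the kind of script a data editor might run: it re-executes one reported computation from a shared dataset and checks the result against the value claimed in the manuscript. The file name, column name, and reported value are illustrative assumptions, not part of any journal's actual workflow.

```python
# Hypothetical pre-publication verification check: recompute one reported
# statistic from the authors' shared data and compare it to the manuscript.
# The CSV layout and the reported value are illustrative assumptions.
import csv
import math

REPORTED_MEAN = 3.42  # value claimed in the manuscript (hypothetical)
TOLERANCE = 0.005     # allows for rounding in the reported value

with open("shared_data.csv", newline="") as f:
    scores = [float(row["score"]) for row in csv.DictReader(f)]

computed_mean = sum(scores) / len(scores)
if math.isclose(computed_mean, REPORTED_MEAN, abs_tol=TOLERANCE):
    print(f"OK: computed mean {computed_mean:.3f} matches reported {REPORTED_MEAN}")
else:
    print(f"Mismatch: computed {computed_mean:.3f} vs reported {REPORTED_MEAN}")
```

In practice, verification teams re-run the authors' full analysis code in a clean environment; this sketch only illustrates the comparison step at the heart of that process.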

It is important to acknowledge that the technological landscape in publishing is ever-evolving. There will be topics we have not included that may become very important areas of social science journal editing in the near future. For example, the impact of large language models (LLMs, such as ChatGPT and Google's Bard) on scholarly publishing is just beginning to be discussed [111, 112, 113], and yet ChatGPT has already been listed as an author on several research papers [114]. Cacciamani et al. [111] are working together with academic and publishing regulatory organizations to develop guidelines for accountable reporting and use of LLMs in scientific research. Until more detailed guidelines are available, journals may emphasize that the use of LLMs should be declared on submission [111] and that authors are ultimately responsible for the content of their submissions [113]. It is possible these tools could aid inclusivity by allowing researchers to easily improve the language used in their manuscripts before submission (a task that currently adds a large burden for non-native speakers of English; see [115]). It is also possible that LLMs could be used to facilitate the editing process (for example, by highlighting key points raised by multiple reviewers of the same paper). Future iterations of our full guide are likely to include a dedicated section on LLMs once further guidance is available.

This guide provides an overview to help editors make informed decisions about the options available for increasing openness and transparency at their journal. In our discussion of increasing openness in science, we have focused on journals because this is a guide for journal editors. However, several other stakeholders wield power to implement open science initiatives, including but not limited to funders, research institutions, academic societies, and scholarly communication organizations such as publishers, preprint servers, and data repositories. Even within scientific publishing, many other approaches may be impactful, ranging from incremental innovations within the current system to completely revolutionizing scientific knowledge dissemination. For example, some people argue for journals to play only a curatorial role, without even making accept or reject decisions [116]. Others argue that articles should be published and reviewed on preprint servers, with dedicated preprint peer review services deciding whether articles deserve an endorsement or recommendation [117]. At the most "extreme" end of this spectrum, some argue that journals are unnecessary altogether (for example, see The Unjournal: https://globalimpact.gitbook.io/the-unjournal-project-and-communication-space/).

Our guide stays within the "incremental innovation" part of this spectrum, although some entries in the full guide are more disruptive to the current publishing model (for example, breaking off from a traditional publisher and starting an independent journal). While advocating for a revolution in scientific knowledge dissemination is beyond the scope of this article, we encourage readers to reflect on the scientific values underpinning our guidance and on what an alternative knowledge dissemination landscape fully in line with these values could look like. Editors should pursue the options that make the most sense for their communities, taking into account the logistics of adopting different policies and practices, including the journal's scope, setup, and financial resources. Editors have an important role to play in the adoption of open science, and in this they are supported by this abbreviated guide, the full guide [62], and the JEDI community.

Buckwalter W. The replication crisis and philosophy. PhiMiSci. 2022;3. Available from: https://philosophymindscience.org/index.php/phimisci/article/view/9193. Cited 2023 May 19.

Button KS, Ioannidis JPA, Mokrysz C, Nosek BA, Flint J, Robinson ESJ, et al. Power failure: why small sample size undermines the reliability of neuroscience. Nat Rev Neurosci. 2013;14(5):365–76.

Cook BG. A Call for Examining Replication and Bias in Special Education Research. Remedial Special Educ. 2014;35(4):233–46.

Farrar BG, Vernouillet A, Garcia-Pelegrin E, Legg E, Brecht K, Lambert P, et al. Reporting and interpreting non-significant results in animal cognition research. 2022.

Ioannidis JPA. Why Science Is Not Necessarily Self-Correcting. Perspect Psychol Sci. 2012;7(6):645–54.

Ioannidis J, Doucouliagos C. What’s to know about the credibility of empirical economics?: Scientific credibility of economics. J Econ Surv. 2013;27(5):997–1004.

Wright BE. The Science of Public Administration: Problems, Presumptions, Progress, and Possibilities. Public Admin Rev. 2015;75(6):795–805.

Smaldino PE, McElreath R. The natural selection of bad science. Royal Society Open Science. 2016;3(9):160384.

Smaldino PE, Turner MA, Kallens PAC. Open science and modified funding lotteries can impede the natural selection of bad science. Royal Soc Open Sci. 2019;6(6):190194.

Nosek BA, Spies JR, Motyl M. Scientific Utopia: II. Restructuring Incentives and Practices to Promote Truth Over Publishability. Perspect Psychol Sci. 2012;7(6):615–31.

UNESCO. UNESCO Recommendation on Open Science. 2021. Available from: https://unesdoc.unesco.org/ark:/48223/pf0000379949.locale=en. Accessed 12 Dec.

Azevedo F, Parsons S, Micheli L, Strand J, Rinke EM, Guay S, et al. Introducing a Framework for Open and Reproducible Research Training (FORRT). OSF Preprints. 2019.

Parsons S, Azevedo F, Elsherif MM, Guay S, Shahim ON, Govaart GH, et al. A community-sourced glossary of open scholarship terms. Nat Hum Behav. 2022;6:312–8.

Nosek BA, Alter G, Banks GC, Borsboom D, Bowman SD, Breckler SJ, et al. Promoting an open research culture. Science. 2015;348(6242):1422–5.

Levenstein MC, Lyle JA. Data: Sharing Is Caring. Adv Methods Pract Psychol Sci. 2018;1(1):95–103.

Nosek BA, Hardwicke TE, Moshontz H, Allard A, Corker KS, Dreber A, et al. Replicability, Robustness, and Reproducibility in Psychological Science. Annu Rev Psychol. 2022;73:719–48.

Collins F, Morgan M, Patrinos A. The Human Genome Project: lessons from large-scale biology. Science. 2003;300(5617):286–90.

Errington TM, Denis A, Perfito N, Iorns E, Nosek BA. Challenges for assessing replicability in preclinical cancer biology. eLife. 2021;10:e67995.

Farrar BG, Voudouris K, Clayton N. Replications, Comparisons, Sampling and the Problem of Representativeness in Animal Cognition Research. PsyArXiv; 2020. Available from: https://osf.io/2vt4k. Cited 2023 May 19.

Christensen G, Miguel E. Transparency, Reproducibility, and the Credibility of Economics Research. J Econ Lit. 2018;56(3):920–80.

Delios A, Clemente EG, Wu T, Tan H, Wang Y, Gordon M, et al. Examining the generalizability of research findings from archival data. Proc Natl Acad Sci USA. 2022;119(30):e2120377119.

Miguel E, Camerer C, Casey K, Cohen J, Esterling KM, Gerber A, et al. Promoting Transparency in Social Science Research. Science. 2014;343(6166):30–1.

Tierney W, Hardy JH, Ebersole CR, Leavitt K, Viganola D, Clemente EG, et al. Creative destruction in science. Organ Behav Hum Decis Process. 2020;161:291–309.

Tierney W, Hardy J, Ebersole CR, Viganola D, Clemente EG, Gordon M, et al. A creative destruction approach to replication: Implicit work and sex morality across cultures. J Exp Soc Psychol. 2021;93:104060.

Makel MC, Plucker JA. Facts Are More Important Than Novelty: Replication in the Education Sciences. Educ Res. 2014;43(6):304–16.

Cook BG, Lloyd JW, Mellor D, Nosek BA, Therrien WJ. Promoting Open Science to Increase the Trustworthiness of Evidence in Special Education. Except Child. 2018;85(1):104–18.

Gehlbach H, Robinson CD. Mitigating Illusory Results through Preregistration in Education. J Res Educ Effect. 2018;11(2):296–315.

McBee MT, Makel MC, Peters SJ, Matthews MS. A Call for Open Science in Giftedness Research. Gifted Child Quarterly. 2018;62(4):374–88.

Fleming JI, Wilson SE, Hart SA, Therrien WJ, Cook BG. Open accessibility in education research: Enhancing the credibility, equity, impact, and efficiency of research. Educ Psychol. 2021;56(2):110–21.

Lupia A, Elman C. Openness in Political Science: Data Access and Research Transparency: Introduction. PS, Pol Sci Politics. 2014;47(1):19–42.

Harris JK, Johnson KJ, Carothers BJ, Combs TB, Luke DA, Wang X. Use of reproducible research practices in public health: A survey of public health analysts. Gilligan C, editor. PLoS ONE. 2018;13(9):e0202447.

Peng RD, Hicks SC. Reproducible Research: A Retrospective. Annu Rev Public Health. 2021;42(1):79–93.

Maienschein J, Parker JN, Laubichler M, Hackett EJ. Data Management and Data Sharing in Science and Technology Studies. Sci Technol Human Values. 2019;44(1):143–60.

Bornmann L, Guns R, Thelwall M, Wolfram D. Which aspects of the Open Science agenda are most relevant to scientometric research and publishing? An opinion paper. Quant Sci Stud. 2021;2(2):438–53.

Freese J. Replication Standards for Quantitative Social Science: Why Not Sociology? Sociol Methods Res. 2007;36(2):153–72.

Freese J, King MM. Institutionalizing Transparency. Socius. 2018;4:237802311773921.

Rahal RM, Hamann H, Brohmer H, Pethig F. Sharing the Recipe: Reproducibility and Replicability in Research Across Disciplines. RIO. 2022;8:e89980.

Korbmacher M, Azevedo F, Pennington CR, Hartmann H, Pownall M, et al. The replication crisis has led to positive structural, procedural, and community changes. MetaArXiv. 2023.

Michie S, van Stralen MM, West R. The behaviour change wheel: A new method for characterising and designing behaviour change interventions. Implementation Sci. 2011;6(1):42.

Atkins L, Francis J, Islam R, O’Connor D, Patey A, Ivers N, et al. A guide to using the Theoretical Domains Framework of behaviour change to investigate implementation problems. Implementation Sci. 2017;12(1):77.

Norris E, O’Connor DB. Science as behaviour: Using a behaviour change approach to increase uptake of open science. Psychol Health. 2019;34(12):1397–406.

Norris E, Munafo MR, Jay C, Baldwin J, Lautarescu A, et al. Awareness of and engagement with Open Research behaviours: Development of the Brief Open Research Survey (BORS) with the UK Reproducibility Network. MetaArXiv. 2022.

Naaman K, Grant S, Kianersi S, Supplee L, Henschel B, Mayo-Wilson E. Exploring enablers and barriers to implementing the Transparency and Openness Promotion (TOP) Guidelines: A theory-based survey of journal editors. MetaArXiv. 2022.

Evans TR, Pownall M, Collins E, Henderson EL, Pickering JS, O’Mahony A, et al. A network of change: united action on research integrity. BMC Res Notes. 2022;15(1):141.

Stewart SLK, Pennington CR, Da Silva GR, Ballou N, Butler J, Dienes Z, et al. Reforms to improve reproducibility and quality must be coordinated across the research ecosystem: the view from the UKRN Local Network Leads. BMC Res Notes. 2022;15(1):58.

Elman C, Kapiszewski D, Lupia A. Transparent Social Inquiry: Implications for Political Science. Annu Rev Polit Sci. 2018;21(1):29–47.

Aalbersberg I, Appleyard T, Brookhart S, Carpenter T, Clarke M, Curry S, et al. Making Science Transparent By Default; Introducing the TOP Statement. OSF Preprints. 2018.

Mayo-Wilson E, Grant S, Supplee L, Kianersi S, Amin A, DeHaven A, et al. Evaluating implementation of the Transparency and Openness Promotion (TOP) guidelines: the TRUST process for rating journal policies, procedures, and practices. Res Integr Peer Rev. 2021;6(1):9.

Grant S, Mayo-Wilson E, Kianersi S, Naaman K, Henschel B. Open Science Standards at Journals that Inform Evidence-Based Policy. Prev Sci. 2023;24:1275–91.

Kathawalla UK, Silverstein P, Syed M. Easing Into Open Science: A Guide for Graduate Students and Their Advisors. Collabra Psychol. 2021;7(1):18684.

Montoya AK, Krenzer WLD, Fossum JL. Opening the Door to Registered Reports: Census of Journals Publishing Registered Reports (2013–2020). Collabra Psychol. 2021;7(1):24404.

TARG Meta-Research Group & Collaborators, Thibault RT, Clark R, Pedder H, van den Akker O, Westwood S, et al. Estimating the prevalence of discrepancies between study registrations and publications: A systematic review and meta-analyses. medRxiv. 2021.

Chambers C, Dunn A. Rapidly reviewing Registered Reports: A retrospective. Blog posts and articles from the Royal Society. 2022. Available from: https://royalsociety.org/blog/2022/09/registered-reports/. Accessed 12 Dec.

Scheel AM, Schijen MRMJ, Lakens D. An Excess of Positive Results: Comparing the Standard Psychology Literature With Registered Reports. Adv Methods Pract Psychol Sci. 2021;4(2):1–12.

Hummer L, Thorn FS, Nosek BA, Errington TM. Evaluating Registered Reports: A Naturalistic Comparative Study of Article Impact. OSF Preprints. 2017.

Soderberg CK, Errington TM, Schiavone SR, Bottesini J, Thorn FS, Vazire S, et al. Initial evidence of research quality of registered reports compared with the standard publishing model. Nat Hum Behav. 2021;5(8):990–7.

Chambers CD, Tzavella L. The past, present and future of Registered Reports. Nat Hum Behav. 2022;6(1):29–42.

Nosek BA, Lakens D. Registered Reports: A Method to Increase the Credibility of Published Results. Social Psychology. 2014;45(3):137–41.

Karhulahti VM. Registered reports for qualitative research. Nat Hum Behav. 2022;6(1):4–5.

Bergmann C. The Buffet Approach to Open Science. CogTales. 2023. Available from: https://cogtales.wordpress.com/2023/04/16/the-buffet-approach-to-open-science/. Accessed 12 Dec.

Komssi M, Pichlis D, Raatikainen M, Kindström K, Järvinen J. What are Hackathons for? IEEE Softw. 2015;32(5):60–7.

Silverstein P, Elman C, Montoya AK, McGillivray B, Pennington CR, Harrison CH, et al. A Guide for Social Science Journal Editors on Easing into Open Science (FULL GUIDE). OSF Preprints. 2023.

Vazire S. Quality Uncertainty Erodes Trust in Science. Collabra Psychol. 2017;3(1):1.

Crüwell S, Apthorp D, Baker BJ, Colling L, Elson M, Geiger SJ, et al. What’s in a Badge? A Computational Reproducibility Investigation of the Open Data Badge Policy in One Issue of Psychological Science. Psychol Sci. 2023;34(4):513–22.

Gabelica M, Bojčić R, Puljak L. Many researchers were not compliant with their published data sharing statement: a mixed-methods study. J Clin Epidemiol. 2022;150:33–41.

Stodden V, Seiler J, Ma Z. An empirical analysis of journal policy effectiveness for computational reproducibility. Proc Natl Acad Sci USA. 2018;115(11):2584–9.

Rice DB, Moher D. Curtailing the Use of Preregistration: A Misused Term. Perspect Psychol Sci. 2019;14(6):1105–8.

Kidwell MC, Lazarević LB, Baranski E, Hardwicke TE, Piechowski S, Falkenberg LS, et al. Badges to Acknowledge Open Practices: A Simple, Low-Cost, Effective Method for Increasing Transparency. Macleod MR, editor. PLoS Biol. 2016;14(5):e1002456.

Haven TL, Errington TM, Gleditsch KS, van Grootel L, Jacobs AM, Kern FG, et al. Preregistering Qualitative Research: A Delphi Study. Int J Qual Methods. 2020;19:1609406920976417.

Nosek BA, Ebersole CR, DeHaven AC, Mellor DT. The preregistration revolution. Proc Natl Acad Sci. 2018;115(11):2600–6.

Pennington CR. A student’s guide to open science: Using the replication crisis to reform psychology. Maidenhead: Open University Press; 2023.

Karhulahti VM, Backe HJ. Transparency of peer review: a semi-structured interview study with chief editors from social sciences and humanities. Res Integr Peer Rev. 2021;6(1):13.

Ross-Hellauer T, Horbach SPJM. ‘Conditional Acceptance’ (additional experiments required): A scoping review of recent evidence on key aspects of Open Peer Review. MetaArXiv. 2022.

Vazire S. Implications of the Credibility Revolution for Productivity, Creativity, and Progress. Perspect Psychol Sci. 2018;13(4):411–7.

Nosek BA, Errington TM. What is replication? PLoS Biol. 2020;18(3):e3000691.

Koole SL, Lakens D. Rewarding Replications: A Sure and Simple Way to Improve Psychological Science. Perspect Psychol Sci. 2012;7(6):608–14.

Wong PT. Implicit editorial policies and the integrity of psychology as an empirical science. Am Psychol. 1981;36(6):690–1.

Srivastava S. A Pottery Barn rule for scientific journals. The Hardest Science. 2012. Available from: https://thehardestscience.com/2012/09/27/a-pottery-barn-rule-for-scientific-journals/. Accessed 12 Dec.

Vazire S, Holcombe AO. Where Are the Self-Correcting Mechanisms in Science? Rev Gen Psychol. 2021;26(2):212–23.

COPE Council. COPE Retraction guidelines — English. 2019. Available from: https://doi.org/10.24318/cope.2019.1.4.

Hardwicke TE, Thibault RT, Kosie JE, Tzavella L, Bendixen T, Handcock SA, et al. Post-publication critique at top-ranked journals across scientific disciplines: A cross-sectional assessment of policies and practice. Royal Soc Open Sci. 2022;9(8).

Moshontz H, Binion G, Walton H, Brown BT, Syed M. A Guide to Posting and Managing Preprints. Adv Methods Pract Psychol Sci. 2021;4(2):1–11.

Ortega RP. Science’s English dominance hinders diversity, but the community can work toward change. Science. 2020.

Bahlai C, Bartlett LJ, Burgio KR, Fournier AMV, Keiser CN, Poisot T, et al. Open Science Isn’t Always Open to All Scientists. Am Sci. 2019;107(2):78.

Bennett EA. Open Science From a Qualitative, Feminist Perspective: Epistemological Dogmas and a Call for Critical Examination. Psychol Women Q. 2021;45(4):448–56.

Elsherif M, Middleton S, Phan JM, Azevedo F, Iley B, et al. Bridging Neurodiversity and Open Scholarship: How Shared Values Can Guide Best Practices for Research Integrity, Social Justice, and Principled Education. MetaArXiv. 2022.

Puthillam A, Doble LJM, Santos JJID, Elsherif MM, Steltenpohl CN, Moreau D, et al. Guidelines to improve internationalization in the psychological sciences. Soc Pers Psychol Compass. 2023;e12847.

Ross-Hellauer T. Open science, done wrong, will compound inequities. Nature. 2022;603:363.

Whitaker K, Guest O. #bropenscience is broken science. The Psychologist. 2020. Available from: https://www.bps.org.uk/psychologist/bropenscience-broken-science. Accessed 12 Dec.

Huber J, Inoua S, Kerschbamer R, König-Kersting C, Palan S, Smith VL. Nobel and novice: Author prominence affects peer review. University of Graz, School of Business, Economics and Social Sciences Working Paper. 2022.

Steltenpohl CN. To Sign or Not to Sign. 2020. Available from: https://cnsyoung.com/to-sign-or-not-to-sign/. Accessed 12 Dec.

Tomkins A, Zhang M, Heavlin WD. Reviewer bias in single- versus double-blind peer review. Proc Natl Acad Sci USA. 2017;114(48):12708–13.

Waltman L, Kaltenbrunner W, Pinfield S, Woods HB. How to improve scientific peer review: Four schools of thought. Learned Publishing. 2023;36:334–47. https://doi.org/10.1002/leap.1544.

Boulton D, Hammersley M. Analysis of Unstructured Data. In: Data Collection and Analysis. 2nd ed. London: SAGE Publications Ltd; 2006. p. 243–59. Available from: https://doi.org/10.4135/9781849208802.

Bennett A, Checkel JT. Process Tracing: From Metaphor to Analytic Tool. Cambridge: Cambridge University Press; 2014.

Ragin CC. The Comparative Method: Moving beyond Qualitative and Quantitative Strategies. California: University of California Press; 1987.

Steltenpohl CN, Lustick H, Meyer MS, Lee LE, Stegenga SM, Standiford Reyes L, et al. Rethinking Transparency and Rigor from a Qualitative Open Science Perspective. JOTE. 2023. Available from: https://journal.trialanderror.org/pub/rethinking-transparency. Cited 2023 Jun 8.

Adler JM, Singer JA. Psychobiographies of social change agents: Introduction to the Special Issue. J Pers. 2023;91(1):5–13.

Jacobs A. Pre-registration and Results-Free Review in Observational and Qualitative Research. In: The Production of Knowledge: Enhancing Progress in Social Science. Cambridge: Cambridge University Press; 2020.

Jafar AJN. What is positionality and should it be expressed in quantitative studies? Emerg Med J. 2018;35(5):323.

Jamieson MK, Govaart GH, Pownall M. Reflexivity in quantitative research: A rationale and beginner’s guide. Soc Pers Psych. 2023;17(4):e12735.

Makel MC, Plucker JA, Hegarty B. Replications in Psychology Research: How Often Do They Really Occur? Perspect Psychol Sci. 2012;7(6):537–42.

Pownall M. Is replication possible for qualitative research? PsyArXiv. 2022.

Devezer B, Nardin LG, Baumgaertner B, Buzbas EO. Scientific discovery in a model-centric framework: Reproducibility, innovation, and epistemic diversity. Fanelli D, editor. PLoS ONE. 2019;14(5):e0216125.

DuBois JM, Strait M, Walsh H. Is it time to share qualitative research data? Qualitative Psychology. 2018;5(3):380–93.

Jones K, Alexander SM, et al. Qualitative data sharing and re-use for socio-environmental systems research: A synthesis of opportunities, challenges, resources and approaches. SESYNC White Paper; 2018. Available from: https://doi.org/10.13016/M2WH2DG59.

Tsai AC, Kohrt BA, Matthews LT, Betancourt TS, Lee JK, Papachristos AV, et al. Promises and pitfalls of data sharing in qualitative research. Soc Sci Med. 2016;169:191–8.

Joel S, Eastwick PW, Finkel EJ. Open Sharing of Data on Close Relationships and Other Sensitive Social Psychological Topics: Challenges, Tools, and Future Directions. Adv Methods Pract Psychol Sci. 2018;1(1):86–94.

Casadevall A, Enquist L, Imperiale MJ, Keim P, Osterholm MT, Relman DA. Redaction of Sensitive Data in the Publication of Dual Use Research of Concern. mBio. 2013;5(1):1–2.

Christian TM, Gooch A, Vision T, Hull E. Journal data policies: Exploring how the understanding of editors and authors corresponds to the policies themselves. Sugimoto CR, editor. PLoS ONE. 2020;15(3):e0230281.

Cacciamani GE, Collins GS, Gill IS. ChatGPT: standard reporting guidelines for responsible use. Nature. 2023;618:238.

Hosseini M, Horbach SPJM. Fighting reviewer fatigue or amplifying bias? Considerations and recommendations for use of ChatGPT and other large language models in scholarly peer review. Res Integr Peer Rev. 2023;8(1):4.

Nature. Tools such as ChatGPT threaten transparent science; here are our ground rules for their use. Nature. 2023;613(7945):612.

Stokel-Walker C. ChatGPT listed as author on research papers: many scientists disapprove. Nature. 2023;613:620–1.

Amano T, Ramírez-Castañeda V, Berdejo-Espinola V, Borokini I, Chowdhury S, Golivets M, et al. The manifold costs of being a non-native English speaker in science. Dirnagl U, editor. PLoS Biol. 2023;21(7):e3002184.

Eisen MB, Akhmanova A, Behrens TE, Diedrichsen J, Harper DM, Iordanova MD, et al. Peer review without gatekeeping. eLife. 2022;11:e83889.

Avissar-Whiting M, Belliard F, Bertozzi SM, Brand A, Brown K, Clément-Stoneham G, et al. Advancing the culture of peer review with preprints. OSF Preprints. 2023.

Acknowledgements

We thank Diana Kapiszewski for portions of text from a grant proposal that informed some of this manuscript, Chris Hartgerink for their comments on the manuscript and full guide, and Maitreyee Shilpa Kishor for help formatting the full guide.

Author information

Authors and Affiliations

Department of Psychology, Ashland University, Ashland, USA

Priya Silverstein & Kathleen Schmidt

Institute for Globally Distributed Open Research and Education, Preston, UK

Priya Silverstein

Maxwell School of Citizenship and Public Affairs, Syracuse University, Syracuse, USA

Colin Elman & Julia G. Bottesini

Department of Psychology, University of California, Los Angeles, USA

Amanda Montoya

Department of Digital Humanities, King’s College London, London, UK

Barbara McGillivray

School of Psychology, College of Health & Life Sciences, Aston University, Birmingham, UK

Charlotte R. Pennington

Department of Government, Harvard University, Cambridge, USA

Chase H. Harrison

Dartmouth Center for Program Design and Evaluation, Hanover, USA

Crystal N. Steltenpohl

Department of Psychology and Psychotherapy, Witten/Herdecke University, Witten, Germany

Jan Philipp Röer

Department of Psychology, Grand Valley State University, Allendale, USA

Katherine S. Corker

American Family Insurance Data Science Institute, University of Wisconsin-Madison, Madison, USA

Lisa M. Charron

Nelson Institute for Environmental Studies, University of Wisconsin-Madison, Madison, USA

Department of Psychology, University of Birmingham, Birmingham, UK

Mahmoud Elsherif

Meta-Research Innovation Center at Stanford, Stanford University, Stanford, USA

Mario Malicki

Stanford Program On Research Rigor and Reproducibility, Stanford University, Stanford, USA

Department of Epidemiology and Population Health, Stanford University School of Medicine, Stanford, USA

Department of Linguistics, University of Utah, Salt Lake City, USA

Rachel Hayes-Harb

Department of Psychology, University of Graz, Graz, Austria

Sandra Grinschgl

Department of Psychology, Iowa State University, Ames, USA

School of Social & Behavioral Sciences, Arizona State University, Tempe, USA

School of Human Sciences and Institute for Lifecourse Development, University of Greenwich, London, UK

Thomas Rhys Evans

Department of Music, Art and Culture Studies, University of Jyväskylä, Jyväskylä, Finland

Veli-Matti Karhulahti

Office of Scientific Integrity, Duke University, Durham, USA

William L. D. Krenzer

National Agency for Scientific and Technological Promotion, Córdoba, Argentina

Anabel Belaus

School of Psychology and Centre for Brain Research, University of Auckland, Auckland, New Zealand

David Moreau

Facultad de Psicología, Universidad de Buenos Aires, Buenos Aires, Argentina

Debora I. Burin

CONICET, Buenos Aires, Argentina

ArtCenter College of Design, Pasadena, USA

Elizabeth Chin

Faculty of Applied Sciences, Delft University of Technology, Delft, Netherlands

Esther Plomp

The Turing Way, The Alan Turing Institute, London, UK

Department of Epidemiology, UNC Gillings School of Global Public Health, Chapel Hill, USA

Evan Mayo-Wilson

Inter-University Consortium for Political and Social Research (ICPSR), University of Michigan, Ann Arbor, USA

Jared Lyle & Kyrani Reneau

Olin College of Engineering, Needham, USA

Jonathan M. Adler

Department of Psychology, Rhodes College, Memphis, USA

Katherine M. Lawson

Economics Department, Cornell University, Ithaca, USA

Lars Vilhuber

Centre for Science and Technology Studies, Leiden University, Leiden, Netherlands

Ludo Waltman

Department of Psychology, University of Wisconsin-Madison, Madison, USA

Morton Ann Gernsbacher

Department of Psychology, Tufts University, Medford, USA

Paul E. Plonski

Department of Psychology, University of Cambridge, Cambridge, UK

Sakshi Ghai

HEDCO Institute for Evidence-Based Practice, College of Education, University of Oregon, Eugene, USA

Odum Institute for Research in Social Science, University of North Carolina at Chapel Hill, Chapel Hill, USA

Thu-Mai Christian

Institute of Mind and Biology, University of Chicago, Chicago, USA

William Ngiam

Department of Psychology, University of Chicago, Chicago, USA

Department of Psychology, University of Minnesota, Minneapolis, USA

Contributions

CRediT author statement: Priya Silverstein, Moin Syed, and Colin Elman conceptualized this manuscript. Priya Silverstein, Moin Syed, and Colin Elman led on writing (original draft). Priya Silverstein led on writing (review and editing). Authors listed from Amanda Montoya to William L. D. Krenzer contributed to both original draft and review and editing. Authors listed from Anabel Belaus to William Ngiam contributed to either writing (original draft) or writing (review & editing).

Corresponding author

Correspondence to Priya Silverstein.

Ethics declarations

Competing interests

The authors declare that they have no competing interests.

Additional information

Publisher's Note

Springer Nature remains neutral with regard to jurisdictional claims in published maps and institutional affiliations.

Mario Malicki is the editor-in-chief of RIPR. He was not involved in the review or editorial decision making for this paper. This material is based in part upon work supported by the National Science Foundation under Award No. 2032661. No views expressed here reflect individual journal policies or priorities.

Rights and permissions

Open Access This article is licensed under a Creative Commons Attribution 4.0 International License, which permits use, sharing, adaptation, distribution and reproduction in any medium or format, as long as you give appropriate credit to the original author(s) and the source, provide a link to the Creative Commons licence, and indicate if changes were made. The images or other third party material in this article are included in the article's Creative Commons licence, unless indicated otherwise in a credit line to the material. If material is not included in the article's Creative Commons licence and your intended use is not permitted by statutory regulation or exceeds the permitted use, you will need to obtain permission directly from the copyright holder. To view a copy of this licence, visit http://creativecommons.org/licenses/by/4.0/ . The Creative Commons Public Domain Dedication waiver ( http://creativecommons.org/publicdomain/zero/1.0/ ) applies to the data made available in this article, unless otherwise stated in a credit line to the data.

About this article

Cite this article

Silverstein, P., Elman, C., Montoya, A. et al. A guide for social science journal editors on easing into open science. Res Integr Peer Rev 9, 2 (2024). https://doi.org/10.1186/s41073-023-00141-5

Received: 12 June 2023

Accepted: 28 December 2023

Published: 16 February 2024

DOI: https://doi.org/10.1186/s41073-023-00141-5

Keywords

  • Open science
  • Journal editing
  • Scholarly publishing
  • Peer review

Peer review guidance: a primer for researchers

Olena Zimba

1 Department of Internal Medicine No. 2, Danylo Halytsky Lviv National Medical University, Lviv, Ukraine

Armen Yuri Gasparyan

2 Departments of Rheumatology and Research and Development, Dudley Group NHS Foundation Trust (Teaching Trust of the University of Birmingham, UK), Russells Hall Hospital, Dudley, West Midlands, UK

The peer review process is essential for quality checks and validation of journal submissions. Although it has some limitations, including manipulation and biased and unfair evaluations, there is no alternative to the system. Several peer review models are now practised, with public review being the most appropriate in view of the open science movement. Constructive reviewer comments are increasingly recognised as scholarly contributions which should meet certain ethics and reporting standards. The Publons platform, which is now part of the Web of Science Group (Clarivate Analytics), credits validated reviewer accomplishments and serves as an instrument for selecting and promoting the best reviewers. All authors with relevant profiles may act as reviewers. Adherence to research reporting standards and access to bibliographic databases are recommended to help reviewers draft evidence-based and detailed comments.

Introduction

The peer review process is essential for evaluating the quality of scholarly works, suggesting corrections, and learning from other authors' mistakes. The principles of peer review are largely based on professionalism, eloquence, and a collegial attitude. As such, reviewing journal submissions is a privilege and responsibility for 'elite' research fellows who contribute to their professional societies and add value by voluntarily sharing their knowledge and experience.

Since the launch of the first academic periodicals back in 1665, peer review has been mandatory for validating scientific facts, selecting influential works, and minimizing the chances of publishing erroneous research reports [1]. Over the past centuries, peer review models have evolved from single-handed editorial evaluations to collegial discussions, with numerous strengths and inevitable limitations in each practised model [2, 3]. With the multiplication of periodicals and editorial management platforms, the reviewer pool has expanded and internationalized. Various sets of rules have been proposed to select skilled reviewers and employ globally acceptable tools and language styles [4, 5].

In the era of digitization, the ethical dimension of peer review has come to the fore, necessitating the involvement of peers with a full understanding of research and publication ethics who can exclude unethical articles from the pool of evidence-based research and reviews [6]. During the COVID-19 pandemic, some, if not most, journals have faced a shortage of skilled reviewers, resulting in an unprecedented increase in articles without a history of peer review or with surprisingly short evaluation timelines [7].

Editorial recommendations and the best reviewers

Guidance on peer review and the selection of reviewers is currently available in the recommendations of global editorial associations, which can be consulted by journal editors for updating their ethics statements and by research managers for crediting the evaluators. The International Committee of Medical Journal Editors (ICMJE) qualifies peer review as a continuation of the scientific process that should involve experts who are able to respond to reviewer invitations in a timely manner, submit unbiased and constructive comments, and maintain confidentiality [8].

Reviewer roles and responsibilities are listed in the updated recommendations of the Council of Science Editors (CSE) [9], where ethical conduct is viewed as a prerequisite of quality evaluations. The Committee on Publication Ethics (COPE) further emphasizes editorial strategies that ensure transparent and unbiased reviewer evaluations by trained professionals [10]. Finally, the World Association of Medical Editors (WAME) prioritizes selecting the best reviewers with validated profiles to avoid substandard or fraudulent reviewer comments [11]. Accordingly, the Sarajevo Declaration on Integrity and Visibility of Scholarly Publications encourages reviewers to register with the Open Researcher and Contributor ID (ORCID) platform to validate and publicize their scholarly activities [12].

Although criteria for the best reviewers are not listed in the editorial recommendations, it is apparent that manuscript evaluators should be active researchers with extensive experience in the subject matter and an impressive list of relevant and recent publications [13]. All authors embarking on an academic career and publishing articles with active contact details can be involved in the evaluation of others' scholarly works [14]. Ideally, reviewers should be peers of the manuscript authors, with equal scholarly ranks and credentials.

However, journal editors may employ schemes that engage junior research fellows as co-reviewers alongside their mentors and senior fellows [15]. Such a scheme is successfully practised within the framework of the Emerging EULAR (European League Against Rheumatism) Network (EMEUNET), where seasoned authors (mentors) train early-career researchers (mentees) in how to evaluate submissions to the top rheumatology journals, helping to select the best evaluators for regular contributions to these journals [16].

Awareness of the EQUATOR Network reporting standards may help reviewers evaluate methodology and suggest related revisions. Statistical skills help reviewers detect basic mistakes and suggest additional analyses. For example, scanning the data presentation and revealing mistakes in the reporting of means and standard deviations often prompt re-analyses of distributions and replacement of parametric tests with non-parametric ones [17, 18].
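
To illustrate the kind of re-analysis such a comment might recommend, here is a minimal sketch in Python, assuming hypothetical group data and the availability of scipy: when the samples fail a normality check, a non-parametric test replaces the parametric one.

```python
# A minimal sketch of the parametric-vs-non-parametric decision a
# statistically minded reviewer might recommend; the data are simulated
# and deliberately skewed, so the non-parametric branch is taken.
import numpy as np
from scipy import stats

rng = np.random.default_rng(42)
group_a = rng.exponential(scale=2.0, size=30)  # skewed, non-normal sample
group_b = rng.exponential(scale=2.5, size=30)

# Shapiro-Wilk tests the normality of each sample
normal_a = stats.shapiro(group_a).pvalue > 0.05
normal_b = stats.shapiro(group_b).pvalue > 0.05

if normal_a and normal_b:
    # Parametric comparison: Welch's t-test (no equal-variance assumption)
    result = stats.ttest_ind(group_a, group_b, equal_var=False)
    print(f"Welch's t-test: t = {result.statistic:.2f}, p = {result.pvalue:.3f}")
else:
    # Non-parametric alternative: Mann-Whitney U test
    result = stats.mannwhitneyu(group_a, group_b)
    print(f"Mann-Whitney U: U = {result.statistic:.1f}, p = {result.pvalue:.3f}")
```

In a real review, the reviewer would reason from the authors' reported statistics rather than raw simulated data; the sketch only shows the decision that the comment would suggest.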

Constructive reviewer comments

The main goal of peer review is to support authors in their attempt to publish ethically sound and professionally validated works that may attract readers' attention and positively influence healthcare research and practice. As such, an optimal reviewer comment has to comprehensively examine all parts of the research and review work (Table I). The best reviewers are viewed as contributors who guide authors on how to correct mistakes, discuss study limitations, and highlight strengths [19].

Table I. Structure of a reviewer comment to be forwarded to authors

Some of the currently practised review models are well positioned to help authors reveal and correct their mistakes at pre- or post-publication stages (Table II). The global move toward open science is particularly instrumental for increasing the quality and transparency of reviewer contributions.

Table II. Advantages and disadvantages of common manuscript evaluation models

Since there are no universally acceptable criteria for selecting reviewers and structuring their comments, the instructions of each peer-reviewed journal should specify priorities, models, and expected review outcomes [20]. Monitoring and reporting average peer review timelines is also required to encourage timely evaluations and avoid delays. Depending on journal policies and article types, the first round of peer review may last from a few days to a few weeks. Fast-track review (up to 3 days) is practised by some top journals which process clinical trial reports and other priority items.
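
As a simple illustration of timeline monitoring, the following hypothetical sketch computes the average number of days from submission to first decision; the CSV layout (ISO-formatted "submitted" and "first_decision" columns) is an assumption made for the example, not any editorial system's actual export format.

```python
# Hypothetical sketch: compute the mean first-round review time from a
# CSV export with ISO-formatted "submitted" and "first_decision" columns.
import csv
from datetime import date

def average_review_days(path: str) -> float:
    """Return the mean number of days from submission to first decision."""
    durations = []
    with open(path, newline="") as f:
        for row in csv.DictReader(f):
            submitted = date.fromisoformat(row["submitted"])
            decided = date.fromisoformat(row["first_decision"])
            durations.append((decided - submitted).days)
    return sum(durations) / len(durations)

print(f"Average first-round review: {average_review_days('reviews.csv'):.1f} days")
```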

In exceptional cases, reviewer contributions may result in substantive changes, appreciated by authors in the official acknowledgments. In most cases, however, reviewers should avoid engaging in the authors’ research and writing. They should refrain from instructing the authors on additional tests and data collection as these may delay publication of original submissions with conclusive results.

Established publishers often employ advanced editorial management systems that support reviewers by providing instantaneous access to the review instructions, online structured forms, and some bibliographic databases. Such support enables drafting of evidence-based comments that examine the novelty, ethical soundness, and implications of the reviewed manuscripts [21].

Encouraging reviewers to submit their recommendations on manuscript acceptance/rejection and related editorial tasks is now a common practice. Skilled reviewers may prompt the editors to reject or transfer manuscripts which fall outside the journal scope, perform additional ethics checks, and minimize chances of publishing erroneous and unethical articles. They may also raise concerns over the editorial strategies in their comments to the editors.

Since reviewer and editor roles are distinct, reviewer recommendations are aimed at helping editors, not at replacing their decision-making functions. The final decisions rest with handling editors, who weigh not only reviewer comments but also priorities related to article types and geographic origins, space limitations in certain periods, and the envisaged influence in terms of social media attention and citations. This is why rejections of even flawless manuscripts are likely in the early rounds of internal and external evaluation at most peer-reviewed journals.

Reviewers are often requested to comment on the language correctness and overall readability of the evaluated manuscripts. Given the wide availability of in-house and external editing services, reviewer comments on language mistakes and typos are categorized as minor. At the same time, poor language skills often exclude non-Anglophone experts from contributing to peer review in the most influential journals [22]. Comments should be properly edited to convey messages in positive or neutral tones, express ideas with varying degrees of certainty, and present a logical order of words, sentences, and paragraphs [23, 24]. Consulting linguists on communication culture, taking advanced language courses, and honing commenting skills may increase the overall quality and appeal of reviewer accomplishments [5, 25].

Peer reviewer credits

Various crediting mechanisms have been proposed to motivate reviewers and maintain the integrity of science communication [26]. Annual reviewer acknowledgments are widely practised for naming manuscript evaluators and appreciating their scholarly contributions. Given the need to weigh reviewer contributions, some journal editors distinguish 'elite' reviewers with numerous evaluations and award those with timely and outstanding accomplishments [27]. Such targeted recognition ensures the ethical soundness of peer review and facilitates the promotion of the best candidates for grant funding and academic job appointments [28].

Also, large publishers and learned societies issue certificates of excellence in reviewing, which may include Continuing Professional Development (CPD) points [29]. Finally, an entirely new crediting mechanism has been proposed to award bonus points to active reviewers, who may collect, transfer, and use these points to discount gold open-access charges within publisher consortia [30].

With the launch of Publons (http://publons.com/) and its integration with the Web of Science Group (Clarivate Analytics), reviewer recognition has become a matter of scientific prestige. Reviewers can now freely open their Publons accounts and record their contributions to online journals with Digital Object Identifiers (DOIs). Journal editors, in turn, may generate official reviewer acknowledgments and encourage reviewers to forward them to Publons for building up individual reviewer and journal profiles. All published articles maintain e-links to their review records and post-publication promotion on social media, allowing reviewers to continuously track expert evaluations and comments. A paid-up partnership is also available to journals and publishers for automatically transferring peer-review records to Publons upon mutually acceptable arrangements.

Listing reviewer accomplishments on an individual Publons profile showcases the scholarly contributions of the account holder. The reviewer accomplishments placed next to the account holder's own articles and editorial accomplishments point to the diversity of scholarly contributions. Researchers may establish links between their Publons and ORCID accounts to further benefit from the complementary services of both platforms. Publons Academy (https://publons.com/community/academy/) additionally offers an online training course to novice researchers who may improve their reviewing skills under the guidance of experienced mentors and journal editors. Finally, journal editors may conduct searches through the Publons platform to select the best reviewers across academic disciplines.

Peer review ethics

Prior to accepting reviewer invitations, scholars need to weigh a number of factors which may compromise their evaluations. First of all, they should accept reviewer invitations only if they are capable of submitting their comments in a timely manner. Peer review timelines depend on article type and vary widely across journals. The rules of transparent publishing necessitate recording manuscript submission and acceptance dates in article footnotes to inform readers of the evaluation speed and to help investigators in the event of multiple unethical submissions. Timely reviewer accomplishments often enable fast publication of valuable works with positive implications for healthcare. Unjustifiably long peer review, on the contrary, delays the dissemination of influential reports and can result in ethical misconduct, such as plagiarism of a manuscript under evaluation [31].

In a time of proliferation of open-access journals relying on article processing charges, an unjustifiably short review may point to the absence of quality evaluation and to apparently 'predatory' publishing practices [32, 33]. When choosing their target journals, authors should take into account the peer review strategy and associated timelines to avoid substandard periodicals.

Reviewers' primary interest (the unbiased evaluation of manuscripts) may come into conflict with secondary interests (such as the promotion of their own scholarly works), necessitating disclosure, either by filling in the relevant parts of the online reviewer form or by uploading the ICMJE conflict of interest forms. Biomedical reviewers who are directly or indirectly supported by the pharmaceutical industry may encounter conflicts while evaluating drug research. Such instances require explicit disclosure of the conflicts and/or rejection of the reviewer invitation.

Journal editors are obliged to employ mechanisms for disclosing reviewers' financial and non-financial conflicts of interest so that biased comments are not acted upon [34]. They should also be cautious when processing negative comments that oppose dissenting, but still valid, scientific ideas [35]. Reviewer conflicts that stem from academic activities in a competitive environment may introduce biases, resulting in unfair rejections of manuscripts with opposing concepts, results, and interpretations. The same academic conflicts may lead to coercive reviewer self-citation, forcing authors to incorporate suggested reviewer references or face negative feedback and unjustified rejection [36]. Notably, several publisher investigations have demonstrated the global scale of such misconduct, implicating some highly cited researchers and top scientific journals [37].

Fake peer review, an extreme example of conflict of interest, is another form of misconduct that has surfaced amid the mass proliferation of gold open-access journals and the publication of articles without quality checks [38]. Fake reviews are generated by manipulative authors and commercial editing agencies that gain full access to their own manuscripts and peer review evaluations in journal editorial management systems. The sole aim of these reviews is to subvert the manuscript evaluation process and pave the way for the publication of pseudoscientific articles. The authors of such articles are often supported by funds intended for the growth of science in non-Anglophone countries [39]. Iranian and Chinese authors have frequently been caught submitting fake reviews, resulting in mass retractions by large publishers [38]. Several remedies have been suggested, with the assignment of independent reviewers and the request for their ORCID IDs viewed as the most practical options [40].

Conclusions

The peer review process is regulated by publishers and editors, who enforce updated global editorial recommendations. Selecting the best reviewers and providing authors with constructive comments may improve the quality of published articles. Reviewers are selected in view of their professional backgrounds and their skills in research reporting, statistics, ethics, and language. Quality reviewer comments attract superior submissions and add to a journal's scientific prestige [41].

In the era of digitization and open science, various online tools and platforms are available to upgrade peer review and credit experts for their scholarly contributions. With its links to the ORCID platform and social media channels, Publons currently offers the optimal model for crediting, and keeping track of, the best and most active reviewers. The Publons Academy additionally offers online training for novice researchers, who may benefit from the experience of their mentoring editors. Overall, training reviewers to evaluate journal submissions and avoid related misconduct is an important process, and some indexed journals are already experimenting with it [42].

The timelines and rigour of peer review may change during the current pandemic. Journal editors should nevertheless mobilize their resources to avoid the publication of unchecked and misleading reports. Additional efforts are required to monitor published content and to encourage readers to post comments on publishers' online platforms (blogs) and other social media channels [43, 44].

The authors declare no conflict of interest.


How Many Peer-Reviewed Articles Do You Need to Earn Tenure?

Updated norms, historical trends, and strategies of successful candidates.


James P. Byrnes , of the Department of Psychological Studies in Education of Temple University, is a developmental psychologist with a primary interest in the factors predictive of academic achievement, as well as in the developmental trajectory of publishing expertise in psychology faculty. He can be contacted at  [email protected]

When considering a faculty member for tenure, decision-makers typically have some unofficial numerical standard for the candidate’s publishing history. But this and other assumed benchmarks seem to be passed around mostly by word of mouth. Those decision-makers also assume that each cohort of newly minted faculty members publishes more articles before earning tenure compared with previous cohorts.  

In the mid-1990s, I was serving as associate dean for faculty. One of my responsibilities was to shepherd untenured faculty through the tenure process. Colleagues within my institution and around the United States told me that, whereas untenured people used to be required to publish two articles per year to get tenure, they now need to publish four. Having reviewed a number of promotion and tenure cases for both my own institution and others, and having already published several articles on productivity, I knew this assertion was probably untrue. But I needed proof. So, I conducted a study in the mid-2000s to see how often untenured faculty actually published.  

Because I expected the findings to be challenged or dismissed if they did not reveal an increasing rate of publishing over time, I wanted to make as strong a case as possible. First, I focused on nearly 300 faculty who obtained tenure in the top 25 psychology programs (as rated by U.S. News and World Report). The data would be harder to dismiss if these individuals showed no increase in published articles with each successive decade than if I had included faculty from lower-ranked programs.

Second, I focused only on developmental, social, and cognitive psychologists; published studies of productivity showed that faculty in these programs tend to publish at a higher rate than clinical, counseling, school, and educational psychologists. After locating the faculty via websites and examining their CVs or looking up their publications on PsycINFO, I found that the rate of publishing was nearly identical regardless of decade: roughly 1.58 articles per year. I also found that faculty whose records placed them below the 25th percentile published less than one article per year.

When I submitted the study to Psychological Science in 2007, the editor and reviewers were so surprised that they asked me to do several supplemental analyses, such as considering the impact factors of the journals and publication of chapters. None of the alternative explanations could account for the lack of increases. After the article was published, I received many emails from untenured folks thanking me, reporting that their senior faculty were telling them the norms had changed when my data showed they had not. 

Fast forward to 2023: When I recently discussed several tenure cases and reviewed cases for untenured faculty at other institutions, I discovered that the “norms have increased” meme had reemerged. I initially tried to ignore this claim or dissuade others, based on my earlier research and other published studies. But I began to wonder whether things might have indeed changed. 

First, as reported in gradPsych, a now-defunct publication for graduate students, the economic collapse of 2008 caused a number of institutions to receive decreased revenue from state budgets, resulting in several years of hiring freezes. Second, the droves of psychology faculty hired in the 1970s did not retire in the 2000s and 2010s as expected, prompting concerns that the demand for academic jobs in psychology may exceed supply. In addition, other gradPsych articles reported that many institutions replaced tenured positions with contingent positions. Together, budget cuts and the loss of tenure-line positions would severely restrict the supply of open positions, and the resulting spike in competition would leave only the top performers landing jobs.

Finally, the number of recent psychology PhDs seeking postdoctoral fellowships—once a rarity—started to increase in the early 2000s. In the sample I analyzed for my follow-up study, I found that 73% of the faculty had 1–5 years of postdoctoral experience, suggesting the practice is now the standard. Without teaching demands, these postdoctoral fellows presumably have extra time to publish. One might expect that these individuals would have published more in their first 7 years after completing graduate school than previous recent graduates who had not completed postdocs. 

The methodology for the follow-up study consisted of looking at the publishing rates of faculty who earned their degrees between 2000 and 2009 (Decade 1) or between 2010 and 2016 (Decade 2) and who now hold positions in the same top 25 departments that I examined in my 2007 paper. Because the number of positions had been halved since my earlier paper, I added faculty from another 11 schools rated a notch below the top 25 in the U.S. News rankings to bring the sample size to 300. I once again relied on their CVs and PsycINFO for publication data.

Whereas faculty in the 2007 article published at a rate of 1.58 articles per year (11 articles over the 7-year window analyzed), the more recent cohorts published at a rate of 2.43 articles per year (17 total). So, yes, the more recent cohorts are publishing more, amounting to about one more article per year (or 6 more in total). However, the distribution was highly skewed: most faculty clustered close to two articles per year, but a few (11%) published four or more articles per year (see Figure 1).
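As a quick sanity check (an illustrative back-of-the-envelope calculation, not part of the original analysis; the published rates were presumably computed from unrounded totals), the per-year rates and 7-year totals are consistent:

\[
\frac{11}{7} \approx 1.6 \ \text{articles/year}, \qquad \frac{17}{7} \approx 2.4 \ \text{articles/year}, \qquad 17 - 11 = 6 \ \Rightarrow \ \frac{6}{7} \approx 0.9 \ \text{additional articles/year}.
\]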

Relatedly, I found no statistically significant differences among developmental (2.48 articles per year), cognitive (2.29), and social psychologists (2.56). I also found no significant difference between faculty who hold positions in the top 25 programs and those in the next tier. There was a significant difference between those who earned their degrees in the 2000s (Decade 1) and those who earned their degrees after 2010 (Decade 2). However, a growth curve analysis showed that the rate of change over the 7 years analyzed did not differ between the two groups: the Decade 2 group had more publications before being hired, and this gap was sustained over the subsequent years (see Figure 2).
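The article does not include its analysis code, but to make the growth curve logic concrete, here is a minimal sketch of how such a comparison is typically set up as a mixed-effects model, using Python's statsmodels on simulated data. Every name and number below is invented for illustration; this is not the author's code or dataset.

import numpy as np
import pandas as pd
import statsmodels.formula.api as smf

rng = np.random.default_rng(0)

# Simulate yearly publication counts for 300 hypothetical faculty members,
# tracked over their first 7 post-PhD years and split into two cohorts.
rows = []
for fid in range(300):
    cohort = "decade2" if fid >= 150 else "decade1"
    base = 2.0 if cohort == "decade2" else 1.4  # Decade 2 starts higher...
    slope = 0.15                                # ...but both grow at the same rate
    for year in range(1, 8):
        mu = base + slope * (year - 1)
        rows.append({"faculty_id": fid, "cohort": cohort,
                     "year": year, "pubs": rng.poisson(mu)})
df = pd.DataFrame(rows)

# Mixed-effects growth model with a random intercept and slope per person.
# A significant cohort effect alongside a non-significant year:cohort
# interaction means the cohorts' rates grew at the same pace even though
# one started higher -- the pattern reported in the article.
model = smf.mixedlm("pubs ~ year * cohort", df,
                    groups=df["faculty_id"], re_formula="~year")
result = model.fit()
print(result.summary())

In this framing, the sustained (rather than widening) gap in Figure 2 corresponds to a cohort main effect without a year-by-cohort interaction.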

[Figure 2: annual publication rates over the first 7 post-PhD years for the Decade 1 and Decade 2 cohorts.]

Figure 2 also shows that the publishing rate increased each year for the first 5 years, as faculty gained publishing expertise, and then plateaued. (Note: This appears to hold more for Decade 2 than for Decade 1.) How (or why) did the more recent cohorts publish a little more than prior cohorts?  

First, as noted, 73% of the faculty who posted their CVs completed postdoctoral fellowships, often with a neuroscientific focus. Second, colleagues and mentors may be telling these faculty that they need to publish more than two articles per year, leading them to find creative ways to publish six more articles in total than prior cohorts: many formed collaborative teams with untenured faculty at other institutions, or published short (1–2 page) commentaries or neuroscience pieces in addition to regular-length articles. Finally, the competition for jobs is much stronger than in the past: whereas 151 faculty in the present sample obtained a tenured position between 2000 and 2006, only 75 did so between 2010 and 2016, suggesting that open positions fell by half. This may advantage those unusual individuals who published a great deal in their predoctoral and subsequent years.

These findings lead to a question: What should the standards be for untenured faculty who work at institutions ranked lower than those examined here, and who are not fortunate enough to have access to untenured collaborators, a cadre of talented and motivated graduate students, or fMRI equipment? Given the data here, it seems like the standard of two articles per year still applies. 



February 19, 2024


AI-generated disproportioned rat genitalia makes its way into peer-reviewed journal

by Bob Yirka, Phys.org


The editors of the journal Frontiers in Cell and Developmental Biology have retracted a paper after readers pointed out that its supporting images had been improperly generated by an AI image generator. In the retraction notice, the editors state that "concerns were raised regarding the nature of its AI-generated figures."

In the article, which involved research surrounding stem cells in small mammals, the authors included images depicting rat anatomy that an AI system had clearly created. In one picture, a single rat appeared to have a penis and testicles that were larger than the rest of its body—not something that occurs in nature. Some of the accompanying text was also incomprehensible. Another image showed a rat cell that did not resemble the true structure of a rat cell.

The disproportioned images in the paper are likely to add to ongoing discussions in the science community surrounding the use of AI in generating text or imagery for use in technical papers—particularly those published in established journals.

In this case, it is not clear how such problematic images wound up in a peer-reviewed journal. The authors, a combined team from Hong Hui Hospital and Jiaotong University in China, did not try to hide the fact that they had used AI to create the images; they even credited Midjourney.

Some in the press have noted that Frontiers has a policy that allows for the use of AI-generated materials as long as their use is disclosed, which was the case in this instance. But the policy also notes that attempts must be made to fact-check anything produced by such systems, which clearly was not the case in this mix-up.

The editors at Frontiers initially posted a note on the paper claiming that the article had been corrected and that a new version would be published in short order. Not much later, the paper was retracted.

The mistakes made by the authors of the paper and the team at the journal that approved its publication are likely to be the first of many to come, though it is still not clear what changes will be required to prevent such mistakes from happening in the future.

© 2024 Science X Network



AI gone wild —

Scientists aghast at bizarre AI rat with huge genitals in peer-reviewed article

It's unclear how such egregiously bad images made it through peer review.

Beth Mole - Feb 15, 2024 11:16 pm UTC

[Photo: An actual laboratory rat, who is intrigued.]

Appall and scorn ripped through scientists' social media networks Thursday as several egregiously bad AI-generated figures circulated from a peer-reviewed article recently published in a reputable journal. Those figures—which the authors acknowledge in the article's text were made by Midjourney—are all uninterpretable. They contain gibberish text and, most strikingly, one includes an image of a rat with grotesquely large and bizarre genitals, as well as a text label of "dck."

[AI-generated Figure 1 of the paper, which is supposed to show spermatogonial stem cells isolated, purified, and cultured from rat testes.]

The article in question is titled "Cellular functions of spermatogonial stem cells in relation to JAK/STAT signaling pathway," which was authored by three researchers in China, including the corresponding author Dingjun Hao of Xi’an Honghui Hospital. It was published online Tuesday in the journal Frontiers in Cell and Developmental Biology.

Frontiers did not immediately respond to Ars' request for comment, but we will update this post with any response.

[Figure 2, which is supposed to be a diagram of the JAK-STAT signaling pathway.]

But the rat's package is far from the only problem. Figure 2 is less graphic but equally mangled. While it's intended to be a diagram of a complex signaling pathway, it instead is a jumbled mess. One scientific integrity expert questioned whether it provided an overly complicated explanation of "how to make a donut with colorful sprinkles." Like the first image, the diagram is rife with nonsense text and baffling images. Figure 3 is no better, offering a collage of small circular images that are densely annotated with gibberish. The image is supposed to provide visual representations of how the signaling pathway from Figure 2 regulates the biological properties of spermatogonial stem cells.

Some scientists online questioned whether the article's text was also AI-generated. One user noted that AI detection software determined that it was likely to be AI-generated; however, as Ars has reported previously, such software is unreliable.

[Figure 3, which is supposed to show the regulation of the biological properties of spermatogonial stem cells by the JAK/STAT signaling pathway.]

The images, while egregious examples, highlight a growing problem in scientific publishing. A scientist's success relies heavily on their publication record, with a large volume of publications, frequent publishing, and articles appearing in top-tier journals, all of which earn scientists more prestige. The system incentivizes less-than-scrupulous researchers to push through low-quality articles, which, in the era of AI chatbots, could potentially be generated with the help of AI. Researchers worry that the growing use of AI will make published research less trustworthy. As such, research journals have recently set new authorship guidelines for AI-generated text to try to address the problem. But for now, as the Frontiers article shows, there are clearly some gaps.


