Improving Peer Review: A Study on Structure
A trial shows structured questions can improve peer review consistency and quality.
Journal peer review is the process that helps decide whether a research study should be published. Editors receive papers from researchers and send them to other experts (reviewers) to examine. Reviewers check whether the study's methods, data, and conclusions are sound before giving a recommendation: accept the paper as is, request changes, or reject it outright.
Problems with Peer Review
Research has shown that the peer review system has many issues. Reviewers sometimes miss important problems with the research methods. They may not interpret results accurately, misuse references, or fail to report necessary details that would allow others to replicate the study. There can also be a lack of information needed to judge the study's quality or bias.
Many studies have found that different reviewers often disagree with each other. One analysis, for instance, found an agreement rate of only 34%. More recent data showed that in one round of reviews across many journals, only about 30% of reviewer pairs agreed with each other.
Part of this disagreement may stem from the lack of clear questions that reviewers are expected to answer: without specific guidelines, reviewers can form very different opinions of the same paper's quality.
The Elsevier Pilot Study
In August 2022, Elsevier ran a trial of a set of structured questions that reviewers could use when evaluating submissions. The pilot covered 220 journals across different fields and impact factor quartiles; the analysis drew on a random 10% sample of those journals, 23 in total, comprising 107 manuscripts that each received two reviewer reports in the pilot's first two months. The questions were meant to guide reviewers but were not mandatory, so reviewers could skip any question they did not want to answer.
The goals of this pilot study included checking how reviewers used the new questions, looking at how much agreement there was between reviewers, comparing agreement rates to those journals' past performance, and improving the question set.
Review Process Overview
Reviewers were asked to answer nine specific questions about the research under review, covering points such as whether the study's objectives were clear, whether the methods were described in enough detail, and whether the results were interpreted correctly. Eight of the questions had open-ended answer fields, while the ninth, on language editing, offered only a yes/no choice. After these questions, reviewers could also leave additional comments for the authors and for the editors.
Reviewers were also expected to give a final recommendation, such as whether the paper should be accepted, revised, or rejected.
Analyzing Reviewer Responses
The study analyzed how reviewers answered the structured questions and counted the number of words in their responses, noting how many reviewers answered the questions in place and how many instead directed the editor to their comments section.
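The word-count analysis described above can be sketched in a few lines. The answers below are invented for illustration (the pilot's real data is not reproduced here); the median mirrors the kind of summary statistic the study reports.

```python
import statistics

# Illustrative sketch with made-up reviewer answers: count the words in each
# structured-question answer, then summarize the counts with the median.
answers = [
    "The methods are described in sufficient detail to replicate the study.",
    "No.",
    "Sample size justification is missing and the statistical model is unclear.",
]

word_counts = [len(a.split()) for a in answers]
print("word counts:", word_counts)          # [11, 1, 11]
print("median:", statistics.median(word_counts))  # 11
```

In the actual pilot this kind of tally was computed per question, which is how the study could report, for example, that the longest answers concerned the reporting of methods.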
Comment sections showed varied responses. Some reviewers used the space to add on to their answers, while others copied complete review reports they had prepared earlier.
Comments from Reviewers
About two thirds of reviewers used the Comments-to-Author section to add their thoughts, and of those, many crafted what resembled traditional review reports, often containing summaries and detailed feedback. The word counts of these comments varied considerably, with traditional reports running longer than the structured answers alone.
When looking at how well these comment sections covered the structured questions, most traditional reviews addressed four or five out of the nine questions, usually focusing on methods and interpretations rather than strengths or limitations.
Notably, reviewers who filled out comments for the authors were also more likely to leave notes for the editors.
Reviewing Agreement Among Reviewers
The study looked closely at how much the two reviewers of each manuscript agreed on the structured questions. Agreement was highest (72%, counting partial agreement) when assessing the manuscript's flow and structure, and lowest when assessing whether the interpretation of results was supported by the data (53%) and whether the statistical analyses were appropriate and reported in sufficient detail (52%).
For the final recommendations, reviewers agreed with each other about 41% of the time, a statistically significant improvement (P=0.0275) over the 31% agreement those journals had recorded from 2019 to 2021. However, no notable differences in agreement emerged by field of study or journal impact factor.
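The agreement rate above is a simple percent-agreement statistic. A minimal sketch, using invented recommendation pairs (each manuscript in the pilot had exactly two reviewer reports, and agreement meant an exact match of recommendation choice):

```python
# Hypothetical data: each tuple holds the two reviewers' final
# recommendations for one manuscript.
pairs = [
    ("accept", "accept"),
    ("minor revision", "major revision"),
    ("reject", "reject"),
    ("minor revision", "minor revision"),
    ("accept", "reject"),
]

def percent_agreement(pairs):
    """Share of manuscripts whose two reviewers chose identical recommendations."""
    matches = sum(1 for a, b in pairs if a == b)
    return matches / len(pairs)

print(f"{percent_agreement(pairs):.0%}")  # 3 of 5 matching pairs -> 60%
```

Note that percent agreement does not correct for chance; chance-corrected measures such as Cohen's kappa are often reported alongside it in inter-rater studies.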
Refining the Review Questions
After analyzing the results of the pilot study, suggestions were made to improve the structured questions. Initially, some questions were worded in a way that made it unclear what a "yes" or "no" answer meant. Therefore, the questions were rephrased to have consistent language.
New questions were also created, targeting standard sections of a paper. The goal was to have a clear and simplified way for reviewers to assess papers, with a better mix of yes/no and open-text options to allow for more constructive feedback.
Conclusion on Structured Peer Review
The pilot study showed that using structured peer review questions was well received by reviewers. They responded to the questions adequately, and the rates of agreement among them were better than in traditional peer reviews.
It's recommended that journals using structured peer review inform their reviewers about the questions early on, so they can keep them in mind when assessing papers. This approach could create a more consistent and efficient review process, ultimately leading to better decisions by editors and improved papers from authors.
Encouraging authors to use the same questions for self-evaluation may also help them enhance their submissions before sending them for review.
Overall, structured peer review could provide a clearer framework, leading to fairer assessments and improvements in research quality.
Title: Structured Peer Review: Pilot results from 23 Elsevier Journals
Abstract: Background: Reviewers rarely comment on the same aspects of a manuscript, making it difficult to properly assess manuscript quality and the quality of the peer review process. The goal of this pilot study was to evaluate the implementation of structured peer review by: 1) exploring if and how reviewers answered structured peer review questions, 2) analysing reviewer agreement, 3) comparing that agreement to agreement before implementation of structured peer review, and 4) further enhancing the piloted set of structured peer review questions. Methods: Structured peer review consisting of 9 questions was piloted in August 2022 in 220 Elsevier journals. We randomly selected 10% of these journals across all fields and IF quartiles and included manuscripts that received 2 reviewer reports in the first 2 months of the pilot, leaving us with 107 manuscripts belonging to 23 journals. Eight questions had open-ended fields, while the ninth question (on language editing) had only a yes/no option. Reviewers could also leave Comments-to-Author and Comments-to-Editor. Answers were qualitatively analysed by two raters independently. Results: Almost all reviewers (n=196, 92%) filled out the answers to all questions even though these questions were not mandatory in the system. The longest answer (Md 27 words, IQR 11 to 68) was for reporting methods in sufficient detail for replicability or reproducibility. Reviewers had the highest (partial) agreement (72%) for assessing the flow and structure of the manuscript, and the lowest for assessing whether the interpretation of results is supported by data (53%) and whether statistical analyses were appropriate and reported in sufficient detail (52%). Two thirds of reviewers (n=145, 68%) filled out the Comments-to-Author section, of which 105 (49%) resembled traditional peer review reports. Such reports covered a Md of 4 (IQR 3 to 5) topics addressed by the structured questions.
Absolute agreement regarding final recommendations (exact match of recommendation choice) was 41%, which was higher than what those journals had in the period of 2019 to 2021 (31% agreement, P=0.0275). Conclusions: Our preliminary results indicate that reviewers adapted successfully to the new review format and answered more topics than they covered in their traditional reports. Individual question analysis indicated the highest disagreement regarding interpretation of results and the conduct and reporting of statistical analyses. While structured peer review did lead to improvement in agreement on reviewers' final recommendations, this was not a randomized trial, and further studies should be done to corroborate this. Further research is also needed to determine whether structured peer review leads to greater knowledge transfer or better improvement of manuscripts.
Authors: Mario Malički, B. Mehmani
Last Update: 2024-02-04 00:00:00
Language: English
Source URL: https://www.biorxiv.org/content/10.1101/2024.02.01.578440
Source PDF: https://www.biorxiv.org/content/10.1101/2024.02.01.578440.full.pdf
Licence: https://creativecommons.org/licenses/by/4.0/
Changes: This summary was created with assistance from AI and may have inaccuracies. For accurate information, please refer to the original source documents linked here.
Thank you to bioRxiv for use of its open access interoperability.