

Evaluating Trust in Systematic Reviews

A look into the trustworthiness of systematic reviews in research.

Jack Wilkinson, Calvin Heal, Georgios A Antoniou, Ella Flemyng, Love Ahnström, Alessandra Alteri, Alison Avenell, Timothy Hugh Barker, David N Borg, Nicholas JL Brown, Rob Buhmann, Jose A Calvache, Rickard Carlsson, Lesley-Anne Carter, Aidan G Cashin, Sarah Cotterill, Kenneth Färnqvist, Michael C Ferraro, Steph Grohmann, Lyle C Gurrin, Jill A Hayden, Kylie E Hunter, Natalie Hyltse, Lukas Jung, Ashma Krishan, Silvy Laporte, Toby J Lasserson, David RT Laursen, Sarah Lensen, Wentao Li, Tianjing Li, Jianping Liu, Clara Locher, Zewen Lu, Andreas Lundh, Antonia Marsden, Gideon Meyerowitz-Katz, Ben W Mol, Zachary Munn, Florian Naudet, David Nunan, Neil E O’Connell, Natasha Olsson, Lisa Parker, Eleftheria Patetsini, Barbara Redman, Sarah Rhodes, Rachel Richardson, Martin Ringsten, Ewelina Rogozińska, Anna Lene Seidler, Kyle Sheldrick, Katie Stocking, Emma Sydenham, Hugh Thomas, Sofia Tsokani, Constant Vinatier, Colby J Vorland, Rui Wang, Bassel H Al Wattar, Florencia Weber, Stephanie Weibel, Madelon van Wely, Chang Xu, Lisa Bero, Jamie J Kirkham

― 6 min read


Image: Trust Issues in Research, investigating the reliability of clinical trial studies.

Systematic reviews are like detective work for researchers. They gather all the studies on a specific topic to see what the evidence says, rather like comparing notes after a big group project to check that everyone was on the same page. In these reviews, researchers prefer to include randomized controlled trials (RCTs), studies in which people are assigned to treatments by chance, because randomization helps keep the comparison fair.

The Importance of Validity

For a systematic review to be trustworthy, it must include RCTs that are valid. Validity means a study actually measures what it claims to measure. Imagine buying a fancy scale to weigh yourself, only to discover it is broken and reads 100 pounds too light: not valid. Researchers use Risk of Bias tools to assess whether flaws in a study's design or conduct could have distorted its results.

Trust No Longer a Default

However, things have changed. Recent investigations into RCTs have raised eyebrows. Some studies that made it into journals may not be telling the whole story. It’s like finding out that your group project partner secretly copied their part from a random website instead of doing their own work. When this happens, fake studies can end up guiding patient care, and that’s a problem.

What Are Problematic Studies?

Cochrane, a well-known organization in the research world, defines "problematic studies" as those with serious doubts about their trustworthiness. The doubts could stem from deliberate misconduct or simply from significant errors made during the study. Cochrane has decided that such studies should not be included in systematic reviews, which raises the big question: how do we figure out whether a study is trustworthy or problematic?

Developing a New Tool: INSPECT-SR

To tackle this issue, researchers are developing a new tool called INSPECT-SR (INveStigating ProblEmatic Clinical Trials in Systematic Reviews). The goal is a checklist that helps reviewers judge whether the RCTs in a review can be trusted. In Stage 1 of the project, the team compiled a long list of potential checks; each of these now has to be evaluated to decide which belong in the final tool.

The Research Study

In Stage 2 of the project, researchers attempted to apply 72 trustworthiness checks to RCTs included in 50 Cochrane Reviews. They wanted to see how practical the checks were to perform and what impact applying them had on the results of the reviews. This mattered because if the checks are too complicated or take too long, reviewers won't want to use them.

Who Did the Checking?

The INSPECT-SR working group included a mix of research experts. They invited several reviewers to help assess the studies. These reviewers didn’t need to have special qualifications in research integrity, because the idea was to see how everyday reviewers would handle the checks.

Selecting Reviews and Trials

To select the Cochrane Reviews, the researchers looked for recently published reviews that met certain criteria, excluding any review with which an assessor had a connection, to avoid conflicts of interest. They also restricted the assessment to meta-analyses containing no more than five RCTs.

Extracting Data and Trustworthiness Assessment

Reviewers were given a form to fill out while examining each study. They looked at things such as whether the results were clearly reported and whether there were any warning signs about the study's authors. They could use available software to help with some checks, but there was no requirement to use any particular program.

How Did the Reviews Go?

During this process, assessors worked through each RCT check by check, evaluating items such as whether the study's data had been shared publicly. For every check they recorded whether it was passed, failed or possibly failed, or whether it was not feasible to complete. Once the checks were done, they recorded whether they had any concerns about the study's authenticity.
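To make the recording step concrete, here is a minimal Python sketch of the kind of per-trial record described above. The class, field names, and example check names are illustrative assumptions rather than the working group's actual data-collection form; only the outcome categories (passed, failed or possibly failed, not feasible) come from the study.

```python
from dataclasses import dataclass, field

# Outcome categories for a single trustworthiness check, mirroring the
# study: passed, failed or possibly failed, or not feasible to complete.
CHECK_OUTCOMES = {"pass", "fail", "possible_fail", "not_feasible"}

@dataclass
class TrialAssessment:
    """Illustrative record of one RCT's trustworthiness assessment."""
    trial_id: str
    # Maps a check name (e.g. "data_sharing") to its recorded outcome.
    check_results: dict = field(default_factory=dict)
    # Overall judgement recorded after all checks have been applied:
    # "no_concerns", "some_concerns", or "serious_concerns".
    authenticity_concern: str = "no_concerns"

    def record(self, check_name: str, outcome: str) -> None:
        """Record the outcome of one check, rejecting unknown categories."""
        if outcome not in CHECK_OUTCOMES:
            raise ValueError(f"Unknown outcome: {outcome}")
        self.check_results[check_name] = outcome

# Example with invented values:
assessment = TrialAssessment(trial_id="RCT-042")
assessment.record("prospective_registration", "fail")
assessment.record("data_sharing", "not_feasible")
assessment.authenticity_concern = "some_concerns"
```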

Results of the Checking Process

After looking at 95 RCTs from the 50 Cochrane Reviews, assessors had some concerns about the authenticity of about 25% of the studies and serious concerns about a further 6%; the rest raised no concerns. When trials with either some or serious concerns were excluded, 22% of the meta-analyses were left with no RCTs at all.

Feasibility of the Checks

Some checks turned out to be impractical. For instance, establishing whether a researcher had copied work from others, or whether findings had been reported correctly, took more effort than most reviewers could reasonably give. Other checks were easy to misunderstand or misinterpret, which may have made assessors overly sceptical in some cases. A balance clearly needs to be struck between thoroughness and practicality.

Identifying Problematic Studies

The checks were designed to highlight problematic studies, and the trials that raised concerns often fell short in areas such as prior registration (recording a trial in a public registry before it begins) and data sharing. Many RCTs had not been registered in advance, which raised doubts about their authenticity.
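As an illustration of how a simple check like this might be approached in code, the hypothetical helper below treats a trial as not prospectively registered when its registration date is missing or falls after recruitment began. It is a toy sketch under assumed inputs, not part of the INSPECT-SR checklist.

```python
from datetime import date
from typing import Optional

def prospectively_registered(registration_date: Optional[date],
                             recruitment_start: date) -> bool:
    """Return True if the trial was registered before recruitment began.

    A missing registration date counts as a failure of the check, the
    situation that raised concerns for many of the assessed RCTs.
    """
    if registration_date is None:
        return False
    return registration_date <= recruitment_start

# Example: a trial registered six months after recruitment started fails.
print(prospectively_registered(date(2020, 7, 1), date(2020, 1, 15)))  # False
```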

The Need for Caution

While the findings might sound alarming, it is important to keep the study's limitations in mind. Assessment was restricted to meta-analyses containing no more than five RCTs, which distorts the apparent impact of the checks and could inflate the proportion of reviews left with no usable studies after the checks were applied.
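To see why that restriction matters, the short sketch below reproduces the kind of impact calculation the study reports: after excluding flagged trials, count how many meta-analyses have nothing left. The data are invented; only the arithmetic mirrors the study's approach.

```python
# Invented example data: each meta-analysis maps to the RCTs it includes.
meta_analyses = {
    "review_01": ["rct_a", "rct_b", "rct_c"],
    "review_02": ["rct_d"],
    "review_03": ["rct_e", "rct_f"],
}

# Trials flagged with some or serious authenticity concerns.
flagged = {"rct_d", "rct_e", "rct_f"}

# Count meta-analyses in which every included trial was flagged.
emptied = sum(
    1 for trials in meta_analyses.values()
    if all(trial in flagged for trial in trials)
)
proportion = emptied / len(meta_analyses)
print(f"{emptied} of {len(meta_analyses)} meta-analyses "
      f"({proportion:.0%}) would have no remaining RCTs")
```

With only a handful of trials in each meta-analysis, excluding even one or two can empty an entire analysis, which is one reason the five-or-fewer restriction distorts the apparent impact of the checks.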

Looking Ahead: Development of INSPECT-SR

The results from this study, combined with a subsequent Delphi consensus process, will shape the final INSPECT-SR tool. The researchers aim to produce a practical set of checks that systematic reviewers can apply without feeling overwhelmed.

Conclusion

Overall, the study shows that checking the trustworthiness of RCTs is essential. Problematic studies are not as rare as one might hope, and they do not appear to be flagged by standard Risk of Bias assessment. In the age of information, keeping research reliable is as important as making sure our grocery lists don't include pizza rolls when we're on a diet. The path forward involves refining the checks, gathering feedback, and building a tool that can actually be used in everyday review work.

The Future of Trustworthiness in Research

This marks the beginning of a new era where the honesty and reliability of studies will be scrutinized more closely. As researchers keep working on making systematic reviews better, it’s like polishing a shiny apple—it may take some effort, but it results in something everyone can trust. And who doesn’t want to bite into a trustworthy apple?

Let’s stay tuned for what’s next in the world of research and how these new tools will help keep the science community accountable. After all, when it comes to health and medicine, we all want to make sure we’re getting the real deal!

Original Source

Title: Assessing the feasibility and impact of clinical trial trustworthiness checks via an application to Cochrane Reviews: Stage 2 of the INSPECT-SR project

Abstract:

Background: The aim of the INSPECT-SR project is to develop a tool to identify problematic RCTs in systematic reviews. In Stage 1 of the project, a list of potential trustworthiness checks was created. The checks on this list must be evaluated to determine which should be included in the INSPECT-SR tool.

Methods: We attempted to apply 72 trustworthiness checks to RCTs in 50 Cochrane Reviews. For each, we recorded whether the check was passed, failed or possibly failed, or whether it was not feasible to complete the check. Following application of the checks, we recorded whether we had concerns about the authenticity of each RCT. We repeated each meta-analysis after removing RCTs flagged by each check, and again after removing RCTs where we had concerns about authenticity, to estimate the impact of trustworthiness assessment. Trustworthiness assessments were compared to Risk of Bias and GRADE assessments in the reviews.

Results: 95 RCTs were assessed. Following application of the checks, assessors had some or serious concerns about the authenticity of 25% and 6% of the RCTs, respectively. Removing RCTs with either some or serious concerns resulted in 22% of meta-analyses having no remaining RCTs. However, many checks proved difficult to understand or implement, which may have led to unwarranted scepticism in some instances. Furthermore, we restricted assessment to meta-analyses with no more than 5 RCTs, which will distort the impact on results. No relationship was identified between trustworthiness assessment and Risk of Bias or GRADE.

Conclusions: This study supports the case for routine trustworthiness assessment in systematic reviews, as problematic studies do not appear to be flagged by Risk of Bias assessment. The study produced evidence on the feasibility and impact of trustworthiness checks. These results will be used, in conjunction with those from a subsequent Delphi process, to determine which checks should be included in the INSPECT-SR tool.

Plain language summary: Systematic reviews collate evidence from randomised controlled trials (RCTs) to find out whether health interventions are safe and effective. However, it is now recognised that the findings of some RCTs are not genuine, and some of these studies appear to have been fabricated. Various checks for these "problematic" RCTs have been proposed, but it is necessary to evaluate these checks to find out which are useful and which are feasible. We applied a comprehensive list of "trustworthiness checks" to 95 RCTs in 50 systematic reviews to learn more about them, and to see how often performing the checks would lead us to classify RCTs as being potentially inauthentic. We found that applying the checks led to concerns about the authenticity of around 1 in 3 RCTs. However, we found that many of the checks were difficult to perform and could have been misinterpreted. This might have led us to be overly sceptical in some cases. The findings from this study will be used, alongside other evidence, to decide which of these checks should be performed routinely to try to identify problematic RCTs, to stop them from being mistaken for genuine studies and potentially being used to inform healthcare decisions.

What is new:
- An extensive list of potential checks for assessing study trustworthiness was assessed via an application to 95 randomised controlled trials (RCTs) in 50 Cochrane Reviews.
- Following application of the checks, assessors had concerns about the authenticity of 32% of the RCTs.
- If these RCTs were excluded, 22% of meta-analyses would have no remaining RCTs.
- However, the study showed that some checks were frequently infeasible, and others could be easily misunderstood or misinterpreted.
- The study restricted assessment to meta-analyses including five or fewer RCTs, which might distort the impact of applying the checks.

Authors: Jack Wilkinson, Calvin Heal, Georgios A Antoniou, Ella Flemyng, Love Ahnström, Alessandra Alteri, Alison Avenell, Timothy Hugh Barker, David N Borg, Nicholas JL Brown, Rob Buhmann, Jose A Calvache, Rickard Carlsson, Lesley-Anne Carter, Aidan G Cashin, Sarah Cotterill, Kenneth Färnqvist, Michael C Ferraro, Steph Grohmann, Lyle C Gurrin, Jill A Hayden, Kylie E Hunter, Natalie Hyltse, Lukas Jung, Ashma Krishan, Silvy Laporte, Toby J Lasserson, David RT Laursen, Sarah Lensen, Wentao Li, Tianjing Li, Jianping Liu, Clara Locher, Zewen Lu, Andreas Lundh, Antonia Marsden, Gideon Meyerowitz-Katz, Ben W Mol, Zachary Munn, Florian Naudet, David Nunan, Neil E O’Connell, Natasha Olsson, Lisa Parker, Eleftheria Patetsini, Barbara Redman, Sarah Rhodes, Rachel Richardson, Martin Ringsten, Ewelina Rogozińska, Anna Lene Seidler, Kyle Sheldrick, Katie Stocking, Emma Sydenham, Hugh Thomas, Sofia Tsokani, Constant Vinatier, Colby J Vorland, Rui Wang, Bassel H Al Wattar, Florencia Weber, Stephanie Weibel, Madelon van Wely, Chang Xu, Lisa Bero, Jamie J Kirkham

Last Update: 2024-12-20

Language: English

Source URL: https://www.medrxiv.org/content/10.1101/2024.11.25.24316905

Source PDF: https://www.medrxiv.org/content/10.1101/2024.11.25.24316905.full.pdf

Licence: https://creativecommons.org/licenses/by/4.0/

Changes: This summary was created with assistance from AI and may have inaccuracies. For accurate information, please refer to the original source documents linked here.

Thank you to medRxiv for use of its open access interoperability.
