Showing all 15 results
Peer reviewed
Paschalis Karakasis; Konstantinos I. Bougioukas; Konstantinos Pamporis; Nikolaos Fragakis; Anna-Bettina Haidich – Research Synthesis Methods, 2024
This study aimed to assess the methods and outcomes of A MeaSurement Tool to Assess systematic Reviews (AMSTAR) 2 appraisals in overviews of reviews (overviews) of interventions in the cardiovascular field and identify factors that are associated with these outcomes. MEDLINE, Scopus, and the Cochrane Database of Systematic Reviews were searched…
Descriptors: Human Body, Intervention, Literature Reviews, Measurement Techniques
Peer reviewed
Lu Qin; Shishun Zhao; Wenlai Guo; Tiejun Tong; Ke Yang – Research Synthesis Methods, 2024
The application of network meta-analysis is becoming increasingly widespread, and its successful implementation requires that the direct and indirect comparison results be consistent. Because of this, proper detection of inconsistency is often a key issue in network meta-analysis, as whether the results can be…
Descriptors: Meta Analysis, Network Analysis, Bayesian Statistics, Comparative Analysis
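The consistency requirement described in the abstract above can be illustrated with a minimal sketch (hypothetical numbers, not the detection method proposed in the article): on the log odds-ratio scale, the indirect A-versus-B estimate through a common comparator C is the difference of the A-C and B-C effects, and a z-test compares it with the direct estimate.

```python
import math

def inconsistency_z(direct, se_direct, d_ac, se_ac, d_bc, se_bc):
    """z-statistic comparing a direct A-vs-B effect with the indirect
    estimate formed through a common comparator C (log scale), assuming
    the three estimates are independent."""
    indirect = d_ac - d_bc
    se_indirect = math.sqrt(se_ac**2 + se_bc**2)
    diff = direct - indirect
    se_diff = math.sqrt(se_direct**2 + se_indirect**2)
    return diff / se_diff

# Hypothetical log odds ratios and standard errors
z = inconsistency_z(direct=0.50, se_direct=0.15,
                    d_ac=0.80, se_ac=0.12,
                    d_bc=0.40, se_bc=0.10)
print(round(z, 2))  # |z| > 1.96 would flag inconsistency at the 5% level
```

This is the basic "loop-based" check; Bayesian approaches such as the one the article concerns build on the same direct-versus-indirect contrast.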
Peer reviewed
Guido Schwarzer; Gerta Rücker; Cristina Semaca – Research Synthesis Methods, 2024
The "LFK" index has been promoted as an improved method to detect bias in meta-analysis. Putatively, its performance does not depend on the number of studies in the meta-analysis. We conducted a simulation study, comparing the "LFK" index test to three standard tests for funnel plot asymmetry in settings with smaller or larger…
Descriptors: Bias, Meta Analysis, Simulation, Evaluation Methods
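One of the standard funnel-plot asymmetry tests the "LFK" index is compared against is Egger's regression, which can be sketched as follows (hypothetical data; the article's own simulation design is not reproduced here): the standardized effect is regressed on precision, and an intercept far from zero suggests small-study effects.

```python
import numpy as np

def egger_intercept(effects, ses):
    """Egger regression: standardized effect on precision.
    A nonzero intercept suggests funnel-plot asymmetry."""
    effects = np.asarray(effects, float)
    ses = np.asarray(ses, float)
    z = effects / ses          # standardized effects
    prec = 1.0 / ses           # precision
    X = np.column_stack([np.ones_like(prec), prec])
    coef, *_ = np.linalg.lstsq(X, z, rcond=None)
    return coef[0]             # the intercept

# Hypothetical meta-analysis where small studies show larger effects
effects = [0.9, 0.7, 0.5, 0.35, 0.3]
ses = [0.40, 0.30, 0.20, 0.12, 0.10]
print(round(egger_intercept(effects, ses), 2))  # clearly positive here
```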
Peer reviewed
Yasushi Tsujimoto; Yusuke Tsutsumi; Yuki Kataoka; Akihiro Shiroshita; Orestis Efthimiou; Toshi A. Furukawa – Research Synthesis Methods, 2024
Meta-analyses examining dichotomous outcomes often include single-zero studies, where no events occur in intervention or control groups. These pose challenges, and several methods have been proposed to address them. A fixed continuity correction method has been shown to bias estimates, but it is frequently used because sometimes software (e.g.,…
Descriptors: Meta Analysis, Literature Reviews, Epidemiology, Error Correction
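The fixed continuity correction mentioned in the abstract above works roughly as sketched below (hypothetical counts; this illustrates the common software default, not the alternatives the article evaluates): 0.5 is added to every cell of a 2x2 table whenever a zero cell occurs, before computing the log odds ratio.

```python
import math

def log_or_cc(a, b, c, d, cc=0.5):
    """Log odds ratio with a fixed continuity correction added to every
    cell whenever any cell is zero (a common software default, which can
    bias estimates). a/b = events/non-events in one arm, c/d in the other."""
    if 0 in (a, b, c, d):
        a, b, c, d = a + cc, b + cc, c + cc, d + cc
    log_or = math.log((a * d) / (b * c))
    se = math.sqrt(1/a + 1/b + 1/c + 1/d)
    return log_or, se

# Hypothetical single-zero study: 0/20 events vs 4/20 events
log_or, se = log_or_cc(0, 20, 4, 16)
print(round(log_or, 2), round(se, 2))
```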
Peer reviewed
Jona Lilienthal; Sibylle Sturtz; Christoph Schürmann; Matthias Maiworm; Christian Röver; Tim Friede; Ralf Bender – Research Synthesis Methods, 2024
In Bayesian random-effects meta-analysis, the use of weakly informative prior distributions is of particular benefit in cases where only a few studies are included, a situation often encountered in health technology assessment (HTA). Suggestions for empirical prior distributions are available in the literature but it is unknown whether these are…
Descriptors: Bayesian Statistics, Meta Analysis, Health Sciences, Technology
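The few-studies setting described above can be made concrete with a minimal grid-approximation sketch (hypothetical data and a generic half-normal heterogeneity prior, not the empirical priors the article investigates): with a flat prior on the pooled effect, the conditional posterior of the effect given the heterogeneity is available in closed form, so only the heterogeneity needs numerical integration.

```python
import numpy as np

def bayes_re_meta(y, s, tau_scale=0.5, grid=np.linspace(0, 2, 401)):
    """Normal-normal random-effects meta-analysis with a flat prior on the
    pooled effect mu and a half-normal(tau_scale) prior on heterogeneity
    tau, evaluated on a grid (a minimal stand-in for MCMC)."""
    y, s = np.asarray(y, float), np.asarray(s, float)
    post = []
    for tau in grid:
        w = 1.0 / (s**2 + tau**2)
        mu_hat = np.sum(w * y) / np.sum(w)
        # marginal likelihood of tau with mu integrated out (flat prior)
        loglik = (0.5 * np.sum(np.log(w)) - 0.5 * np.log(np.sum(w))
                  - 0.5 * np.sum(w * (y - mu_hat)**2))
        logprior = -0.5 * (tau / tau_scale)**2   # half-normal kernel
        post.append((np.exp(loglik + logprior), mu_hat))
    p = np.array([t[0] for t in post])
    p /= p.sum()
    mus = np.array([t[1] for t in post])
    return float(np.sum(p * mus))                # posterior mean of mu

# Hypothetical HTA setting with only three studies (log hazard ratios)
print(round(bayes_re_meta([-0.3, -0.1, -0.5], [0.15, 0.20, 0.25]), 3))
```

With so few studies, the choice of `tau_scale` noticeably shifts the posterior, which is exactly why the calibration of weakly informative priors matters in HTA.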
Peer reviewed
Kylie E. Hunter; Mason Aberoumand; Sol Libesman; James X. Sotiropoulos; Jonathan G. Williams; Jannik Aagerup; Rui Wang; Ben W. Mol; Wentao Li; Angie Barba; Nipun Shrestha; Angela C. Webster; Anna Lene Seidler – Research Synthesis Methods, 2024
Increasing concerns about the trustworthiness of research have prompted calls to scrutinise studies' Individual Participant Data (IPD), but guidance on how to do this was lacking. To address this, we developed the IPD Integrity Tool to screen randomised controlled trials (RCTs) for integrity issues. Development of the tool involved a literature…
Descriptors: Integrity, Randomized Controlled Trials, Participant Characteristics, Computer Software
Peer reviewed
Caspar J. Van Lissa; Eli-Boaz Clapper; Rebecca Kuiper – Research Synthesis Methods, 2024
The product Bayes factor (PBF) synthesizes evidence for an informative hypothesis across heterogeneous replication studies. It can be used when fixed- or random-effects meta-analysis falls short: for example, when effect sizes are incomparable and cannot be pooled, or when studies diverge significantly in the populations, study designs, and…
Descriptors: Hypothesis Testing, Evaluation Methods, Replication (Evaluation), Sample Size
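The multiplicative aggregation behind the PBF can be shown in two lines (hypothetical per-study Bayes factors; computing each study's Bayes factor for an informative hypothesis is the substantive step the article addresses):

```python
import math

def product_bayes_factor(bfs):
    """Combine per-study Bayes factors for the same informative hypothesis
    by multiplication; values > 1 favour the hypothesis in that study."""
    return math.prod(bfs)

# Hypothetical: two supportive replications, one ambiguous
pbf = product_bayes_factor([3.0, 0.8, 5.0])
print(round(pbf, 2))
```

Because evidence multiplies rather than effect sizes pooling, the studies need not share a common effect-size metric.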
Peer reviewed
Yu-Kang Tu; Pei-Chun Lai; Yen-Ta Huang; James Hodges – Research Synthesis Methods, 2024
Network meta-analysis (NMA) incorporates all available evidence into a general statistical framework for comparing multiple treatments. Standard NMAs make three major assumptions, namely homogeneity, similarity, and consistency, and violating these assumptions threatens an NMA's validity. In this article, we suggest a graphical approach to…
Descriptors: Visualization, Meta Analysis, Comparative Analysis, Statistical Studies
Peer reviewed
Konstantinos I. Bougioukas; Paschalis Karakasis; Konstantinos Pamporis; Emmanouil Bouras; Anna-Bettina Haidich – Research Synthesis Methods, 2024
Systematic reviews (SRs) play an important role in healthcare decision-making practice. Assessing the overall confidence in the results of SRs using quality assessment tools, such as "A MeaSurement Tool to Assess Systematic Reviews 2" (AMSTAR 2), is crucial since not all SRs are conducted using the most rigorous methods. In this…
Descriptors: Programming Languages, Research Methodology, Decision Making, Medical Research
Peer reviewed
Yuan Tian; Xi Yang; Suhail A. Doi; Luis Furuya-Kanamori; Lifeng Lin; Joey S. W. Kwong; Chang Xu – Research Synthesis Methods, 2024
RobotReviewer is a tool for automatically assessing the risk of bias in randomized controlled trials, but there is limited evidence of its reliability. We evaluated the agreement between RobotReviewer and humans regarding the risk of bias assessment based on 1955 randomized controlled trials. The risk of bias in these trials was assessed via two…
Descriptors: Risk, Randomized Controlled Trials, Classification, Robotics
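Agreement between an automated tool and human raters on categorical risk-of-bias judgements is typically quantified with a chance-corrected statistic such as Cohen's kappa; the sketch below uses hypothetical ratings and is not necessarily the statistic used in the article.

```python
def cohens_kappa(r1, r2):
    """Cohen's kappa for two raters' categorical judgements: observed
    agreement corrected for the agreement expected by chance."""
    n = len(r1)
    labels = sorted(set(r1) | set(r2))
    po = sum(a == b for a, b in zip(r1, r2)) / n          # observed
    pe = sum((r1.count(l) / n) * (r2.count(l) / n) for l in labels)
    return (po - pe) / (1 - pe)

# Hypothetical 'low'/'high' risk-of-bias calls on eight trials
human = ['low', 'low', 'high', 'high', 'low', 'high', 'low', 'low']
robot = ['low', 'high', 'high', 'high', 'low', 'low', 'low', 'low']
print(round(cohens_kappa(human, robot), 2))
```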
Peer reviewed
Lars König; Steffen Zitzmann; Tim Fütterer; Diego G. Campos; Ronny Scherer; Martin Hecht – Research Synthesis Methods, 2024
Several AI-aided screening tools have emerged to tackle the ever-expanding body of literature. These tools employ active learning, where algorithms sort abstracts based on human feedback. However, researchers using these tools face a crucial dilemma: When should they stop screening without knowing the proportion of relevant studies? Although…
Descriptors: Artificial Intelligence, Psychological Studies, Researchers, Screening Tests
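One widely used heuristic for the stopping dilemma described above is to stop after a fixed number of consecutive irrelevant records in the prioritized order; the sketch below illustrates that rule with hypothetical labels and is only one of many proposed criteria, not the article's specific recommendations.

```python
def stop_after_consecutive_irrelevant(labels, k=50):
    """Heuristic stopping rule for prioritized screening: stop once k
    consecutive screened records are irrelevant (label 0). Returns the
    number of records screened, or len(labels) if the rule never fires."""
    run = 0
    for i, lab in enumerate(labels, start=1):
        run = 0 if lab else run + 1
        if run >= k:
            return i
    return len(labels)

# Hypothetical prioritized order: relevant records surface early
labels = [1] * 5 + [0, 1] * 3 + [0] * 60
print(stop_after_consecutive_irrelevant(labels, k=50))
```

The trade-off is explicit: a larger `k` raises expected recall but increases screening effort, and the rule gives no direct estimate of the proportion of relevant studies missed.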
Peer reviewed
Robert C. Lorenz; Mirjam Jenny; Anja Jacobs; Katja Matthias – Research Synthesis Methods, 2024
Conducting high-quality overviews of reviews (OoR) is time-consuming. Because the quality of systematic reviews (SRs) varies, it is necessary to critically appraise SRs when conducting an OoR. A well-established appraisal tool is A Measurement Tool to Assess Systematic Reviews (AMSTAR) 2, which takes about 15-32 min per application. To save time,…
Descriptors: Decision Making, Time Management, Evaluation Methods, Quality Assurance
Peer reviewed
Miriam Hattle; Joie Ensor; Katie Scandrett; Marienke van Middelkoop; Danielle A. van der Windt; Melanie A. Holden; Richard D. Riley – Research Synthesis Methods, 2024
Individual participant data (IPD) meta-analysis projects obtain, harmonise, and synthesise original data from multiple studies. Many IPD meta-analyses of randomised trials are initiated to identify treatment effect modifiers at the individual level, thus requiring statistical modelling of interactions between treatment effect and participant-level…
Descriptors: Meta Analysis, Randomized Controlled Trials, Outcomes of Treatment, Evaluation Methods
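The within-trial estimation of treatment-covariate interactions can be sketched as the second stage of a two-stage IPD meta-analysis (hypothetical per-trial estimates; the article's modelling recommendations are more involved): each trial yields its own interaction estimate, and these are pooled by inverse-variance weighting so that across-trial confounding is avoided.

```python
import numpy as np

def pool_interactions(betas, ses):
    """Second stage of a two-stage IPD meta-analysis: inverse-variance
    pooling of per-trial treatment-covariate interaction estimates,
    keeping the comparison within trials."""
    betas, ses = np.asarray(betas, float), np.asarray(ses, float)
    w = 1.0 / ses**2
    pooled = np.sum(w * betas) / np.sum(w)
    se = np.sqrt(1.0 / np.sum(w))
    return pooled, se

# Hypothetical per-trial interaction estimates (treatment x baseline severity)
pooled, se = pool_interactions([0.20, 0.05, 0.12], [0.10, 0.08, 0.12])
print(round(pooled, 3), round(se, 3))
```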
Peer reviewed
Zhipeng Hou; Elizabeth Tipton – Research Synthesis Methods, 2024
Literature screening is the process of identifying all relevant records from a pool of candidate paper records in systematic review, meta-analysis, and other research synthesis tasks. This process is time-consuming, expensive, and prone to human error. Screening prioritization methods attempt to help reviewers identify the most relevant records while…
Descriptors: Meta Analysis, Research Reports, Identification, Evaluation Methods
Peer reviewed
Conor O. Chandler; Irina Proskorovsky – Research Synthesis Methods, 2024
In health technology assessment, matching-adjusted indirect comparison (MAIC) is the most common method for pairwise comparisons that control for imbalances in baseline characteristics across trials. One of the primary challenges in MAIC is the need to properly account for the additional uncertainty introduced by the matching process. Limited…
Descriptors: Predictor Variables, Influence of Technology, Evaluation Methods, Methods Research
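The matching step at the heart of MAIC can be sketched for a single covariate (hypothetical ages and target mean; real MAIC balances several covariates at once and, as the abstract notes, must also propagate the matching uncertainty): each patient in the IPD trial receives a weight of the form exp(a * x), with the coefficient chosen so the weighted covariate mean equals the aggregate mean reported by the comparator trial.

```python
import math

def maic_weights(x, target_mean, tol=1e-10):
    """Matching-adjusted weights w_i = exp(a * x_i) chosen so that the
    weighted mean of covariate x in the IPD trial equals the aggregate
    target_mean of the comparator trial (single-covariate sketch).
    Solved by bisection: the weighted mean is monotone in a."""
    xc = [xi - target_mean for xi in x]   # center at the target
    def g(a):                             # weighted mean of centered x
        w = [math.exp(a * v) for v in xc]
        return sum(wi * v for wi, v in zip(w, xc)) / sum(w)
    lo, hi = -50.0, 50.0                  # bracket for the coefficient
    while hi - lo > tol:
        mid = 0.5 * (lo + hi)
        if g(mid) > 0:
            hi = mid
        else:
            lo = mid
    a = 0.5 * (lo + hi)
    return [math.exp(a * v) for v in xc]

# Hypothetical: trial population older (mean 62) than the target (58)
ages = [70, 65, 60, 58, 55, 64]
w = maic_weights(ages, target_mean=58)
wmean = sum(wi * a for wi, a in zip(w, ages)) / sum(w)
print(round(wmean, 3))  # weighted mean age matches the target
```

Naively treating the weights as fixed understates the variance of the adjusted comparison, which is the uncertainty-propagation problem the article takes up.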