Publication Date
In 2025: 0
Since 2024: 0
Since 2021 (last 5 years): 0
Since 2016 (last 10 years): 10
Since 2006 (last 20 years): 17
Showing 1 to 15 of 17 results
Peer reviewed
Direct link
Moerbeek, Mirjam; Safarkhani, Maryam – Journal of Educational and Behavioral Statistics, 2018
Data from cluster randomized trials do not always have a pure hierarchical structure. For instance, students are nested within schools that may be crossed by neighborhoods, and soldiers are nested within army units that may be crossed by mental health-care professionals. It is important that the random cross-classification is taken into account…
Descriptors: Randomized Controlled Trials, Classification, Research Methodology, Military Personnel
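The cross-classified structure described above can be made concrete with a small simulation (all variance components and sample sizes here are illustrative assumptions, not values from the paper): each student belongs to one school and one neighborhood, the two factors are crossed rather than nested, and treatment is randomized at the school level.

    import numpy as np

    rng = np.random.default_rng(42)
    n_schools, n_neighborhoods, n_students = 20, 15, 600

    # Random effects for the two crossed classifications (assumed SDs)
    school_effect = rng.normal(0, 0.5, n_schools)
    neigh_effect = rng.normal(0, 0.3, n_neighborhoods)

    # Each student has a school AND a neighborhood; the factors are crossed
    school = rng.integers(0, n_schools, n_students)
    neigh = rng.integers(0, n_neighborhoods, n_students)

    # Cluster randomization: whole schools are assigned to treatment
    treated_schools = rng.permutation(n_schools) < n_schools // 2
    treat = treated_schools[school].astype(float)

    y = 0.4 * treat + school_effect[school] + neigh_effect[neigh] + rng.normal(0, 1, n_students)

    # A naive comparison of means ignores both random factors; a cross-classified
    # multilevel model would attribute outcome variance to each of them.
    print(y[treat == 1].mean() - y[treat == 0].mean())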
Peer reviewed
PDF on ERIC (download full text)
What Works Clearinghouse, 2018
Underlying all What Works Clearinghouse (WWC) products are WWC Study Review Guides, which are intended for use by WWC certified reviewers to assess studies against the WWC evidence standards. As part of an ongoing effort to increase transparency, promote collaboration, and encourage widespread use of the WWC standards, the Institute of Education…
Descriptors: Guides, Research Design, Research Methodology, Program Evaluation
Peer reviewed
Direct link
Sundell, Knut; Åhsberg, Elizabeth – Research on Social Work Practice, 2018
Objective: There is substantial evidence that poorly designed and reported research can mislead decision making in clinical care. This review investigates the methodological quality of Swedish trials of a wide array of psychological and social interventions. Method: The review includes 302 articles published in peer-reviewed journals during…
Descriptors: Randomized Controlled Trials, Pretests Posttests, Control Groups, Intervention
Peer reviewed
PDF on ERIC (download full text)
Cole, Russell; Deke, John; Seftor, Neil – Society for Research on Educational Effectiveness, 2016
The What Works Clearinghouse (WWC) maintains design standards to identify rigorous, internally valid education research. As education researchers advance new methodologies, the WWC must revise its standards to include an assessment of the new designs. Recently, the WWC has revised standards for two emerging study designs: regression discontinuity…
Descriptors: Educational Research, Research Design, Regression (Statistics), Multivariate Analysis
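For readers unfamiliar with the design, here is a minimal sketch of a sharp regression discontinuity estimate on simulated data (the cutoff, bandwidth, and effect size are arbitrary choices for illustration; this is not the WWC's review procedure):

    import numpy as np

    rng = np.random.default_rng(0)
    n = 2000
    score = rng.uniform(-1, 1, n)          # running variable, cutoff at 0
    treat = (score >= 0).astype(float)     # sharp assignment rule
    y = 0.5 * score + 0.3 * treat + rng.normal(0, 0.2, n)

    h = 0.25                               # bandwidth around the cutoff
    left = (score < 0) & (score > -h)
    right = (score >= 0) & (score < h)

    # Local linear fit on each side; the gap between the two intercepts
    # at the cutoff estimates the treatment effect (~0.3 here).
    b_left = np.polyfit(score[left], y[left], 1)
    b_right = np.polyfit(score[right], y[right], 1)
    print(np.polyval(b_right, 0.0) - np.polyval(b_left, 0.0))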
Peer reviewed
PDF on ERIC (download full text)
Steiner, Peter M.; Wong, Vivian – Society for Research on Educational Effectiveness, 2016
Despite recent emphasis on the use of randomized controlled trials (RCTs) for evaluating education interventions, in most areas of education research, observational methods remain the dominant approach for assessing program effects. Over the last three decades, the within-study comparison (WSC) design has emerged as a method for evaluating the…
Descriptors: Randomized Controlled Trials, Comparative Analysis, Research Design, Evaluation Methods
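The logic of a within-study comparison can be sketched in a few lines (the simulated data and effect sizes are assumptions for illustration, not the authors' implementation): run an experimental benchmark and a non-experimental analysis side by side, and read their difference as the bias of the non-experimental method.

    import numpy as np

    rng = np.random.default_rng(1)
    n = 5000
    x = rng.normal(0, 1, n)            # covariate that drives self-selection

    # Experimental arm: random assignment yields an unbiased benchmark
    z_rct = rng.integers(0, 2, n).astype(float)
    y_rct = 0.5 * z_rct + x + rng.normal(0, 1, n)
    benchmark = y_rct[z_rct == 1].mean() - y_rct[z_rct == 0].mean()

    # Non-experimental arm: selection on x biases the naive contrast
    z_obs = (x + rng.normal(0, 1, n) > 0).astype(float)
    y_obs = 0.5 * z_obs + x + rng.normal(0, 1, n)
    naive = y_obs[z_obs == 1].mean() - y_obs[z_obs == 0].mean()

    print(naive - benchmark)   # estimated bias of the non-experimental method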
Peer reviewed
PDF on ERIC (download full text)
Wong, Vivian C.; Steiner, Peter M. – Society for Research on Educational Effectiveness, 2015
Across the disciplines of economics, political science, public policy, and now, education, the randomized controlled trial (RCT) is the preferred methodology for establishing causal inference about program impacts. But randomized experiments are not always feasible because of ethical, political, and/or practical considerations, so non-experimental…
Descriptors: Research Methodology, Research Design, Comparative Analysis, Replication (Evaluation)
Peer reviewed
PDF on ERIC (download full text)
What Works Clearinghouse, 2016
This document provides step-by-step instructions on how to complete the Study Review Guide (SRG, Version S3, V2) for single-case designs (SCDs). Reviewers will complete an SRG for every What Works Clearinghouse (WWC) review. A completed SRG should be a reviewer's independent assessment of the study, relative to the criteria specified in the review…
Descriptors: Guides, Research Design, Research Methodology, Program Evaluation
Peer reviewed
Direct link
Debray, Thomas P. A.; Moons, Karel G. M.; van Valkenhoef, Gert; Efthimiou, Orestis; Hummel, Noemi; Groenwold, Rolf H. H.; Reitsma, Johannes B. – Research Synthesis Methods, 2015
Individual participant data (IPD) meta-analysis is an increasingly used approach for synthesizing and investigating treatment effect estimates. Over the past few years, numerous methods for conducting an IPD meta-analysis (IPD-MA) have been proposed, often making different assumptions and modeling choices while addressing a similar research…
Descriptors: Meta Analysis, Outcomes of Treatment, Research Methodology, Literature Reviews
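One widely used variant in this literature is the "two-stage" approach, sketched below with simulated trials (study sizes and the common effect are made-up numbers): each study's individual participant data are analyzed separately, and the resulting study-level estimates are then pooled by inverse-variance weighting.

    import numpy as np

    rng = np.random.default_rng(2)

    def study_estimate(n, true_effect):
        # Stage 1: analyze one study's individual participant data
        z = rng.integers(0, 2, n).astype(float)
        y = true_effect * z + rng.normal(0, 1, n)
        diff = y[z == 1].mean() - y[z == 0].mean()
        se = np.sqrt(y[z == 1].var(ddof=1) / (z == 1).sum()
                     + y[z == 0].var(ddof=1) / (z == 0).sum())
        return diff, se

    estimates = [study_estimate(n, 0.3) for n in (120, 250, 400)]

    # Stage 2: fixed-effect inverse-variance pooling of the study estimates
    effects = np.array([d for d, _ in estimates])
    weights = np.array([1 / se**2 for _, se in estimates])
    pooled = (weights * effects).sum() / weights.sum()
    print(pooled, 1 / np.sqrt(weights.sum()))   # pooled effect and its SE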
Peer reviewed
Direct link
Kourea, Lefki; Lo, Ya-yu – International Journal of Research & Method in Education, 2016
Improving academic, behavioural, and social outcomes of students through empirical research has been a firm commitment among researchers, policy-makers, and other professionals in education across Europe and the United States (U.S.). To assist in building scientific evidence, executive bodies such as the European Commission and the Institute for…
Descriptors: Evidence Based Practice, Validity, Randomized Controlled Trials, Research Methodology
Peer reviewed
Direct link
Weiss, Michael J.; Bloom, Howard S.; Verbitsky-Savitz, Natalya; Gupta, Himani; Vigil, Alma E.; Cullinan, Daniel N. – Journal of Research on Educational Effectiveness, 2017
Multisite trials, in which individuals are randomly assigned to alternative treatment arms within sites, offer an excellent opportunity to estimate the cross-site average effect of treatment assignment (intent to treat or ITT) "and" the amount by which this impact varies across sites. Although both of these statistics are substantively…
Descriptors: Randomized Controlled Trials, Evidence, Models, Intervention
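The two quantities the abstract emphasizes can be illustrated with a toy multisite trial (site counts, sample sizes, and variances below are assumptions, not the authors' data): the cross-site average ITT effect, and a simple method-of-moments estimate of how much the true site effects vary.

    import numpy as np

    rng = np.random.default_rng(3)
    n_sites, n_per_site = 30, 200
    true_effects = rng.normal(0.2, 0.15, n_sites)   # site effects genuinely vary

    site_est, site_var = [], []
    for b in true_effects:
        z = rng.integers(0, 2, n_per_site).astype(float)
        y = b * z + rng.normal(0, 1, n_per_site)
        site_est.append(y[z == 1].mean() - y[z == 0].mean())
        site_var.append(y[z == 1].var(ddof=1) / (z == 1).sum()
                        + y[z == 0].var(ddof=1) / (z == 0).sum())

    site_est, site_var = np.array(site_est), np.array(site_var)
    avg_itt = site_est.mean()                       # cross-site average ITT effect
    # Observed spread minus average sampling noise = cross-site impact variance
    tau2 = max(site_est.var(ddof=1) - site_var.mean(), 0.0)
    print(avg_itt, np.sqrt(tau2))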
Peer reviewed
PDF on ERIC (download full text)
Tipton, Elizabeth; Fellers, Lauren; Caverly, Sarah; Vaden-Kiernan, Michael; Borman, Geoffrey; Sullivan, Kate; Ruiz de Castillo, Veronica – Society for Research on Educational Effectiveness, 2015
Randomized experiments are commonly used to evaluate if particular interventions improve student achievement. While these experiments can establish that a treatment actually "causes" changes, typically the participants are not randomly selected from a well-defined population and therefore the results do not readily generalize. Three…
Descriptors: Site Selection, Randomized Controlled Trials, Educational Experiments, Research Methodology
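As a concrete (and deliberately simplified) version of the recruitment idea, the sketch below stratifies a hypothetical population of districts on one covariate and recruits sites from every stratum, so the achieved sample spans the population rather than clustering among convenient volunteers; the specific procedures the paper evaluates are more elaborate.

    import numpy as np

    rng = np.random.default_rng(4)
    poverty = rng.uniform(0, 1, 500)   # hypothetical district-level covariate

    # Split the population into quintile strata on the covariate
    strata = np.digitize(poverty, np.quantile(poverty, [0.2, 0.4, 0.6, 0.8]))

    # Recruit two sites per stratum so every part of the population is covered
    recruited = np.concatenate(
        [rng.choice(np.where(strata == s)[0], 2, replace=False) for s in range(5)]
    )
    print(sorted(recruited))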
Peer reviewed
Direct link
Crissinger, Bryan R. – Journal of Statistics Education, 2015
Most homework sets in statistics courses are constructed so that students concentrate or "mass" their practice on a certain topic in one problem set. Distributed practice homework sets include review problems in each set so that practice on a topic is distributed across problem sets. There is a body of research that points to the…
Descriptors: Undergraduate Students, Statistics, Homework, Randomized Controlled Trials
Peer reviewed
Direct link
Norris, Susan L.; Moher, David; Reeves, Barnaby C.; Shea, Beverley; Loke, Yoon; Garner, Sarah; Anderson, Laurie; Tugwell, Peter; Wells, George – Research Synthesis Methods, 2013
Background: Selective outcome and analysis reporting (SOR and SAR) occur when only a subset of outcomes measured and analyzed in a study is fully reported, and are an important source of potential bias. Key methodological issues: We describe what is known about the prevalence and effects of SOR and SAR in both randomized controlled trials (RCTs)…
Descriptors: Health Services, Intervention, Outcomes of Treatment, Research Methodology
Peer reviewed
PDF on ERIC (download full text)
Bloom, Howard S.; Porter, Kristin E.; Weiss, Michael J.; Raudenbush, Stephen – Society for Research on Educational Effectiveness, 2013
To date, evaluation research and policy analysis have focused mainly on average program impacts and paid little systematic attention to their variation. Recently, the growing number of multi-site randomized trials being planned and conducted makes it increasingly feasible to study "cross-site" variation in impacts. Important…
Descriptors: Research Methodology, Policy, Evaluation Research, Randomized Controlled Trials
Peer reviewed
Direct link
Buchanan, Taylor L.; Lohse, Keith R. – Measurement in Physical Education and Exercise Science, 2016
We surveyed researchers in the health and exercise sciences to explore different areas and magnitudes of bias in researchers' decision making. Participants were presented with scenarios (testing a central hypothesis with p = 0.06 or p = 0.04) in a random order and surveyed about what they would do in each scenario. Participants showed significant…
Descriptors: Researchers, Attitudes, Statistical Significance, Bias