Showing all 15 results
Peer reviewed
Ledford, Jennifer R. – American Journal of Evaluation, 2018
Randomization of a large number of participants to different treatment groups is often not a feasible or preferable way to answer questions of immediate interest to professional practice. Single case designs (SCDs) are a class of research designs that are experimental in nature but require only a few participants, all of whom receive the…
Descriptors: Research Design, Randomized Controlled Trials, Experimental Groups, Control Groups
Wong, Vivian C.; Steiner, Peter M.; Anglin, Kylie L. – Grantee Submission, 2018
Given the widespread use of non-experimental (NE) methods for assessing program impacts, there is a strong need to know whether NE approaches yield causally valid results in field settings. In within-study comparison (WSC) designs, the researcher compares treatment effects from an NE with those obtained from a randomized experiment that shares the…
Descriptors: Evaluation Methods, Program Evaluation, Program Effectiveness, Comparative Analysis
Peer reviewed
Dong, Nianbo; Kelcey, Benjamin; Spybrook, Jessaca – Journal of Experimental Education, 2018
Researchers are often interested in whether the effects of an intervention differ conditional on individual- or group-moderator variables such as children's characteristics (e.g., gender), teacher's background (e.g., years of teaching), and school's characteristics (e.g., urbanity); that is, the researchers seek to examine for whom and under what…
Descriptors: Statistical Analysis, Randomized Controlled Trials, Intervention, Effect Size
Peer reviewed
Taber, Keith S. – Studies in Science Education, 2019
Experimental studies are often employed to test the effectiveness of teaching innovations such as new pedagogy, curriculum, or learning resources. This article offers guidance on good practice in developing research designs, and in drawing conclusions from published reports. Randomized control trials potentially support the use of statistical…
Descriptors: Instructional Innovation, Educational Research, Research Design, Research Methodology
Peer reviewed
Bates, B. T.; Dufek, J. S.; James, C. R.; Harry, J. R.; Eggleston, J. D. – Measurement in Physical Education and Exercise Science, 2016
We demonstrate the effect of sample and trial size on statistical outcomes for single-subject analyses (SSA) and group analyses (GA) for a frequently studied performance activity and common intervention. Fifty strides of walking data collected in two blocks of 25 trials for two shoe conditions were analyzed for samples of five, eight, 10, and 12…
Descriptors: Sample Size, Research Design, Statistical Analysis, Adults
Peer reviewed
Steiner, Peter M.; Wong, Vivian – Society for Research on Educational Effectiveness, 2016
Despite recent emphasis on the use of randomized control trials (RCTs) for evaluating education interventions, in most areas of education research, observational methods remain the dominant approach for assessing program effects. Over the last three decades, the within-study comparison (WSC) design has emerged as a method for evaluating the…
Descriptors: Randomized Controlled Trials, Comparative Analysis, Research Design, Evaluation Methods
Peer reviewed
Wong, Vivian C.; Steiner, Peter M. – Society for Research on Educational Effectiveness, 2015
Across the disciplines of economics, political science, public policy, and now, education, the randomized controlled trial (RCT) is the preferred methodology for establishing causal inference about program impacts. But randomized experiments are not always feasible because of ethical, political, and/or practical considerations, so non-experimental…
Descriptors: Research Methodology, Research Design, Comparative Analysis, Replication (Evaluation)
Peer reviewed
Kourea, Lefki; Lo, Ya-yu – International Journal of Research & Method in Education, 2016
Improving academic, behavioural, and social outcomes of students through empirical research has been a firm commitment among researchers, policy-makers, and other professionals in education across Europe and the United States (U.S.). To assist in building scientific evidence, executive bodies such as the European Commission and the Institute for…
Descriptors: Evidence Based Practice, Validity, Randomized Controlled Trials, Research Methodology
Peer reviewed
Rhoads, Christopher – Journal of Research on Educational Effectiveness, 2016
Experimental evaluations that involve the educational system usually involve a hierarchical structure (students are nested within classrooms that are nested within schools, etc.). Concerns about contamination, where research subjects receive certain features of an intervention intended for subjects in a different experimental group, have often led…
Descriptors: Educational Experiments, Error of Measurement, Research Design, Statistical Analysis
Peer reviewed
Robertson, Clare; Ramsay, Craig; Gurung, Tara; Mowatt, Graham; Pickard, Robert; Sharma, Pawana – Research Synthesis Methods, 2014
We describe our experience of using a modified version of the Cochrane risk of bias (RoB) tool for randomized and non-randomized comparative studies. Objectives: (1) to assess time to complete RoB assessment; (2) to assess inter-rater agreement; and (3) to explore the association between RoB and treatment effect size. Methods: Cochrane risk of…
Descriptors: Risk, Randomized Controlled Trials, Research Design, Comparative Analysis
Peer reviewed
Barrera-Osorio, Felipe; Filmer, Deon; McIntyre, Joe – Society for Research on Educational Effectiveness, 2014
Randomized controlled trials (RCTs) and regression discontinuity (RD) studies both provide estimates of causal effects. A major difference between the two is that RD only estimates local average treatment effects (LATE) near the cutoff point of the forcing variable. This has been cited as a drawback to RD designs (Cook & Wong, 2008).…
Descriptors: Randomized Controlled Trials, Regression (Statistics), Research Problems, Comparative Analysis
Peer reviewed
Wing, Coady; Cook, Thomas D. – Journal of Policy Analysis and Management, 2013
The sharp regression discontinuity design (RDD) has three key weaknesses compared to the randomized clinical trial (RCT). It has lower statistical power, it is more dependent on statistical modeling assumptions, and its treatment effect estimates are limited to the narrow subpopulation of cases immediately around the cutoff, which is rarely of…
Descriptors: Regression (Statistics), Research Design, Statistical Analysis, Research Problems
Peer reviewed
Pituch, Keenan A.; Whittaker, Tiffany A.; Chang, Wanchen – American Journal of Evaluation, 2016
Use of multivariate analysis (e.g., multivariate analysis of variance) is common when normally distributed outcomes are collected in intervention research. However, when mixed responses--a set of normal and binary outcomes--are collected, standard multivariate analyses are no longer suitable. While mixed responses are often obtained in…
Descriptors: Intervention, Multivariate Analysis, Mixed Methods Research, Models
Peer reviewed
Heppen, Jessica; Sorensen, Nicholas – Society for Research on Educational Effectiveness, 2014
The consequences of failing core academic courses during the first year of high school are dire. More students fail courses in ninth grade than in any other grade, and a disproportionate number of these students subsequently drop out (Herlihy, 2007). As shown in Chicago and elsewhere, academic performance in core courses during the first year of…
Descriptors: Algebra, Remedial Mathematics, Academic Failure, Credits
Peer reviewed
Eno, Jared; Heppen, Jessica – Society for Research on Educational Effectiveness, 2014
Algebra is considered a key gatekeeper for higher-level mathematics course-taking in high school and for college enrollment (Adelman, 2006; Gamoran & Hannigan, 2000). Yet, algebra pass rates are consistently low in many places (Higgins, 2008; Ham & Walker, 1999; Helfand, 2006), including Chicago Public Schools (CPS). This is of particular…
Descriptors: Algebra, Remedial Mathematics, Academic Failure, Credits