Showing all 9 results
Peer reviewed
Anthony Gambino – Society for Research on Educational Effectiveness, 2021
Analysis of symmetrically predicted endogenous subgroups (ASPES) is an approach to assessing heterogeneity in an intent-to-treat (ITT) effect from a randomized experiment when an intermediate variable (one that is measured after random assignment and before outcomes) is hypothesized to be related to the ITT effect, but is only measured in one group. For example,…
Descriptors: Randomized Controlled Trials, Prediction, Program Evaluation, Credibility
Peer reviewed
Timothy Lycurgus; Ben B. Hansen – Society for Research on Educational Effectiveness, 2022
Background: Efficacy trials in education often possess a motivating theory of change: how and why should the desired improvement in outcomes occur as a consequence of the intervention? In scenarios with repeated measurements, certain subgroups may be more or less likely to manifest a treatment effect; the theory of change (TOC) provides guidance…
Descriptors: Educational Change, Educational Research, Intervention, Efficiency
Peer reviewed
Ding, Peng; Feller, Avi; Miratrix, Luke – Society for Research on Educational Effectiveness, 2015
Recent literature has underscored the critical role of treatment effect variation in estimating and understanding causal effects. This approach, however, is in contrast to much of the foundational research on causal inference. Linear models, for example, classically rely on constant treatment effect assumptions, or treatment effects defined by…
Descriptors: Causal Models, Randomized Controlled Trials, Statistical Analysis, Evaluation Methods
Peer reviewed
Steiner, Peter M.; Wong, Vivian – Society for Research on Educational Effectiveness, 2016
Despite recent emphasis on the use of randomized controlled trials (RCTs) for evaluating education interventions, in most areas of education research observational methods remain the dominant approach for assessing program effects. Over the last three decades, the within-study comparison (WSC) design has emerged as a method for evaluating the…
Descriptors: Randomized Controlled Trials, Comparative Analysis, Research Design, Evaluation Methods
Peer reviewed
Tipton, Elizabeth; Yeager, David; Iachan, Ronaldo – Society for Research on Educational Effectiveness, 2016
Questions regarding the generalizability of results from educational experiments have been at the forefront of methods development over the past five years. This work has focused on methods for estimating the effect of an intervention in a well-defined inference population (e.g., Tipton, 2013; O'Muircheartaigh and Hedges, 2014); methods for…
Descriptors: Behavioral Sciences, Behavioral Science Research, Intervention, Educational Experiments
Peer reviewed
Zamarro, Gema; Anderson, Kaitlin; Steele, Jennifer; Miller, Trey – Society for Research on Educational Effectiveness, 2016
The purpose of this study is to examine the performance of different methods (inverse probability weighting and estimation of informative bounds) for controlling for differential attrition, by comparing their results across two datasets: an original dataset from Portland Public Schools (PPS) subject to high rates of differential…
Descriptors: Data Analysis, Student Attrition, Evaluation Methods, Evaluation Research
Peer reviewed
Tipton, Elizabeth; Fellers, Lauren; Caverly, Sarah; Vaden-Kiernan, Michael; Borman, Geoffrey; Sullivan, Kate; Ruiz de Castillo, Veronica – Society for Research on Educational Effectiveness, 2015
Randomized experiments are commonly used to evaluate whether particular interventions improve student achievement. While these experiments can establish that a treatment actually "causes" changes, the participants are typically not randomly selected from a well-defined population, and the results therefore do not readily generalize. Three…
Descriptors: Site Selection, Randomized Controlled Trials, Educational Experiments, Research Methodology
Peer reviewed
Tipton, Elizabeth; Hallberg, Kelly; Hedges, Larry V.; Chan, Wendy – Society for Research on Educational Effectiveness, 2015
Policy-makers are frequently interested in understanding how effective a particular intervention may be for a specific (and often broad) population. In many fields, particularly education and social welfare, the ideal form of these evaluations is a large-scale randomized experiment. Recent research has highlighted that sites in these large-scale…
Descriptors: Generalization, Program Effectiveness, Sample Size, Computation
Peer reviewed
Bell, Stephen H.; Puma, Michael J.; Cook, Ronna J.; Heid, Camilla A. – Society for Research on Educational Effectiveness, 2013
Access to Head Start has been shown to improve children's preschool experiences and school readiness on selected factors through the end of 1st grade. Two more years of follow-up, through the end of 3rd grade, can now be examined to determine whether these effects continue into the middle elementary grades. The statistical design and impact…
Descriptors: Evaluation Methods, Data Analysis, Randomized Controlled Trials, Sampling