Publication Date
  In 2025: 0
  Since 2024: 1
  Since 2021 (last 5 years): 3
  Since 2016 (last 10 years): 5
  Since 2006 (last 20 years): 8
Descriptor
  Error of Measurement: 9
  Intervention: 9
  Statistical Inference: 9
  Computation: 4
  Effect Size: 4
  Research Design: 4
  Simulation: 4
  Comparative Analysis: 3
  Educational Research: 3
  Evaluation Methods: 3
  Regression (Statistics): 3
Source
  National Center for Education…: 2
  Grantee Submission: 1
  Institute for Research on…: 1
  Journal of Experimental…: 1
  Journal of Research on…: 1
  MDRC: 1
  Studies in Second Language…: 1
  What Works Clearinghouse: 1
Author
  Bloom, Howard S.: 2
  Deke, John: 2
  Porter, Kristin E.: 2
  Reardon, Sean F.: 2
  Unlu, Fatih: 2
  Baek, Eunkyeng: 1
  Cimpian, Joseph R.: 1
  Finucane, Mariel: 1
  Klingbeil, David A.: 1
  Sturgell, Adelle K.: 1
  Van Norman, Ethan R.: 1
Publication Type
  Reports - Research: 6
  Journal Articles: 3
  Guides - Non-Classroom: 2
  Numerical/Quantitative Data: 2
  Information Analyses: 1
  Reports - Evaluative: 1
Education Level
  Early Childhood Education: 1
  Kindergarten: 1
  Primary Education: 1
Audience
  Researchers: 1
Location
  Tennessee: 1
Van Norman, Ethan R.; Klingbeil, David A.; Sturgell, Adelle K. – Grantee Submission, 2024
Single-case experimental designs (SCEDs) have been used with increasing frequency to identify evidence-based interventions in education. The purpose of this study was to explore how several procedural characteristics, including within-phase variability (i.e., measurement error), number of baseline observations, and number of intervention…
Descriptors: Research Design, Case Studies, Effect Size, Error of Measurement
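As a loose illustration of the kind of procedural characteristics the abstract mentions (not the study's own simulation), the Python sketch below varies within-phase variability and baseline length in a simple AB design and reports the resulting standardized mean-difference estimates; all numeric settings are assumptions.

# Illustrative sketch only (not the cited study's procedure): simulate a simple
# AB single-case design and see how within-phase variability (measurement error)
# and phase lengths affect a standardized mean-difference effect estimate.
# The true_shift, sigma, and phase-length values below are arbitrary assumptions.
import numpy as np

rng = np.random.default_rng(0)

def simulate_ab_effect(n_baseline, n_intervention, true_shift, sigma, reps=2000):
    """Average standardized mean difference across simulated AB series."""
    effects = []
    for _ in range(reps):
        baseline = rng.normal(0.0, sigma, n_baseline)
        intervention = rng.normal(true_shift, sigma, n_intervention)
        pooled_sd = np.sqrt((baseline.var(ddof=1) + intervention.var(ddof=1)) / 2)
        effects.append((intervention.mean() - baseline.mean()) / pooled_sd)
    return np.mean(effects), np.std(effects)

for sigma in (0.5, 1.0, 2.0):          # more within-phase variability...
    for n_base in (3, 5, 10):          # ...with shorter or longer baselines
        mean_d, sd_d = simulate_ab_effect(n_base, 10, true_shift=1.0, sigma=sigma)
        print(f"sigma={sigma:.1f}, baseline n={n_base}: d ~ {mean_d:.2f} (SD {sd_d:.2f})")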
Baek, Eunkyeng; Luo, Wen; Henri, Maria – Journal of Experimental Education, 2022
It is common to include multiple dependent variables (DVs) in single-case experimental design (SCED) meta-analyses. However, statistical issues associated with multiple DVs in the multilevel modeling approach (i.e., possible dependency of error, heterogeneous treatment effects, and heterogeneous error structures) have not been fully investigated.…
Descriptors: Meta Analysis, Hierarchical Linear Modeling, Comparative Analysis, Statistical Inference
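The sketch below (simulated data, assumed values) fits a simple two-level model with a random intercept per case for data containing two dependent variables per case. It does not model the cross-DV error dependency or heterogeneous error structures the article investigates; it only shows the basic multilevel setup such analyses start from.

# Minimal multilevel sketch for SCED-style data with two dependent variables per case.
# Simulated values are assumptions; only a random intercept per case is modeled.
import numpy as np
import pandas as pd
import statsmodels.formula.api as smf

rng = np.random.default_rng(4)
rows = []
for case in range(20):
    case_effect = rng.normal(0, 0.5)
    for dv in ("dv1", "dv2"):
        for phase in (0, 1):                       # 0 = baseline, 1 = intervention
            for _ in range(8):
                y = case_effect + 1.0 * phase + rng.normal(0, 1)
                rows.append({"case": case, "dv": dv, "phase": phase, "y": y})
df = pd.DataFrame(rows)

# Two-level model: observations nested within cases, fixed effects for phase and DV.
model = smf.mixedlm("y ~ phase + dv", data=df, groups=df["case"]).fit()
print(model.summary())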
Deke, John; Finucane, Mariel; Thal, Daniel – National Center for Education Evaluation and Regional Assistance, 2022
BASIE (Bayesian Interpretation of Estimates) is a framework for interpreting impact estimates from evaluations. It is an alternative to null hypothesis significance testing. This guide walks researchers through the key steps of applying BASIE, including selecting prior evidence, reporting impact estimates, interpreting impact estimates, and conducting sensitivity analyses. The guide…
Descriptors: Bayesian Statistics, Educational Research, Data Interpretation, Hypothesis Testing
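To illustrate the general Bayesian logic behind this kind of framework (not the guide's own worked example), the sketch below combines an assumed prior on the true effect with a hypothetical impact estimate via a normal-normal update and reports the posterior probability that the impact is positive. All numbers are made up.

# Minimal sketch of Bayesian interpretation of an impact estimate: combine a prior
# on the true effect, drawn from prior evidence, with a new study's estimate via a
# normal-normal conjugate update. Prior mean/SD, estimate, and SE are assumptions.
from scipy import stats

prior_mean, prior_sd = 0.03, 0.10     # e.g., distribution of past impact estimates
estimate, se = 0.15, 0.08             # the new study's impact estimate and its SE

prior_var, like_var = prior_sd**2, se**2
post_var = 1.0 / (1.0 / prior_var + 1.0 / like_var)
post_mean = post_var * (prior_mean / prior_var + estimate / like_var)

prob_positive = 1.0 - stats.norm.cdf(0.0, loc=post_mean, scale=post_var**0.5)
print(f"Posterior mean impact: {post_mean:.3f} (SD {post_var**0.5:.3f})")
print(f"Posterior probability the impact is positive: {prob_positive:.2f}")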
Porter, Kristin E.; Reardon, Sean F.; Unlu, Fatih; Bloom, Howard S.; Cimpian, Joseph R. – Journal of Research on Educational Effectiveness, 2017
A valuable extension of the single-rating regression discontinuity design (RDD) is a multiple-rating RDD (MRRDD). To date, four main methods have been used to estimate average treatment effects at the multiple treatment frontiers of an MRRDD: the "surface" method, the "frontier" method, the "binding-score" method, and…
Descriptors: Regression (Statistics), Intervention, Quasiexperimental Design, Simulation
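As a rough illustration of how assignment works in a two-rating design (not one of the four estimators the paper evaluates), the sketch below constructs a binding score from two ratings with assumed cutoffs and makes a naive local comparison of means at the frontier.

# Illustrative sketch only: a two-rating RD assignment rule and a binding score.
# Cutoffs, data, and the crude local comparison are assumptions, not the paper's methods.
import numpy as np

rng = np.random.default_rng(1)
n = 5000
rating1 = rng.uniform(0, 100, n)
rating2 = rng.uniform(0, 100, n)
cut1, cut2 = 40.0, 50.0

# Treatment if the unit falls below the cutoff on either rating.
treated = (rating1 < cut1) | (rating2 < cut2)

# "Binding score": the centered rating that determines assignment
# (the minimum of the centered ratings when lower scores trigger treatment).
binding = np.minimum(rating1 - cut1, rating2 - cut2)

# Toy outcome with a 0.25 jump at the frontier plus noise.
outcome = 0.01 * binding + 0.25 * treated + rng.normal(0, 0.5, n)

# Naive comparison of means just below vs. just above the binding-score cutoff.
bandwidth = 5.0
below = outcome[(binding > -bandwidth) & (binding < 0)]
above = outcome[(binding >= 0) & (binding < bandwidth)]
print(f"Local mean difference at the frontier: {below.mean() - above.mean():.3f}")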
Deke, John; Wei, Thomas; Kautz, Tim – National Center for Education Evaluation and Regional Assistance, 2017
Evaluators of education interventions are increasingly designing studies to detect impacts much smaller than the 0.20 standard deviations that Cohen (1988) characterized as "small." While the need to detect smaller impacts is based on compelling arguments that such impacts are substantively meaningful, the drive to detect smaller impacts…
Descriptors: Intervention, Educational Research, Research Problems, Statistical Bias
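A back-of-the-envelope sketch of why smaller targets are costly (not the report's own calculations): for a two-arm, individually randomized design with conventional alpha and power and no covariate adjustment, the minimum detectable effect size shrinks only with the square root of the sample size.

# Rough sketch of the sample-size cost of targeting impacts well below 0.20 SD
# in a simple two-group comparison. Alpha, power, and the 50/50 split are assumptions.
from scipy import stats

def mdes(n_total, prop_treated=0.5, alpha=0.05, power=0.80):
    """Minimum detectable effect size (in SD units) for a two-group mean comparison."""
    multiplier = stats.norm.ppf(1 - alpha / 2) + stats.norm.ppf(power)
    return multiplier * (1.0 / (prop_treated * (1 - prop_treated) * n_total)) ** 0.5

for n in (400, 800, 1600, 3200, 6400):
    print(f"n = {n:>5}: MDES ~ {mdes(n):.3f} SD")
# Halving the target MDES (e.g., 0.20 -> 0.10 SD) roughly quadruples the required n.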
Porter, Kristin E.; Reardon, Sean F.; Unlu, Fatih; Bloom, Howard S.; Robinson-Cimpian, Joseph P. – MDRC, 2014
A valuable extension of the single-rating regression discontinuity design (RDD) is a multiple-rating RDD (MRRDD). To date, four main methods have been used to estimate average treatment effects at the multiple treatment frontiers of an MRRDD: the "surface" method, the "frontier" method, the "binding-score" method, and…
Descriptors: Regression (Statistics), Research Design, Quasiexperimental Design, Research Methodology
Vanhove, Jan – Studies in Second Language Learning and Teaching, 2015
I discuss three common practices that obfuscate or invalidate the statistical analysis of randomized controlled interventions in applied linguistics. These are (a) checking whether randomization produced groups that are balanced on a number of possibly relevant covariates, (b) using repeated measures ANOVA to analyze pretest-posttest designs, and…
Descriptors: Randomized Controlled Trials, Intervention, Applied Linguistics, Statistical Analysis
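As a hedged illustration of the pretest-posttest point (not the article's own analysis), the sketch below simulates a randomized pretest-posttest study and compares an ANCOVA-style model, with the pretest as a covariate, against a gain-score model that is equivalent to the group-by-time interaction in a repeated measures ANOVA. The data-generating values are assumptions.

# Illustrative sketch: ANCOVA vs. gain-score analysis for a randomized
# pretest-posttest design, using simulated data with an assumed 2-point effect.
import numpy as np
import pandas as pd
import statsmodels.formula.api as smf

rng = np.random.default_rng(2)
n = 200
group = rng.integers(0, 2, n)                            # 0 = control, 1 = treatment
pretest = rng.normal(50, 10, n)
posttest = pretest + 2.0 * group + rng.normal(0, 5, n)   # true gain of 2 points
df = pd.DataFrame({"group": group, "pretest": pretest, "posttest": posttest})

# ANCOVA: posttest regressed on group, adjusting for the pretest.
ancova = smf.ols("posttest ~ group + pretest", data=df).fit()
print("ANCOVA estimate:", ancova.params["group"], "SE:", ancova.bse["group"])

# Gain-score analysis (equivalent to the group-by-time interaction in RM-ANOVA).
df["gain"] = df["posttest"] - df["pretest"]
gain = smf.ols("gain ~ group", data=df).fit()
print("Gain-score estimate:", gain.params["group"], "SE:", gain.bse["group"])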
What Works Clearinghouse, 2014
This "What Works Clearinghouse Procedures and Standards Handbook (Version 3.0)" provides a detailed description of the standards and procedures of the What Works Clearinghouse (WWC). The remaining chapters of this Handbook are organized to take the reader through the basic steps that the WWC uses to develop a review protocol, identify…
Descriptors: Educational Research, Guides, Intervention, Classification
Wilde, Elizabeth Ty; Hollister, Robinson – Institute for Research on Poverty, 2002
In this study we test the performance of some nonexperimental estimators of impacts applied to an educational intervention--reduction in class size--where achievement test scores were the outcome. We compare the nonexperimental estimates of the impacts to "true impact" estimates provided by a random-assignment design used to assess the…
Descriptors: Computation, Outcome Measures, Achievement Tests, Scores
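To illustrate the benchmarking logic in the abstract (with made-up data, not the study's class-size data), the sketch below contrasts an experimental difference in means with a regression-adjusted estimate from a nonrandomly selected comparison group that differs on an unobserved confounder.

# Toy sketch of the benchmarking idea: compare an experimental impact estimate
# with a regression-adjusted nonexperimental estimate. All values are assumptions.
import numpy as np
import pandas as pd
import statsmodels.formula.api as smf

rng = np.random.default_rng(3)
n = 2000
ability = rng.normal(0, 1, n)                             # unobserved confounder
prior_score = 50 + 8 * ability + rng.normal(0, 5, n)      # observed covariate

# Experimental benchmark: treatment assigned at random; true impact = 3 points.
treat_rand = rng.integers(0, 2, n)
score_rand = 60 + 6 * ability + 3 * treat_rand + rng.normal(0, 5, n)
benchmark = score_rand[treat_rand == 1].mean() - score_rand[treat_rand == 0].mean()

# Nonexperimental comparison: "treated" units are disproportionately higher ability.
treat_obs = (ability + rng.normal(0, 1, n) > 0).astype(int)
score_obs = 60 + 6 * ability + 3 * treat_obs + rng.normal(0, 5, n)
df = pd.DataFrame({"y": score_obs, "t": treat_obs, "prior": prior_score})
adjusted = smf.ols("y ~ t + prior", data=df).fit().params["t"]

print(f"Experimental benchmark: {benchmark:.2f}")
print(f"Regression-adjusted nonexperimental estimate: {adjusted:.2f}")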