Showing all 7 results
Peer reviewed
Grund, Simon; Lüdtke, Oliver; Robitzsch, Alexander – Journal of Educational and Behavioral Statistics, 2023
Multiple imputation (MI) is a popular method for handling missing data. In education research, it can be challenging to use MI because the data often have a clustered structure that needs to be accommodated during MI. Although much research has considered applications of MI in hierarchical data, little is known about its use in cross-classified…
Descriptors: Educational Research, Data Analysis, Error of Measurement, Computation
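The clustered-imputation idea in the abstract above can be illustrated with a minimal sketch. The data, the per-cluster normal draws, and the pooling of point estimates are all illustrative assumptions for demonstration, not the authors' actual procedure:

```python
import numpy as np

rng = np.random.default_rng(0)

# Toy clustered data: three schools with some missing test scores
# (hypothetical values, for illustration only).
scores = {
    "school_A": np.array([78.0, 82.0, np.nan, 75.0]),
    "school_B": np.array([61.0, np.nan, 65.0, 59.0]),
    "school_C": np.array([90.0, 88.0, 93.0, np.nan]),
}

def impute_once(data, rng):
    """One imputation: draw each missing value from a normal
    distribution fit to that cluster's observed scores, so the
    clustered structure is respected rather than ignored."""
    out = {}
    for school, y in data.items():
        obs = y[~np.isnan(y)]
        mu, sd = obs.mean(), obs.std(ddof=1)
        filled = y.copy()
        filled[np.isnan(y)] = rng.normal(mu, sd, size=int(np.isnan(y).sum()))
        out[school] = filled
    return out

# Multiple imputation: build m completed datasets, then pool the
# point estimates (the averaging step of Rubin's rules).
m = 20
grand_means = [
    np.mean(np.concatenate(list(impute_once(scores, rng).values())))
    for _ in range(m)
]
pooled = float(np.mean(grand_means))
```

Imputing within clusters keeps each school's level and spread; a single pooled normal model would instead shrink imputed values toward the grand mean and understate between-cluster variance.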
Cho, Sun-Joo; Bottge, Brian A. – Grantee Submission, 2015
In a pretest-posttest cluster-randomized trial, one of the methods commonly used to detect an intervention effect involves controlling for pre-test scores and other related covariates while estimating an intervention effect at post-test. In many applications in education, the total post-test and pre-test scores that ignore measurement error in the…
Descriptors: Item Response Theory, Hierarchical Linear Modeling, Pretests Posttests, Scores
Cho, Sun-Joo; Preacher, Kristopher J.; Bottge, Brian A. – Grantee Submission, 2015
Multilevel modeling (MLM) is frequently used to detect group differences, such as an intervention effect in a pre-test--post-test cluster-randomized design. Group differences on the post-test scores are detected by controlling for pre-test scores as a proxy variable for unobserved factors that predict future attributes. The pre-test and post-test…
Descriptors: Structural Equation Models, Hierarchical Linear Modeling, Intervention, Program Effectiveness
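The design both abstracts above describe, estimating a treatment effect at post-test while controlling for pre-test, reduces in its simplest single-level form to an ANCOVA-style regression. The simulated data and coefficients below are assumptions chosen only to show the mechanics, not either paper's model:

```python
import numpy as np

rng = np.random.default_rng(1)
n = 200

treat = rng.integers(0, 2, n)                  # random assignment (0/1)
pre = rng.normal(50, 10, n)                    # pre-test score
# Post-test generated with a true intervention effect of 5 points.
post = 5.0 * treat + 0.8 * pre + rng.normal(0, 5, n)

# ANCOVA: regress post-test on intercept, treatment, and pre-test.
X = np.column_stack([np.ones(n), treat, pre])
beta, *_ = np.linalg.lstsq(X, post, rcond=None)
effect = float(beta[1])   # estimated effect, adjusted for pre-test
```

Controlling for the pre-test absorbs baseline variation and tightens the estimate; the papers above go further by adding the multilevel (clustered) structure and by modeling measurement error in the observed scores, which this sketch deliberately omits.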
Peer reviewed
What Works Clearinghouse, 2014
This "What Works Clearinghouse Procedures and Standards Handbook (Version 3.0)" provides a detailed description of the standards and procedures of the What Works Clearinghouse (WWC). The remaining chapters of this Handbook are organized to take the reader through the basic steps that the WWC uses to develop a review protocol, identify…
Descriptors: Educational Research, Guides, Intervention, Classification
Peer reviewed
Raudenbush, Stephen W.; Reardon, Sean F.; Nomi, Takako – Journal of Research on Educational Effectiveness, 2012
Multisite trials can clarify the average impact of a new program and the heterogeneity of impacts across sites. Unfortunately, in many applications, compliance with treatment assignment is imperfect. For these applications, we propose an instrumental variable (IV) model with person-specific and site-specific random coefficients. Site-specific IV…
Descriptors: Program Evaluation, Statistical Analysis, Hierarchical Linear Modeling, Computation
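The core IV logic in the abstract above, using random assignment as an instrument for actual treatment receipt under imperfect compliance, can be sketched as a plain two-stage least squares computation. The data-generating process and compliance rates below are assumptions for illustration; the paper's model additionally includes person- and site-specific random coefficients:

```python
import numpy as np

rng = np.random.default_rng(2)
n = 500

z = rng.integers(0, 2, n)             # random assignment (the instrument)
u = rng.normal(0, 1, n)               # unobserved confounder
# Imperfect compliance: assignment raises take-up, but take-up is
# also confounded with u, so naive OLS on d would be biased.
d = (rng.random(n) < 0.2 + 0.6 * z + 0.3 * u).astype(float)
y = 2.0 * d + u + rng.normal(0, 1, n) # true treatment effect = 2

# Two-stage least squares by hand:
# Stage 1: predict treatment receipt d from the instrument z.
Z = np.column_stack([np.ones(n), z])
d_hat = Z @ np.linalg.lstsq(Z, d, rcond=None)[0]
# Stage 2: regress the outcome on the fitted (exogenous) d_hat.
X2 = np.column_stack([np.ones(n), d_hat])
effect_iv = float(np.linalg.lstsq(X2, y, rcond=None)[0][1])
```

Because z is randomized, it is independent of the confounder u, so the second-stage coefficient recovers the treatment effect among compliers even though d itself is endogenous.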
Peer reviewed
Hill, Heather D.; Morris, Pamela A.; Castells, Nina; Walker, Jessica Thornton – Journal of Policy Analysis and Management, 2011
This study uses data from an experimental employment program and instrumental variables (IV) estimation to examine the effects of maternal job loss on child classroom behavior. Random assignment to the treatment at one of three program sites is an exogenous predictor of employment patterns. Cross-site variation in treatment-control differences is…
Descriptors: Student Behavior, Employment Level, Social Behavior, Employment Programs
Peer reviewed
Olsen, Robert B.; Unlu, Fatih; Price, Cristofer; Jaciw, Andrew P. – National Center for Education Evaluation and Regional Assistance, 2011
This report examines the differences in impact estimates and standard errors that arise when these are derived using state achievement tests only (as pre-tests and post-tests), study-administered tests only, or some combination of state- and study-administered tests. State tests may yield different evaluation results relative to a test that is…
Descriptors: Achievement Tests, Standardized Tests, State Standards, Reading Achievement