Showing all 5 results
Peer reviewed
Direct link
Westine, Carl D. – American Journal of Evaluation, 2016
Little is known empirically about intraclass correlations (ICCs) for multisite cluster randomized trial (MSCRT) designs, particularly in science education. In this study, ICCs suitable for science achievement studies using a three-level (students in schools in districts) MSCRT design that blocks on district are estimated and examined. Estimates of…
Descriptors: Efficiency, Evaluation Methods, Science Achievement, Correlation
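As a rough illustration of the quantity this abstract refers to (not the article's own code or values), an unconditional ICC at each clustering level is the share of total outcome variance attributable to that level. The variance components below are hypothetical.

```python
# Illustrative only: three-level (students in schools in districts) ICCs
# computed from hypothetical variance components, not values from the article.

sigma2_district = 0.05   # between-district variance (hypothetical)
sigma2_school   = 0.15   # between-school, within-district variance (hypothetical)
sigma2_student  = 0.80   # within-school (student-level) variance (hypothetical)

total = sigma2_district + sigma2_school + sigma2_student

# Proportion of total outcome variance at each clustering level
icc_district = sigma2_district / total
icc_school   = sigma2_school / total

print(f"District-level ICC: {icc_district:.3f}")
print(f"School-level ICC:   {icc_school:.3f}")
```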
Peer reviewed
PDF on ERIC (full text available)
Westine, Carl D. – Society for Research on Educational Effectiveness, 2015
A cluster-randomized trial (CRT) relies on random assignment of intact clusters, such as classrooms or schools, to treatment conditions (Raudenbush & Bryk, 2002). One specific type of CRT, a multi-site CRT (MSCRT), is commonly employed in educational research and evaluation studies (Spybrook & Raudenbush, 2009; Spybrook, 2014; Bloom,…
Descriptors: Correlation, Randomized Controlled Trials, Science Achievement, Cluster Grouping
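A minimal sketch of the randomization described in this abstract, assuming a design that blocks on district: intact schools are assigned to conditions within each district (site). The district and school labels below are hypothetical, not from the paper.

```python
# Illustrative MSCRT randomization that blocks on district: intact schools
# (clusters) are randomly assigned to treatment or control within each district.
# All labels are hypothetical.
import random

random.seed(42)

districts = {
    "District A": ["School 1", "School 2", "School 3", "School 4"],
    "District B": ["School 5", "School 6", "School 7", "School 8"],
}

assignment = {}
for district, schools in districts.items():
    shuffled = schools[:]
    random.shuffle(shuffled)
    half = len(shuffled) // 2
    for school in shuffled[:half]:
        assignment[school] = ("treatment", district)
    for school in shuffled[half:]:
        assignment[school] = ("control", district)

for school, (condition, district) in sorted(assignment.items()):
    print(f"{district} | {school}: {condition}")
```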
Peer reviewed
Direct link
Murphy, Daniel L.; Beretvas, S. Natasha – Applied Measurement in Education, 2015
This study examines the use of cross-classified random effects models (CCrem) and cross-classified multiple membership random effects models (CCMMrem) to model rater bias and estimate teacher effectiveness. Effect estimates are compared using classical test theory (CTT) versus item response theory (IRT) scaling methods and three models (i.e., conventional multilevel…
Descriptors: Teacher Effectiveness, Comparative Analysis, Hierarchical Linear Modeling, Test Theory
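To make the cross-classified structure concrete, here is a small simulation, not the authors' model or data, in which each student's observed score is the sum of a teacher effect and a rater effect, with raters scoring students across many teachers (so students are crossed with raters rather than nested). All parameter values are hypothetical.

```python
# Illustrative simulation of a cross-classified structure: scores reflect both
# a teacher effect and a rater effect (rater bias), and raters are crossed with
# teachers rather than nested within them. Values are hypothetical.
import random

random.seed(0)

n_teachers, n_raters, n_students_per_teacher = 5, 3, 20
teacher_effects = {t: random.gauss(0, 0.5) for t in range(n_teachers)}
rater_effects = {r: random.gauss(0, 0.3) for r in range(n_raters)}  # rater bias

records = []
for t in range(n_teachers):
    for s in range(n_students_per_teacher):
        r = random.randrange(n_raters)          # rater crossed with teacher
        score = 50 + teacher_effects[t] + rater_effects[r] + random.gauss(0, 1)
        records.append((t, r, score))

# Naive teacher means absorb rater bias; a CCrem would model teacher and rater
# random effects jointly instead.
for t in range(n_teachers):
    scores = [y for (tt, _, y) in records if tt == t]
    print(f"Teacher {t}: naive mean score = {sum(scores) / len(scores):.2f}")
```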
Peer reviewed
Direct link
Feldman, Betsy J.; Rabe-Hesketh, Sophia – Journal of Educational and Behavioral Statistics, 2012
In longitudinal education studies, assuming that dropout and missing data occur completely at random is often unrealistic. When the probability of dropout depends on covariates and observed responses (called "missing at random" [MAR]), or on values of responses that are missing (called "informative" or "not missing at random" [NMAR]),…
Descriptors: Dropouts, Academic Achievement, Longitudinal Studies, Computation
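A small simulation can make the MAR/NMAR distinction quoted above concrete: under MAR, dropout depends on already-observed responses; under NMAR, it depends on the value that goes unobserved. The two-wave setup and dropout rules below are hypothetical illustrations of the definitions, not the study's data or models.

```python
# Illustrative contrast between MAR and NMAR dropout in a two-wave design.
# Values and dropout rules are hypothetical.
import random

random.seed(1)

n = 10_000
wave1 = [random.gauss(0, 1) for _ in range(n)]
wave2 = [0.7 * y1 + random.gauss(0, 0.7) for y1 in wave1]

# MAR: probability of missing wave 2 depends on the *observed* wave-1 score.
mar_missing = [random.random() < (0.6 if y1 < 0 else 0.2) for y1 in wave1]

# NMAR: probability of missing wave 2 depends on the *unobserved* wave-2 score.
nmar_missing = [random.random() < (0.6 if y2 < 0 else 0.2) for y2 in wave2]

def observed_mean(values, missing):
    kept = [v for v, m in zip(values, missing) if not m]
    return sum(kept) / len(kept)

# Both complete-case means are biased upward, but only the MAR bias can be
# removed by conditioning on the observed wave-1 scores.
print(f"True wave-2 mean:         {sum(wave2) / n:+.3f}")
print(f"Observed mean under MAR:  {observed_mean(wave2, mar_missing):+.3f}")
print(f"Observed mean under NMAR: {observed_mean(wave2, nmar_missing):+.3f}")
```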
Peer reviewed
PDF on ERIC (full text available)
Johnson, Matthew S.; Jenkins, Frank – ETS Research Report Series, 2005
Large-scale educational assessments such as the National Assessment of Educational Progress (NAEP) sample examinees to whom an exam will be administered. In most situations the sampling design is not a simple random sample and must be accounted for in the estimating model. After reviewing the current operational estimation procedure for NAEP, this…
Descriptors: Bayesian Statistics, Hierarchical Linear Modeling, National Competency Tests, Sampling
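As a hedged illustration of why a non-simple-random sample must be accounted for (not NAEP's procedure or figures): with clustered sampling, the standard design-effect approximation deff = 1 + (m − 1) × ICC inflates the variance of an estimated mean relative to simple random sampling, shrinking the effective sample size. The numbers below are made up.

```python
# Illustrative design-effect calculation for a clustered (non-SRS) sample.
# Values are hypothetical; they are not NAEP estimates.

n_total = 5000        # total sampled examinees
cluster_size = 25     # average examinees per sampled school (m)
icc = 0.20            # intraclass correlation for the outcome

# Kish design effect for cluster sampling with equal cluster sizes
deff = 1 + (cluster_size - 1) * icc

# Effective sample size: the SRS size with equivalent precision
n_effective = n_total / deff

print(f"Design effect: {deff:.2f}")
print(f"Nominal n = {n_total}, effective n ≈ {n_effective:.0f}")
```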