Showing all 10 results
Peer reviewed
Bloom, Howard S.; Porter, Kristin E.; Weiss, Michael J.; Raudenbush, Stephen – Society for Research on Educational Effectiveness, 2013
To date, evaluation research and policy analysis have focused mainly on average program impacts and paid little systematic attention to their variation. Recently, the growing number of multi-site randomized trials being planned and conducted makes it increasingly feasible to study "cross-site" variation in impacts. Important…
Descriptors: Research Methodology, Policy, Evaluation Research, Randomized Controlled Trials
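
A minimal sketch of the quantity such studies target: estimate each site's impact, then separate true cross-site variance from sampling noise with a method-of-moments adjustment. The simulated trial below (site counts, effect sizes) is hypothetical, not taken from the paper.

import numpy as np

rng = np.random.default_rng(0)

# Hypothetical multi-site trial: 20 sites whose true impacts vary.
n_sites, n_per_site = 20, 200
true_tau = 0.15                          # cross-site SD of impacts
site_impacts = rng.normal(0.20, true_tau, n_sites)

est, se2 = [], []
for b in site_impacts:
    z = rng.integers(0, 2, n_per_site)            # random assignment within site
    y = b * z + rng.normal(0.0, 1.0, n_per_site)  # outcome, unit variance
    est.append(y[z == 1].mean() - y[z == 0].mean())
    se2.append(y[z == 1].var(ddof=1) / (z == 1).sum()
               + y[z == 0].var(ddof=1) / (z == 0).sum())

est, se2 = np.array(est), np.array(se2)
# Method of moments: var(estimates) = true cross-site variance + noise.
tau2_hat = max(0.0, est.var(ddof=1) - se2.mean())
print(f"estimated cross-site impact SD {tau2_hat ** 0.5:.3f} (true {true_tau})")
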
Peer reviewed
Moses, Tim; Zhang, Wenmin – Journal of Educational and Behavioral Statistics, 2011
The purpose of this article was to extend the use of standard errors for equated score differences (SEEDs) to traditional equating functions. The SEEDs are described in terms of their original proposal for kernel equating functions and extended so that SEEDs for traditional linear and traditional equipercentile equating functions can be computed…
Descriptors: Equated Scores, Error Patterns, Evaluation Research, Statistical Analysis
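
The SEEDs in the article are derived analytically; as a rough illustration of the quantity they estimate, the standard error of the difference between a traditional linear and a traditional equipercentile equating function at a given score can be approximated by bootstrapping. The data and score point below are hypothetical, and the bootstrap stands in for, rather than reproduces, the authors' delta-method approach.

import numpy as np

rng = np.random.default_rng(1)
x = rng.binomial(40, 0.60, 2000)   # hypothetical scores on form X
y = rng.binomial(40, 0.55, 2000)   # hypothetical scores on form Y

def linear_eq(x0, xs, ys):         # traditional linear equating
    return ys.mean() + ys.std(ddof=1) / xs.std(ddof=1) * (x0 - xs.mean())

def equipercentile_eq(x0, xs, ys): # traditional equipercentile equating
    return np.quantile(ys, (xs <= x0).mean())

x0, diffs = 25, []
for _ in range(500):               # bootstrap the equated-score difference
    bx, by = rng.choice(x, x.size), rng.choice(y, y.size)
    diffs.append(linear_eq(x0, bx, by) - equipercentile_eq(x0, bx, by))
print(f"bootstrap SE of the difference at x={x0}: {np.std(diffs, ddof=1):.3f}")
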
Peer reviewed
Liu, Yan; Zumbo, Bruno D. – Educational and Psychological Measurement, 2012
There is a lack of research on the effects of outliers on the decisions about the number of factors to retain in an exploratory factor analysis, especially for outliers arising from unintended and unknowingly included subpopulations. The purpose of the present research was to investigate how outliers from an unintended and unknowingly included…
Descriptors: Factor Analysis, Factor Structure, Evaluation Research, Evaluation Methods
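
A sketch of the kind of simulation the abstract describes: contaminate factor-model data with a small unintended subpopulation and check whether an eigenvalue-based retention rule changes its answer. The contamination scheme, sample sizes, and the eigenvalue-greater-than-1 rule are illustrative choices, not the study's design.

import numpy as np

rng = np.random.default_rng(2)

def n_factors_kaiser(X):
    # Eigenvalue-greater-than-1 rule on the correlation matrix.
    return int((np.linalg.eigvalsh(np.corrcoef(X, rowvar=False)) > 1.0).sum())

# Clean data: one common factor, eight indicators, loadings 0.7.
n, p = 500, 8
f = rng.normal(size=(n, 1))
clean = f @ np.full((1, p), 0.7) + rng.normal(0, np.sqrt(0.51), (n, p))

# Contaminated data: 5% replaced by an unintended subpopulation whose
# scores are shifted on a random subset of variables.
n_out = 25
shift = rng.normal(4, 1, (n_out, p)) * rng.integers(0, 2, (n_out, p))
contaminated = np.vstack([clean[n_out:], clean[:n_out] + shift])

print("factors retained, clean:", n_factors_kaiser(clean))
print("factors retained, contaminated:", n_factors_kaiser(contaminated))
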
Foley, Brett Patrick – ProQuest LLC, 2010
The 3PL model is a flexible and widely used tool in assessment. However, it is limited by its need for large sample sizes. This study introduces a new sample-size augmentation technique called Duplicate, Erase, and Replace (DupER) Augmentation and evaluates its efficacy through a simulation study. Data are augmented using…
Descriptors: Test Length, Sample Size, Simulation, Item Response Theory
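
The DupER algorithm itself is specified in the dissertation; the sketch below only mimics the three named steps (duplicate response vectors, erase a random fraction of entries, replace the erased entries) at a generic level. The replace rule used here is a hypothetical stand-in.

import numpy as np

rng = np.random.default_rng(3)

def duper_augment(responses, erase_frac=0.2, copies=1):
    # Generic duplicate/erase/replace sketch: the replace step here
    # (Bernoulli draws at observed item p-values) is a hypothetical
    # stand-in for the dissertation's actual imputation rule.
    p_vals = responses.mean(axis=0)          # proportion correct per item
    out = [responses]
    for _ in range(copies):
        dup = responses.copy()                        # duplicate
        erase = rng.random(dup.shape) < erase_frac    # erase
        fill = rng.random(dup.shape) < p_vals         # replace
        dup[erase] = fill[erase]
        out.append(dup)
    return np.vstack(out)

small = rng.integers(0, 2, (150, 30))   # small 0/1 response matrix
big = duper_augment(small, copies=3)
print(small.shape, "->", big.shape)     # (150, 30) -> (600, 30)
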
Peer reviewed
Rhemtulla, Mijke; Brosseau-Liard, Patricia E.; Savalei, Victoria – Psychological Methods, 2012
A simulation study compared the performance of robust normal theory maximum likelihood (ML) and robust categorical least squares (cat-LS) methodology for estimating confirmatory factor analysis models with ordinal variables. Data were generated from 2 models with 2-7 categories, 4 sample sizes, 2 latent distributions, and 5 patterns of category…
Descriptors: Factor Analysis, Computation, Simulation, Sample Size
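
A sketch of the data-generation step common to such simulations: ordinal indicators produced by discretizing continuous latent responses at thresholds. The one-factor model, loadings, and equal-frequency thresholds below are illustrative choices, one of the patterns such studies vary.

import numpy as np

rng = np.random.default_rng(4)

def ordinalize(latent, n_categories):
    # Discretize continuous latent responses at empirical quantile
    # thresholds (roughly equal category frequencies; one of several
    # threshold patterns varied in studies like this one).
    cuts = np.quantile(latent, np.linspace(0, 1, n_categories + 1)[1:-1])
    return np.digitize(latent, cuts)

# Hypothetical one-factor model: six indicators, loadings 0.7.
n = 600
eta = rng.normal(size=(n, 1))
y_cont = eta @ np.full((1, 6), 0.7) + rng.normal(0, np.sqrt(0.51), (n, 6))

y_ord = np.column_stack([ordinalize(y_cont[:, j], 5) for j in range(6)])
print(np.unique(y_ord), y_ord.shape)   # categories 0..4, shape (600, 6)
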
Peer reviewed
Stuive, Ilse; Kiers, Henk A. L.; Timmerman, Marieke E. – Educational and Psychological Measurement, 2009
A common question in test evaluation is whether an a priori assignment of items to subtests is supported by empirical data. If the analysis results indicate that the assignment of items to subtests under study is not supported by the data, the assignment is often adjusted. In this study, the authors compare two methods on the quality of their suggestions to…
Descriptors: Simulation, Item Response Theory, Test Items, Factor Analysis
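
The article compares two specific methods; the sketch below is a much simpler stand-in that flags items whose corrected correlation with their own subtest total is beaten by a correlation with another subtest. All data and the assignment are hypothetical.

import numpy as np

rng = np.random.default_rng(5)

def assignment_check(X, assign):
    # For each item, compare its correlation with its own subtest total
    # (item removed, to correct for overlap) against its highest
    # correlation with any other subtest total.
    subtests = sorted(set(assign))
    totals = {s: X[:, [j for j, a in enumerate(assign) if a == s]].sum(axis=1)
              for s in subtests}
    flags = []
    for j, s in enumerate(assign):
        own = np.corrcoef(X[:, j], totals[s] - X[:, j])[0, 1]
        other = max(np.corrcoef(X[:, j], totals[t])[0, 1]
                    for t in subtests if t != s)
        flags.append(own < other)   # True: assignment of item j looks doubtful
    return flags

# Hypothetical data: items 0-3 measure one factor, items 4-7 another.
f = rng.normal(size=(300, 2))
load = np.zeros((2, 8)); load[0, :4] = 0.7; load[1, 4:] = 0.7
X = f @ load + rng.normal(0, 0.7, (300, 8))
print(assignment_check(X, [0, 0, 0, 0, 1, 1, 1, 1]))   # expect all False
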
Peer reviewed
Cui, Ying; Leighton, Jacqueline P. – Journal of Educational Measurement, 2009
In this article, we introduce a person-fit statistic called the hierarchy consistency index (HCI) to help detect misfitting item response vectors for tests developed and analyzed based on a cognitive model. The HCI ranges from -1.0 to 1.0, with values close to -1.0 indicating that students respond unexpectedly or differently from the responses…
Descriptors: Test Length, Simulation, Correlation, Research Methodology
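
A simplified reading of a hierarchy-based consistency index, assuming the cognitive model is encoded as prerequisite item pairs; this is a sketch in the spirit of the HCI, not the published formula.

import numpy as np

def hci_like(x, prereq):
    # Hierarchy-consistency-style person-fit index (simplified sketch).
    # x is a 0/1 response vector; prereq lists (easy, hard) item pairs
    # where answering `hard` correctly presupposes what `easy` measures.
    comparisons = [(e, h) for e, h in prereq if x[h] == 1]
    if not comparisons:
        return 1.0
    misfits = sum(1 - x[e] for e, h in comparisons)  # prerequisite missed
    return 1.0 - 2.0 * misfits / len(comparisons)    # ranges over [-1, 1]

x = np.array([1, 0, 1, 1, 0, 1])
prereq = [(0, 2), (1, 3), (4, 5)]   # hypothetical cognitive-model hierarchy
print(hci_like(x, prereq))          # 1 - 2*2/3 = -0.33: 2 of 3 comparisons misfit
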
Peer reviewed
Cheung, Shu Fai; Chan, Darius K.-S. – Educational and Psychological Measurement, 2008
In meta-analysis, it is common to have dependent effect sizes, such as several effect sizes from the same sample but measured at different times. Cheung and Chan proposed the adjusted-individual and adjusted-weighted procedures to estimate the degree of dependence and incorporate this estimate in the meta-analysis. The present study extends the…
Descriptors: Effect Size, Academic Achievement, Meta Analysis, Correlation
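
One standard way to handle such dependence, sketched below, is to combine a sample's dependent effect sizes into a composite whose variance reflects their intercorrelation before pooling. This illustrates the general idea, not the authors' adjusted-individual and adjusted-weighted procedures, and all numbers are hypothetical.

import numpy as np

def composite_effect(es, var, r):
    # Mean of m dependent effect sizes from one sample; the variance of
    # the mean accounts for a common intercorrelation r among them.
    es, var = np.asarray(es, float), np.asarray(var, float)
    m, sd = es.size, np.sqrt(var)
    cov_sum = var.sum() + r * sum(sd[i] * sd[j]
                                  for i in range(m) for j in range(m) if i != j)
    return es.mean(), cov_sum / m ** 2

# Hypothetical: two time points from one sample, then fixed-effect
# pooling with an independent study (es = 0.25, var = 0.03).
es_bar, v_bar = composite_effect([0.40, 0.30], [0.02, 0.02], r=0.6)
es_all, v_all = np.array([es_bar, 0.25]), np.array([v_bar, 0.03])
w = 1.0 / v_all
print(f"composite {es_bar:.2f} (var {v_bar:.4f}); pooled {(w * es_all).sum() / w.sum():.3f}")
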
Peer reviewed
Wang, Wen-Chung; Wilson, Mark; Shih, Ching-Lin – Journal of Educational Measurement, 2006
This study presents the random-effects rating scale model (RE-RSM), which takes randomness in the thresholds over persons into account by treating them as random effects and adding a random variable for each threshold in the rating scale model (RSM) (Andrich, 1978). The RE-RSM turns out to be a special case of the multidimensional random…
Descriptors: Item Analysis, Rating Scales, Item Response Theory, Monte Carlo Methods
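
The baseline RSM category probabilities (Andrich, 1978) can be computed directly; the random-effects extension then lets the thresholds vary over persons. A sketch with hypothetical parameter values:

import numpy as np

def rsm_probs(theta, beta, tau):
    # Andrich (1978) rating scale model: probabilities of categories
    # 0..m for one person-item pair; tau holds the m thresholds.
    steps = theta - beta - np.asarray(tau, float)
    logits = np.concatenate([[0.0], np.cumsum(steps)])
    p = np.exp(logits - logits.max())    # numerically stable softmax
    return p / p.sum()

theta, beta, tau = 0.5, 0.0, [-1.0, 0.0, 1.0]   # hypothetical values
print(rsm_probs(theta, beta, tau).round(3))

# RE-RSM idea (sketch): redraw the thresholds per person, e.g.
#   tau_n = tau + rng.normal(0.0, sigma, len(tau))
# so that each person carries random threshold disturbances.
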
Peer reviewed
Etherton, Joseph L.; Bianchini, Kevin J.; Ciota, Megan A.; Greve, Kevin W. – Assessment, 2005
Reliable Digit Span (RDS) is an indicator used to assess the validity of cognitive test performance. Scores of 7 or lower suggest poor effort or negative response bias. The possibility that RDS scores are also affected by pain has not been addressed, potentially threatening RDS specificity. The current study used cold pressor-induced pain to…
Descriptors: Response Style (Tests), Simulation, Intelligence Tests, Pain
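
For context, a sketch of how RDS is conventionally scored (sum of the longest forward and longest backward spans with both trials correct, per Greiffenstein et al., 1994; the abstract does not spell this out) and of the cutoff it mentions. All trial data below are hypothetical.

def reliable_digit_span(forward, backward):
    # RDS (sketch): longest forward span with both trials correct plus
    # the same for backward (standard scoring per Greiffenstein et al.,
    # 1994; not detailed in the abstract above).
    def longest(trials):
        passed = [n for n, (t1, t2) in trials.items() if t1 and t2]
        return max(passed, default=0)
    return longest(forward) + longest(backward)

# Hypothetical trial records: {span length: (trial 1 correct, trial 2 correct)}
fwd = {3: (True, True), 4: (True, True), 5: (True, False)}
bwd = {3: (True, True), 4: (False, False)}
rds = reliable_digit_span(fwd, bwd)
print(rds, "flagged" if rds <= 7 else "not flagged")   # 7 flagged
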