Showing all 3 results
Peer reviewed
O'Neill, Thomas R.; Gregg, Justin L.; Peabody, Michael R. – Applied Measurement in Education, 2020
This study addresses equating issues with varying sample sizes using the Rasch model by examining how sample size affects the stability of item calibrations and person ability estimates. A resampling design was used to create 9 sample size conditions (200, 100, 50, 45, 40, 35, 30, 25, and 20), each replicated 10 times. Items were recalibrated…
Descriptors: Sample Size, Equated Scores, Item Response Theory, Raw Scores
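The resampling design described in the O'Neill, Gregg, and Peabody abstract (9 sample size conditions, each replicated 10 times, with items recalibrated per replicate) can be sketched as follows. This is a hypothetical illustration only: the simulated examinee pool and the crude logit-of-proportion-incorrect calibration are stand-ins, not the authors' actual Rasch estimation procedure.

```python
import math
import random
import statistics

random.seed(0)

N_ITEMS = 20
POOL_SIZE = 1000  # simulated examinee pool (assumption, not from the study)

# Simulate dichotomous responses for a pool of examinees on items of
# increasing difficulty (proportion-correct from 0.7 down to 0.3).
true_p = [0.7 - 0.4 * i / (N_ITEMS - 1) for i in range(N_ITEMS)]
pool = [[1 if random.random() < p else 0 for p in true_p]
        for _ in range(POOL_SIZE)]

def calibrate(responses):
    """Crude Rasch-style item calibration: logit of proportion incorrect.
    (A stand-in for the study's recalibration step.)"""
    difficulties = []
    for i in range(N_ITEMS):
        p = sum(r[i] for r in responses) / len(responses)
        p = min(max(p, 0.01), 0.99)  # guard against 0/1 proportions
        difficulties.append(math.log((1 - p) / p))
    return difficulties

SAMPLE_SIZES = [200, 100, 50, 45, 40, 35, 30, 25, 20]
REPLICATES = 10

# For each sample-size condition, recalibrate items on 10 resampled
# groups and summarize stability as the mean across items of each
# item's calibration SD over the replicates.
stability = {}
for n in SAMPLE_SIZES:
    calibrations = [calibrate(random.sample(pool, n))
                    for _ in range(REPLICATES)]
    item_sds = [statistics.stdev(c[i] for c in calibrations)
                for i in range(N_ITEMS)]
    stability[n] = statistics.mean(item_sds)

for n in SAMPLE_SIZES:
    print(f"n={n:3d}  mean calibration SD = {stability[n]:.3f}")
```

Smaller samples should yield larger calibration SDs (less stable item difficulties), which is the pattern of interest in the study.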
Peer reviewed
Feldt, Leonard S.; Qualls, Audrey L. – Applied Measurement in Education, 1998
Two relatively simple methods for estimating the conditional standard error of measurement (SEM) for nonlinearly derived score scales are proposed. Applications indicate that these two procedures produce fairly consistent estimates, which tend to peak near the high end of the scale and reach a minimum in the middle of the raw score scale. (SLD)
Descriptors: Error of Measurement, Estimation (Mathematics), Raw Scores, Reliability
Peer reviewed
Han, Tianqi; And Others – Applied Measurement in Education, 1997
Stability among equating procedures was studied by comparing item response theory (IRT) true-score equating with IRT observed-score equating, IRT true-score equating with equipercentile equating, and IRT observed-score equating with equipercentile equating. Across these comparisons, IRT true-score equating most often produced the more stable conversions. (SLD)
Descriptors: Comparative Analysis, Equated Scores, Item Response Theory, Raw Scores