Showing all 6 results
Peer reviewed
Moses, Tim; Kim, YoungKoung – Journal of Educational Measurement, 2017
The focus of this article is on scale score transformations that can be used to stabilize conditional standard errors of measurement (CSEMs). Three transformations for stabilizing the estimated CSEMs are reviewed, including the traditional arcsine transformation, a recently developed general variance stabilization transformation, and a new method…
Descriptors: Error of Measurement, Scores, Comparative Analysis, Item Response Theory
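Note: for readers unfamiliar with the traditional arcsine transformation named in the abstract, a standard textbook form under a binomial error model with n items (not necessarily the exact variant the article analyzes) is
\[
g(x) \;=\; \sin^{-1}\!\sqrt{\frac{x}{n}},
\qquad
\operatorname{Var}\!\bigl[g(X)\bigr] \;\approx\; \frac{1}{4n},
\]
so the CSEM of the transformed score is approximately constant across the raw-score range, which is the stabilization property the article examines.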
Peer reviewed
PDF full text available on ERIC
Kim, Sooyeon; Moses, Tim; Yoo, Hanwook Henry – ETS Research Report Series, 2015
The purpose of this inquiry was to investigate the effectiveness of item response theory (IRT) proficiency estimators in terms of estimation bias and error under multistage testing (MST). We chose a 2-stage MST design in which 1 adaptation to the examinees' ability levels takes place. It includes 4 modules (1 at Stage 1, 3 at Stage 2) and 3 paths…
Descriptors: Item Response Theory, Computation, Statistical Bias, Error of Measurement
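Note: a hedged simulation sketch of what "estimation bias under MST" means in practice; the item parameters, module sizes, and number-correct routing rule below are invented for illustration, not the report's design.

```python
# Sketch: bias of EAP ability estimates under a hypothetical 2-stage MST.
import numpy as np

rng = np.random.default_rng(0)

def p_2pl(theta, a, b):
    """2PL probability of a correct response."""
    return 1.0 / (1.0 + np.exp(-a * (theta - b)))

def eap(responses, a, b, nodes, weights):
    """EAP estimate of theta via quadrature with a standard normal prior."""
    like = np.ones_like(nodes)
    for u, ai, bi in zip(responses, a, b):
        p = p_2pl(nodes, ai, bi)
        like *= p**u * (1.0 - p)**(1 - u)
    post = like * weights
    return float(np.sum(nodes * post) / np.sum(post))

nodes = np.linspace(-4, 4, 81)
weights = np.exp(-0.5 * nodes**2)          # unnormalized N(0, 1) prior

def make_module(mean_b, n_items=10):
    """Hypothetical module: discriminations and difficulties."""
    return rng.uniform(0.8, 1.6, n_items), rng.normal(mean_b, 0.4, n_items)

router = make_module(0.0)                  # Stage 1
stage2 = {"easy": make_module(-1.0), "medium": make_module(0.0), "hard": make_module(1.0)}

for theta in np.linspace(-2, 2, 9):
    estimates = []
    for _ in range(200):
        a1, b1 = router
        u1 = (rng.random(a1.size) < p_2pl(theta, a1, b1)).astype(int)
        # assumed number-correct routing rule (one adaptation point)
        route = "easy" if u1.sum() <= 3 else ("hard" if u1.sum() >= 7 else "medium")
        a2, b2 = stage2[route]
        u2 = (rng.random(a2.size) < p_2pl(theta, a2, b2)).astype(int)
        estimates.append(eap(np.concatenate([u1, u2]),
                             np.concatenate([a1, a2]),
                             np.concatenate([b1, b2]),
                             nodes, weights))
    print(f"theta = {theta:+.2f}   bias = {np.mean(estimates) - theta:+.3f}")
```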
Peer reviewed
Moses, Tim – Educational Measurement: Issues and Practice, 2014
This module describes and extends X-to-Y regression measures that have been proposed for use in the assessment of X-to-Y scaling and equating results. Measures are developed that are similar to those based on prediction error in regression analyses but that are directly suited to interests in scaling and equating evaluations. The regression and…
Descriptors: Scaling, Regression (Statistics), Equated Scores, Comparative Analysis
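Note: a rough illustration of the kind of comparison the module builds on (its actual measures are more refined than this): a Y-on-X regression minimizes squared prediction error, while a linear (mean-sigma) equating of X to Y instead matches Y's mean and standard deviation, so its prediction error is at least as large. The data below are simulated, not from the module.

```python
import numpy as np

rng = np.random.default_rng(1)
x = rng.normal(50, 10, 2000)
y = 0.8 * x + rng.normal(10, 6, 2000)      # hypothetical scores on two related tests

# Y-on-X regression: minimizes squared prediction error
slope, intercept = np.polyfit(x, y, 1)
y_reg = intercept + slope * x

# linear (mean-sigma) equating of X to Y: matches Y's mean and SD instead
y_eq = y.mean() + (y.std() / x.std()) * (x - x.mean())

print("regression RMSE:", round(float(np.sqrt(np.mean((y - y_reg) ** 2))), 3))
print("equating RMSE:  ", round(float(np.sqrt(np.mean((y - y_eq) ** 2))), 3))
```

The gap between the two error measures is one simple way to quantify how far an equating relationship sits from a prediction relationship.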
Moses, Tim; Liu, Jinghua – Educational Testing Service, 2011
In equating research and practice, equating functions that are smooth are typically assumed to be more accurate than equating functions with irregularities. This assumption presumes that population test score distributions are relatively smooth. In this study, two examples were used to reconsider common beliefs about smoothing and equating. The…
Descriptors: Equated Scores, Data Analysis, Scores, Methods
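Note: the smoothing at issue is typically loglinear presmoothing of the score distributions before equating. A minimal sketch of that step on simulated data rather than the study's (the polynomial degree and fitting details are assumptions):

```python
import numpy as np

rng = np.random.default_rng(2)
scores = np.arange(41)                     # hypothetical 40-item test
freq = np.bincount(rng.binomial(40, rng.beta(4, 4, 5000)), minlength=41).astype(float)

# polynomial loglinear model: log E[freq] = design @ beta (degree 3 assumed)
z = (scores - scores.mean()) / scores.std()
design = np.vander(z, 4, increasing=True)

beta = np.zeros(4)
beta[0] = np.log(freq.mean())              # reasonable starting point
for _ in range(25):                        # Newton-Raphson / Fisher scoring for the Poisson fit
    mu = np.exp(design @ beta)
    beta += np.linalg.solve(design.T @ (design * mu[:, None]),
                            design.T @ (freq - mu))

smoothed = np.exp(design @ beta)
print("observed vs. smoothed totals:", freq.sum(), round(smoothed.sum(), 1))
```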
Peer reviewed
Oh, Hyeonjoo; Moses, Tim – Journal of Educational Measurement, 2012
This study investigated differences between two approaches to chained equipercentile (CE) equating (one- and bi-direction CE equating) in nearly equal groups and relatively unequal groups. In one-direction CE equating, the new form is linked to the anchor in one sample of examinees and the anchor is linked to the reference form in the other…
Descriptors: Equated Scores, Statistical Analysis, Comparative Analysis, Differences
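Note: a stripped-down version of one-direction chained equipercentile equating on simulated data; empirical quantiles stand in for the percentile-rank formulas used in operational work, and the group and form parameters are invented.

```python
import numpy as np

rng = np.random.default_rng(3)
n = 5000
theta1, theta2 = rng.normal(0, 1, n), rng.normal(0.2, 1, n)     # two examinee groups
x  = np.clip(np.round(20 + 6 * theta1 + rng.normal(0, 2.0, n)), 0, 40)   # new form, group 1
a1 = np.clip(np.round(10 + 3 * theta1 + rng.normal(0, 1.5, n)), 0, 20)   # anchor, group 1
a2 = np.clip(np.round(10 + 3 * theta2 + rng.normal(0, 1.5, n)), 0, 20)   # anchor, group 2
y  = np.clip(np.round(20 + 6 * theta2 + rng.normal(0, 2.0, n)), 0, 40)   # reference form, group 2

def equipercentile(from_scores, to_scores, values):
    """Map values on the from-scale to the to-scale by matching percentile ranks."""
    pr = np.searchsorted(np.sort(from_scores), values, side="right") / from_scores.size
    return np.quantile(to_scores, np.clip(pr, 0, 1))

x_points = np.arange(0, 41)
a_equiv = equipercentile(x, a1, x_points)      # step 1: X -> A in sample 1
y_equiv = equipercentile(a2, y, a_equiv)       # step 2: A -> Y in sample 2
print(np.column_stack([x_points, np.round(y_equiv, 2)])[::10])
```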
Peer reviewed
PDF full text available on ERIC
Moses, Tim; Kim, Sooyeon – ETS Research Report Series, 2007
This study evaluated the impact of unequal reliability on test equating methods in the nonequivalent groups with anchor test (NEAT) design. Classical true score-based models were compared in terms of their assumptions about how reliability impacts test scores. These models were related to treatment of population ability differences by different…
Descriptors: Reliability, Equated Scores, Test Items, Statistical Analysis
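Note: the "treatment of population ability differences" refers to how the classical linear NEAT methods weight the anchor-score gap between the two groups. In the common synthetic-population formulation (shown as general background, e.g., the Tucker and Levine methods, not necessarily the report's exact models), the synthetic mean of form X is
\[
\mu_s(X) \;=\; \mu_1(X) \;-\; w_2\,\gamma_1\,\bigl[\mu_1(A) - \mu_2(A)\bigr],
\]
where the Tucker method takes \(\gamma_1 = \sigma_1(X,A)/\sigma_1^2(A)\) (a regression slope) and the Levine observed-score method, derived from classical true-score assumptions, takes \(\gamma_1 = \sigma_1^2(X)/\sigma_1(X,A)\) for an internal anchor; the two differ precisely in what they assume about reliability, which is the comparison the report takes up.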