Showing all 7 results
Peer reviewed
Lathrop, Quinn N.; Cheng, Ying – Journal of Educational Measurement, 2014
When cut scores for classifications occur on the total score scale, popular methods for estimating classification accuracy (CA) and classification consistency (CC) require assumptions about a parametric form of the test scores or about a parametric response model, such as item response theory (IRT). This article develops an approach to estimate CA…
Descriptors: Cutting Scores, Classification, Computation, Nonparametric Statistics
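The abstract stops short of the method itself, but the quantity at stake is easy to picture. Below is a minimal, purely illustrative resampling sketch in Python of classification consistency (CC) at a cut score: resample items to mimic two parallel forms and count agreement in pass/fail decisions. This is not the estimator Lathrop and Cheng develop; the response matrix, cut score, and resampling scheme are all assumptions for illustration.

```python
import numpy as np

rng = np.random.default_rng(0)

def classify(scores, cut):
    """Pass/fail classification against a cut score on the total score scale."""
    return scores >= cut

def bootstrap_cc(responses, cut, n_rep=1000):
    """Rough Monte Carlo estimate of classification consistency:
    resample items with replacement to mimic two parallel forms of the
    same length, then average the rate of agreeing classifications."""
    n_persons, n_items = responses.shape
    agree = 0.0
    for _ in range(n_rep):
        form_a = rng.integers(0, n_items, n_items)  # item indices, form A
        form_b = rng.integers(0, n_items, n_items)  # item indices, form B
        score_a = responses[:, form_a].sum(axis=1)
        score_b = responses[:, form_b].sum(axis=1)
        agree += np.mean(classify(score_a, cut) == classify(score_b, cut))
    return agree / n_rep

# Toy data: 200 examinees, 40 dichotomous items with varying difficulty.
difficulty = rng.uniform(0.3, 0.9, 40)
responses = (rng.random((200, 40)) < difficulty).astype(int)
print(bootstrap_cc(responses, cut=24))
```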
Peer reviewed
Tendeiro, Jorge N.; Meijer, Rob R. – Journal of Educational Measurement, 2014
Recent guidelines for fair educational testing advise checking the validity of individual test scores through the use of person-fit statistics. On the basis of the existing literature, however, it is unclear to practitioners which statistic to use. An overview of relatively simple existing nonparametric approaches to identify atypical response…
Descriptors: Educational Assessment, Test Validity, Scores, Statistical Analysis
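Among the simpler nonparametric person-fit statistics in this literature is the count of Guttman errors: pairs in which an examinee misses an easier item yet answers a harder one correctly. A minimal sketch, assuming dichotomous (0/1) responses and item proportions-correct as the difficulty ordering; whether this exact statistic is among those the authors review is an assumption here.

```python
import numpy as np

def guttman_errors(x, prop_correct):
    """Number of Guttman errors in one 0/1 response vector: pairs where
    the easier item (higher proportion-correct) is missed while the
    harder item is answered correctly."""
    order = np.argsort(-np.asarray(prop_correct))  # easiest item first
    x = np.asarray(x)[order]
    errors = 0
    for i in range(len(x)):
        for j in range(i + 1, len(x)):
            if x[i] == 0 and x[j] == 1:  # easier missed, harder correct
                errors += 1
    return errors

prop_correct = [0.9, 0.7, 0.5, 0.3, 0.1]              # toy item easiness values
print(guttman_errors([1, 1, 0, 1, 0], prop_correct))  # -> 1
```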
Peer reviewed
Liang, Tie; Wells, Craig S.; Hambleton, Ronald K. – Journal of Educational Measurement, 2014
As item response theory has come to be more widely applied, investigating the fit of a parametric model has become an important part of the measurement process. Promising solutions for detecting model misfit in IRT are still lacking. Douglas and Cohen introduced a general nonparametric approach, RISE (Root Integrated Squared Error), for detecting…
Descriptors: Item Response Theory, Measurement Techniques, Nonparametric Statistics, Models
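The name RISE suggests its form: the square root of the integrated squared error between the fitted parametric item response function (IRF) and a nonparametric (e.g., kernel-smoothed) estimate of it, weighted by the ability density. A grid-approximation sketch; the 2PL curve, the stand-in "nonparametric" curve, and the weights below are invented for illustration.

```python
import numpy as np

def rise(irf_parametric, irf_nonparametric, weights):
    """Root Integrated Squared Error between a parametric IRF and a
    nonparametric estimate, approximated on a grid; `weights` is a
    normalized ability density over the grid."""
    return np.sqrt(np.sum(weights * (irf_parametric - irf_nonparametric) ** 2))

theta = np.linspace(-4, 4, 81)                  # ability grid
weights = np.exp(-theta ** 2 / 2)
weights /= weights.sum()                        # standard-normal-like density
p_2pl = 1 / (1 + np.exp(-1.2 * (theta - 0.5)))  # hypothetical fitted 2PL IRF
p_np = p_2pl + 0.03 * np.sin(theta)             # stand-in kernel-smoothed IRF
print(rise(p_2pl, p_np, weights))
```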
Peer reviewed
Bolt, Daniel M.; Gierl, Mark J. – Journal of Educational Measurement, 2006
Inspection of differential item functioning (DIF) in translated test items can be informed by graphical comparisons of item response functions (IRFs) across translated forms. Due to the many forms of DIF that can emerge in such analyses, it is important to develop statistical tests that can confirm various characteristics of DIF when present…
Descriptors: Regression (Statistics), Tests, Test Bias, Test Items
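One concrete way to carry out the graphical comparison the abstract describes is to estimate each translated form's IRF nonparametrically from total-score groups and summarize the gap with an unsigned area. The sketch below assumes dichotomous data and is only a descriptive summary, not the confirmatory statistical tests Bolt and Gierl develop.

```python
import numpy as np

def empirical_irf(responses, item, n_groups=10):
    """Crude nonparametric IRF: proportion correct on `item` within
    equal-sized total-score groups (lowest-scoring group first)."""
    order = np.argsort(responses.sum(axis=1))
    return np.array([responses[g, item].mean()
                     for g in np.array_split(order, n_groups)])

def unsigned_area(irf_reference, irf_focal):
    """Unsigned area between two IRFs on a common grid; larger values
    suggest DIF, without indicating its form."""
    return np.mean(np.abs(irf_reference - irf_focal))

# Toy data: two "translated forms" answered by independent groups.
rng = np.random.default_rng(1)
form_ref = (rng.random((300, 20)) < 0.6).astype(int)
form_foc = (rng.random((300, 20)) < 0.6).astype(int)
print(unsigned_area(empirical_irf(form_ref, 0), empirical_irf(form_foc, 0)))
```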
Peer reviewed
Gierl, Mark J.; Leighton, Jacqueline P.; Tan, Xuan – Journal of Educational Measurement, 2006
DETECT, the acronym for Dimensionality Evaluation To Enumerate Contributing Traits, is an innovative and relatively new nonparametric dimensionality assessment procedure used to identify mutually exclusive, dimensionally homogeneous clusters of items using a genetic algorithm (Zhang & Stout, 1999). Because the clusters of items are mutually…
Descriptors: Program Evaluation, Cluster Grouping, Evaluation Methods, Multivariate Analysis
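The DETECT index itself has a compact form: a signed average of conditional covariances between item pairs, entering positively for pairs placed in the same cluster and negatively otherwise, with the genetic algorithm searching for the partition that maximizes it. Below is a sketch of the index for a fixed partition, assuming dichotomous responses and a rest-score conditioning variable; the scaling by 100 follows common convention, and none of this is the authors' exact implementation.

```python
import numpy as np
from itertools import combinations

def conditional_cov(responses, i, j):
    """Covariance of items i and j conditional on the rest score
    (total excluding both items), weighted across rest-score groups."""
    rest = responses.sum(axis=1) - responses[:, i] - responses[:, j]
    covs, sizes = [], []
    for s in np.unique(rest):
        group = responses[rest == s]
        if len(group) > 1:
            covs.append(np.cov(group[:, i], group[:, j])[0, 1])
            sizes.append(len(group))
    return np.average(covs, weights=sizes)

def detect_index(responses, clusters):
    """DETECT-style index for a fixed partition: conditional covariances
    count positively for within-cluster pairs, negatively for
    between-cluster pairs; scaled by 100 by convention."""
    n_items = responses.shape[1]
    total = sum((1 if clusters[i] == clusters[j] else -1)
                * conditional_cov(responses, i, j)
                for i, j in combinations(range(n_items), 2))
    return 100 * 2 * total / (n_items * (n_items - 1))

rng = np.random.default_rng(2)
responses = (rng.random((500, 6)) < 0.6).astype(int)
print(detect_index(responses, clusters=[0, 0, 0, 1, 1, 1]))
```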
Peer reviewed
Roussos, Louis A.; Ozbek, Ozlem Yesim – Journal of Educational Measurement, 2006
The development of the DETECT procedure marked an important advancement in nonparametric dimensionality analysis. DETECT is the first nonparametric technique to estimate the number of dimensions in a data set, estimate an effect size for multidimensionality, and identify which dimension is predominantly measured by each item. The efficacy of…
Descriptors: Evaluation Methods, Effect Size, Test Bias, Item Response Theory
Peer reviewed
Meijer, Rob R. – Journal of Educational Measurement, 2004
Two new methods have been proposed to determine unexpected sum scores on subtests (testlets) for both paper-and-pencil tests and computerized adaptive tests. A method based on a conservative bound using the hypergeometric distribution, denoted p, was compared with a method where the probability for each score combination was calculated using a…
Descriptors: Probability, Adaptive Testing, Item Response Theory, Scores
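The hypergeometric bound rests on a conditioning argument: given an examinee's total number-correct score, if the correct answers were spread over the test at random, the number correct on any given testlet would follow a hypergeometric distribution, so a small tail probability flags an unexpected testlet score. A sketch of that idea using SciPy; the exact statistic and conservative correction in the article may differ.

```python
from scipy.stats import hypergeom

def testlet_tail_prob(n_items, n_correct, testlet_size, testlet_correct):
    """If an examinee's n_correct correct answers were spread at random
    over all n_items, the number correct on a testlet would follow
    Hypergeometric(M=n_items, n=n_correct, N=testlet_size). A small
    lower-tail probability flags an unexpectedly low testlet score."""
    return hypergeom.cdf(testlet_correct, n_items, n_correct, testlet_size)

# 60-item test, 45 correct overall, yet only 2 of 10 testlet items correct.
print(testlet_tail_prob(60, 45, 10, 2))  # very small -> unexpected score
```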