Showing all 8 results
Peer reviewed
Tendeiro, Jorge N.; Meijer, Rob R. – Applied Psychological Measurement, 2013
To classify an item score pattern as not fitting a nonparametric item response theory (NIRT) model, the probability of exceedance (PE) of an observed response vector x can be determined as the sum of the probabilities of all response vectors that are, at most, as likely as x, conditional on the test's total score. Vector x is to be considered…
Descriptors: Probability, Nonparametric Statistics, Goodness of Fit, Test Length
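A minimal sketch of the PE computation described in the abstract above, assuming independent items with known success probabilities; the four-item test and the probabilities are illustrative, not taken from the article:

```python
from itertools import product
import math

# Hypothetical item success probabilities (not from the article).
p = [0.9, 0.7, 0.5, 0.3]

def pattern_prob(x, p):
    """Probability of response pattern x under independent Bernoulli items."""
    return math.prod(pi if xi else 1 - pi for xi, pi in zip(x, p))

def probability_of_exceedance(x, p):
    """PE of x: summed probability of all patterns with the same total score
    that are at most as likely as x, normalized within that total score."""
    px = pattern_prob(x, p)
    same_score = [y for y in product((0, 1), repeat=len(p))
                  if sum(y) == sum(x)]
    cond_mass = sum(pattern_prob(y, p) for y in same_score)
    tail_mass = sum(pattern_prob(y, p) for y in same_score
                    if pattern_prob(y, p) <= px + 1e-12)
    return tail_mass / cond_mass

# A reversed (Guttman-inconsistent) pattern gets a small PE,
# flagging possible misfit; the Guttman-consistent pattern gets PE = 1.
pe_odd = probability_of_exceedance((0, 0, 1, 1), p)
pe_guttman = probability_of_exceedance((1, 1, 0, 0), p)
```

A small PE (here for the pattern that answers the hard items correctly but misses the easy ones) is the signal for classifying a pattern as not fitting the model.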
Peer reviewed
Nandakumar, Ratna; Yu, Feng; Zhang, Yanwei – Applied Psychological Measurement, 2011
DETECT is a nonparametric methodology to identify the dimensional structure underlying test data. The associated DETECT index, "D_max," denotes the degree of multidimensionality in data. Conditional covariances (CCOV) are the building blocks of this index. In specifying population CCOVs, the latent test composite theta_TT…
Descriptors: Nonparametric Statistics, Statistical Analysis, Tests, Data
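The conditional covariances that serve as DETECT's building blocks can be estimated by averaging within-stratum item covariances over levels of the rest score; the two-cluster data-generating model below is an illustrative assumption, not the article's setup:

```python
import numpy as np

rng = np.random.default_rng(0)

# Illustrative two-cluster data: 6 dichotomous items driven by two
# correlated latent traits, 3 items per trait.
n, items = 5000, 6
theta = rng.multivariate_normal([0, 0], [[1.0, 0.5], [0.5, 1.0]], size=n)
load = np.array([0, 0, 0, 1, 1, 1])  # trait assignment per item
X = (rng.random((n, items)) < 1 / (1 + np.exp(-theta[:, load]))).astype(int)

def conditional_cov(X, i, j):
    """CCOV of items i and j: covariance within each level of the rest score
    (total score excluding items i and j), averaged with frequency weights."""
    rest = X.sum(axis=1) - X[:, i] - X[:, j]
    ccov = 0.0
    for s in np.unique(rest):
        grp = X[rest == s]
        if len(grp) > 1:
            ccov += len(grp) / len(X) * np.cov(grp[:, i], grp[:, j])[0, 1]
    return ccov

within = conditional_cov(X, 0, 1)   # same-cluster pair: CCOV tends positive
between = conditional_cov(X, 0, 3)  # cross-cluster pair: CCOV tends negative
```

The sign pattern (positive within clusters, negative between them) is what the DETECT index aggregates into a single measure of multidimensionality.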
Peer reviewed
Monahan, Patrick O.; Ankenmann, Robert D. – Applied Psychological Measurement, 2010
When the matching score is either less than perfectly reliable or not a sufficient statistic for determining latent proficiency in data conforming to item response theory (IRT) models, Type I error (TIE) inflation may occur for the Mantel-Haenszel (MH) procedure or any differential item functioning (DIF) procedure that matches on summed-item…
Descriptors: Error of Measurement, Item Response Theory, Test Bias, Scores
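As an illustration of the MH procedure discussed above, the following sketch computes the common odds ratio stratified on a summed anchor score; all data-generating choices (sample size, the 10-item anchor, the 0.8-logit uniform DIF shift against the focal group) are hypothetical:

```python
import numpy as np

rng = np.random.default_rng(1)

# Simulated reference (g=0) and focal (g=1) groups; the studied item is
# made harder for the focal group (uniform DIF), purely for illustration.
n = 3000
g = rng.integers(0, 2, n)
theta = rng.normal(0, 1, n)
anchor = (rng.random((n, 10)) < 1 / (1 + np.exp(-theta[:, None]))).astype(int)
studied = (rng.random(n) < 1 / (1 + np.exp(-(theta - 0.8 * g)))).astype(int)
score = anchor.sum(axis=1)  # matching variable: summed anchor score

def mantel_haenszel_or(studied, g, score):
    """Mantel-Haenszel common odds ratio, stratified on the matching score."""
    num = den = 0.0
    for s in np.unique(score):
        m = score == s
        A = np.sum((studied[m] == 1) & (g[m] == 0))  # reference correct
        B = np.sum((studied[m] == 0) & (g[m] == 0))  # reference incorrect
        C = np.sum((studied[m] == 1) & (g[m] == 1))  # focal correct
        D = np.sum((studied[m] == 0) & (g[m] == 1))  # focal incorrect
        T = A + B + C + D
        if T > 0:
            num += A * D / T
            den += B * C / T
    return num / den

alpha_mh = mantel_haenszel_or(studied, g, score)
# alpha_mh > 1 flags the item as favoring the reference group.
```

Because the summed anchor score is not a perfectly reliable measure of proficiency, the estimated odds ratio is attenuated relative to the true shift, which is exactly the Type I error mechanism the article examines.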
Peer reviewed
Cui, Zhongmin; Kolen, Michael J. – Applied Psychological Measurement, 2008
This article considers two methods of estimating standard errors of equipercentile equating: the parametric bootstrap method and the nonparametric bootstrap method. Using a simulation study, these two methods are compared under three sample sizes (300, 1,000, and 3,000), for two test content areas (the Iowa Tests of Basic Skills Maps and Diagrams…
Descriptors: Test Length, Test Content, Simulation, Computation
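The nonparametric bootstrap described above can be sketched as follows; the score distributions, sample size, and target score are illustrative assumptions, not the study's design:

```python
import numpy as np

rng = np.random.default_rng(2)

# Toy observed score samples for forms X and Y (assumed, not the study data).
x = rng.binomial(40, 0.6, size=1000)
y = rng.binomial(40, 0.55, size=1000)

def equipercentile_equate(x, y, score):
    """Map `score` on form X to the form-Y score at the same percentile rank."""
    prank = np.mean(x <= score)
    return np.quantile(y, prank)

def bootstrap_se(x, y, score, reps=500):
    """Nonparametric bootstrap SE: resample both samples with replacement and
    take the standard deviation of the equated score across replications."""
    eq = [equipercentile_equate(rng.choice(x, len(x)),
                                rng.choice(y, len(y)), score)
          for _ in range(reps)]
    return np.std(eq, ddof=1)

se = bootstrap_se(x, y, score=24)
# For the parametric bootstrap, one would instead resample from a fitted
# model of each score distribution (e.g., a smoothed log-linear model).
```

The design choice the article compares is exactly the resampling source: the empirical distributions (nonparametric) versus a fitted parametric model.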
Peer reviewed
Emons, Wilco H. M. – Applied Psychological Measurement, 2008
Person-fit methods are used to uncover atypical test performance as reflected in the pattern of scores on individual items in a test. Unlike parametric person-fit statistics, nonparametric person-fit statistics do not require fitting a parametric test theory model. This study investigates the effectiveness of generalizations of nonparametric…
Descriptors: Simulation, Nonparametric Statistics, Item Response Theory, Goodness of Fit
Peer reviewed
Sijtsma, Klaas – Applied Psychological Measurement, 1998
Reviews developments in nonparametric item-response theory (NIRT), from its historic origins in item-response theory (IRT) and scale analysis to new theoretical results for practical test construction. Discusses theoretical results from NIRT often relevant to IRT. Contains 134 references. (SLD)
Descriptors: Item Response Theory, Nonparametric Statistics, Research Methodology, Scores
Peer reviewed
Meijer, Rob R.; And Others – Applied Psychological Measurement, 1994
The power of the nonparametric person-fit statistic, U3, is investigated through simulations as a function of item characteristics, test characteristics, person characteristics, and the group to which examinees belong. Results suggest conditions under which relatively short tests can be used for person-fit analysis. (SLD)
Descriptors: Difficulty Level, Group Membership, Item Response Theory, Nonparametric Statistics
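The U3 statistic investigated above can be sketched using van der Flier's logit-based definition; the item proportions-correct below are hypothetical:

```python
import math

def u3(x, p):
    """Nonparametric U3 person-fit statistic.

    Items are sorted by decreasing proportion-correct p; U3 compares the
    observed pattern's summed logits with those of the Guttman pattern
    (all easiest items correct) and the reversed Guttman pattern.
    0 = perfect Guttman pattern, 1 = fully reversed (most aberrant) pattern.
    """
    order = sorted(range(len(p)), key=lambda i: -p[i])
    xs = [x[i] for i in order]
    logits = [math.log(p[i] / (1 - p[i])) for i in order]
    r = sum(xs)
    w_obs = sum(xi * li for xi, li in zip(xs, logits))
    w_max = sum(logits[:r])                  # Guttman pattern
    w_min = sum(logits[-r:]) if r else 0.0   # reversed Guttman pattern
    if w_max == w_min:
        return 0.0
    return (w_max - w_obs) / (w_max - w_min)

# Hypothetical proportions correct, easiest item first:
p = [0.9, 0.8, 0.6, 0.4, 0.2]
u3_guttman = u3([1, 1, 1, 0, 0], p)   # expected pattern -> 0
u3_reversed = u3([0, 0, 1, 1, 1], p)  # aberrant pattern -> 1
```

Large U3 values flag examinees whose response patterns are inconsistent with the item ordering, which is the behavior whose detection power the simulations examine.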
Peer reviewed
Penfield, Randall D. – Applied Psychological Measurement, 2005
Differential item functioning (DIF) is an important consideration in assessing the validity of test scores (Camilli & Shepard, 1994). A variety of statistical procedures have been developed to assess DIF in tests of dichotomous (Hills, 1989; Millsap & Everson, 1993) and polytomous (Penfield & Lam, 2000; Potenza & Dorans, 1995) items. Some of these…
Descriptors: Test Bias, Item Analysis, Psychological Studies, Evaluation Methods