Showing 1 to 15 of 27 results
Peer reviewed
Ligtvoet, Rudy; van der Ark, L. Andries; Bergsma, Wicher P.; Sijtsma, Klaas – Psychometrika, 2011
We propose three latent scales within the framework of nonparametric item response theory for polytomously scored items. Latent scales are models that imply an invariant item ordering, meaning that the order of the items is the same for each measurement value on the latent scale. This ordering property may be important in, for example,…
Descriptors: Intelligence Tests, Measures (Individuals), Methods, Item Response Theory
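For readers unfamiliar with the term, an invariant item ordering is usually formalized as an ordering of the item response functions that holds at every value of the latent variable. A standard formulation (our notation, not quoted from the paper) is:

E(X_i \mid \theta) \;\le\; E(X_j \mid \theta) \quad \text{for all } \theta \text{ and every ordered item pair } (i, j)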
Peer reviewed
Braeken, Johan – Psychometrika, 2011
Conditional independence is a fundamental principle in latent variable modeling and item response theory. Violations of this principle, commonly known as local item dependencies, are put in a test information perspective, and sharp bounds on these violations are defined. A modeling approach is proposed that makes use of a mixture representation of…
Descriptors: Test Construction, Item Response Theory, Models, Tests
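For context, the conditional (local) independence principle referred to here states that, given the latent variable, the item responses are statistically independent. In standard notation (ours, not quoted from the paper):

P(X_1 = x_1, \ldots, X_n = x_n \mid \theta) \;=\; \prod_{i=1}^{n} P(X_i = x_i \mid \theta)

Local item dependencies are violations of this factorization for particular item pairs.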
Peer reviewed
Maris, Gunter; van der Maas, Han – Psychometrika, 2012
Starting from an explicit scoring rule for time limit tasks incorporating both response time and accuracy, and a definite trade-off between speed and accuracy, a response model is derived. Since the scoring rule is interpreted as a sufficient statistic, the model belongs to the exponential family. The various marginal and conditional distributions…
Descriptors: Item Response Theory, Scoring, Reaction Time, Accuracy
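One scoring rule of this kind discussed in this literature is the signed residual time rule, which combines accuracy X \in \{0, 1\}, response time T, and time limit d (our notation; a sketch of the idea rather than a quotation from the paper):

S \;=\; (2X - 1)\,(d - T)

A fast correct response thus earns more credit than a slow correct one, while a fast incorrect response is penalized more heavily than a slow incorrect one, making the speed-accuracy trade-off explicit.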
Peer reviewed
Kim, Seonghoon – Psychometrika, 2012
Assuming item parameters on a test are known constants, the reliability coefficient for item response theory (IRT) ability estimates is defined for a population of examinees in two different ways: as (a) the product-moment correlation between ability estimates on two parallel forms of a test and (b) the squared correlation between the true…
Descriptors: Reliability, Item Response Theory, Tests, Correlation
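In symbols (our notation; the abstract is truncated at this point), the two definitions correspond to (a) the correlation between ability estimates from two parallel forms and (b) the squared correlation between the ability estimate and the true ability:

\rho_{(a)} = \operatorname{Corr}(\hat\theta_1, \hat\theta_2), \qquad \rho_{(b)} = \operatorname{Corr}(\hat\theta, \theta)^2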
Peer reviewed
Ranger, Jochen; Kuhn, Jörg-Tobias – Psychometrika, 2012
Latent trait models for response times in tests have become popular recently. One challenge for response time modeling is the fact that the distribution of response times can differ considerably even in similar tests. In order to reduce the need for tailor-made models, a model is proposed that unifies two popular approaches to response time…
Descriptors: Reaction Time, Tests, Models, Computation
Peer reviewed
Merkle, Edgar C.; Zeileis, Achim – Psychometrika, 2013
The issue of measurement invariance commonly arises in factor-analytic contexts, with methods for assessment including likelihood ratio tests, Lagrange multiplier tests, and Wald tests. These tests all require advance definition of the number of groups, group membership, and offending model parameters. In this paper, we study tests of measurement…
Descriptors: Factor Analysis, Evaluation Methods, Tests, Psychometrics
Peer reviewed
Hooker, Giles – Psychometrika, 2010
This paper presents a study of the impact of prior structure on paradoxical results in multidimensional item response theory. Paradoxical results refer to the possibility that an incorrect response could be beneficial to an examinee. We demonstrate that when three or more ability dimensions are being used, paradoxical results can be induced by…
Descriptors: Item Response Theory, Correlation, Tests, Statistical Analysis
Peer reviewed
Haberman, Shelby J.; Sinharay, Sandip – Psychometrika, 2010
Recently, there has been increasing interest in reporting subscores. This paper examines reporting of subscores using multidimensional item response theory (MIRT) models (e.g., Reckase in "Appl. Psychol. Meas." 21:25-36, 1997; C.R. Rao and S. Sinharay (Eds), "Handbook of Statistics, vol. 26," pp. 607-642, North-Holland, Amsterdam, 2007; Béguin &…
Descriptors: Item Response Theory, Psychometrics, Statistical Analysis, Scores
Peer reviewed
Molenaar, Dylan; Dolan, Conor V.; de Boeck, Paul – Psychometrika, 2012
The Graded Response Model (GRM; Samejima, "Estimation of ability using a response pattern of graded scores," Psychometric Monograph No. 17, Richmond, VA: The Psychometric Society, 1969) can be derived by assuming a linear regression of a continuous variable, Z, on the trait, [theta], to underlie the ordinal item scores (Takane & de Leeuw in…
Descriptors: Simulation, Regression (Statistics), Psychometrics, Item Response Theory
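The derivation referred to here is the standard underlying-variable formulation: a continuous response Z_j is assumed to be linearly related to the trait, and the observed ordinal score arises from thresholds on Z_j. In generic notation (ours, not the authors'):

Z_j = \lambda_j \theta + \varepsilon_j, \qquad X_j = c \;\iff\; \tau_{j,c} < Z_j \le \tau_{j,c+1}

With normally distributed residuals this yields the normal-ogive form of the graded response curves.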
Peer reviewed
Ligtvoet, Rudy – Psychometrika, 2012
In practice, the sum of the item scores is often used as a basis for comparing subjects. For items that have more than two ordered score categories, only the partial credit model (PCM) and special cases of this model imply that the subjects are stochastically ordered on the common latent variable. However, the PCM is very restrictive with respect…
Descriptors: Simulation, Item Response Theory, Comparative Analysis, Scores
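For reference, the partial credit model mentioned here gives the probability of score x on a polytomous item i with ordered categories 0, \ldots, m_i as (a standard formulation, not quoted from the paper):

P(X_i = x \mid \theta) \;=\; \frac{\exp\left(\sum_{j=1}^{x} (\theta - \delta_{ij})\right)}{\sum_{r=0}^{m_i} \exp\left(\sum_{j=1}^{r} (\theta - \delta_{ij})\right)}

where the empty sum for r = 0 is taken to be zero and the \delta_{ij} are the item step parameters.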
Peer reviewed
Yuan, Ke-Hai; Zhang, Zhiyong – Psychometrika, 2012
The paper develops a two-stage robust procedure for structural equation modeling (SEM) and an R package "rsem" to facilitate the use of the procedure by applied researchers. In the first stage, M-estimates of the saturated mean vector and covariance matrix of all variables are obtained. Those corresponding to the substantive variables…
Descriptors: Structural Equation Models, Tests, Federal Aid, Psychometrics
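A minimal sketch (not the rsem package itself) of the kind of first-stage computation described in the abstract: Huber-type M-estimates of the mean vector and covariance matrix, obtained by iteratively reweighted estimation. The function name, tuning constant, and weighting scheme are illustrative assumptions.

import numpy as np

def huber_mean_cov(X, c=2.0, n_iter=50, tol=1e-8):
    """Iteratively reweighted M-estimates of the mean and covariance.

    X : (n, p) data matrix; c : tuning constant on the Mahalanobis distance.
    """
    n, p = X.shape
    mu = X.mean(axis=0)
    sigma = np.cov(X, rowvar=False)
    for _ in range(n_iter):
        diff = X - mu
        inv = np.linalg.inv(sigma)
        # Mahalanobis distance of each case from the current center
        d = np.sqrt(np.einsum("ij,jk,ik->i", diff, inv, diff))
        # Huber-type weights: outlying cases are downweighted
        w = np.where(d <= c, 1.0, c / np.maximum(d, 1e-12))
        mu_new = (w[:, None] * X).sum(axis=0) / w.sum()
        diff = X - mu_new
        sigma_new = (w[:, None] * diff).T @ diff / n  # weighted covariance
        if np.abs(mu_new - mu).max() < tol and np.abs(sigma_new - sigma).max() < tol:
            return mu_new, sigma_new
        mu, sigma = mu_new, sigma_new
    return mu, sigma

# Second stage (per the abstract): fit the structural equation model to the
# robust covariance matrix of the substantive variables with an SEM package,
# using a sandwich-type correction for the standard errors and test statistic.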
Peer reviewed
Sinharay, Sandip; Holland, Paul W. – Psychometrika, 2010
The Non-Equivalent groups with Anchor Test (NEAT) design involves "missing data" that are "missing by design." Three nonlinear observed score equating methods used with a NEAT design are the "frequency estimation equipercentile equating" (FEEE), the "chain equipercentile equating" (CEE), and the "item-response-theory observed-score-equating" (IRT…
Descriptors: Equated Scores, Item Response Theory, Tests, Data Analysis
Peer reviewed
Draxler, Clemens – Psychometrika, 2010
This paper is concerned with supplementing statistical tests for the Rasch model so that, in addition to the probability of a Type I error, the probability of a Type II error can be controlled at a predetermined level by basing the test on an appropriate number of observations.…
Descriptors: Statistical Analysis, Probability, Sample Size, Error of Measurement
Peer reviewed
Davis-Stober, Clintin P.; Dana, Jason; Budescu, David V. – Psychometrika, 2010
"Improper linear models" (see Dawes, Am. Psychol. 34:571-582, "1979"), such as equal weighting, have garnered interest as alternatives to standard regression models. We analyze the general circumstances under which these models perform well by recasting a class of "improper" linear models as "proper" statistical models with a single predictor. We…
Descriptors: Least Squares Statistics, Multiple Regression Analysis, Heuristics, Tests
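As background for the recasting described here: an equal-weighting model predicts the criterion from the unweighted sum of the standardized predictors, which is equivalent to a proper regression with a single composite predictor (our notation):

\hat{y} = b_0 + b_1 S, \qquad S = \sum_{j=1}^{p} z_j

so only one slope is estimated instead of p separate regression weights.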
Peer reviewed
Culpepper, Steven Andrew – Psychometrika, 2012
The study of prediction bias is important and the last five decades include research studies that examined whether test scores differentially predict academic or employment performance. Previous studies used ordinary least squares (OLS) to assess whether groups differ in intercepts and slopes. This study shows that OLS yields inaccurate inferences…
Descriptors: Academic Achievement, Prediction, Measurement, Least Squares Statistics