Showing 1 to 15 of 128 results
Peer reviewed
Magis, David – Applied Psychological Measurement, 2013
This article focuses on the four-parameter logistic (4PL) model as an extension of the usual three-parameter logistic (3PL) model with an upper asymptote possibly different from 1. For a given item with fixed item parameters, Lord derived the value of the latent ability level that maximizes the item information function under the 3PL model. The…
Descriptors: Item Response Theory, Models, Statistical Analysis, Algebra
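The 4PL model discussed in the abstract above can be illustrated with a short sketch. This is not the article's derivation, only a standard numerical illustration: the 4PL response function adds a lower asymptote c and an upper asymptote d to the logistic curve, and the Fisher information of a dichotomous item is (P')² / (P(1 − P)). All parameter values are hypothetical.

```python
import math

def p_4pl(theta, a, b, c, d):
    """4PL probability of a correct response: discrimination a,
    difficulty b, lower asymptote c, upper asymptote d
    (setting d = 1 recovers the 3PL)."""
    logistic = 1.0 / (1.0 + math.exp(-a * (theta - b)))
    return c + (d - c) * logistic

def info_4pl(theta, a, b, c, d):
    """Fisher information of a dichotomous item: (P')^2 / (P (1 - P))."""
    logistic = 1.0 / (1.0 + math.exp(-a * (theta - b)))
    p = c + (d - c) * logistic
    dp = a * (d - c) * logistic * (1.0 - logistic)
    return dp * dp / (p * (1.0 - p))
```

Scanning `info_4pl` over a grid of theta values locates the information-maximizing ability level numerically; with c = 0 and d = 1 the maximum sits at theta = b, as in the classical 2PL result.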
Peer reviewed
Houts, Carrie R.; Edwards, Michael C. – Applied Psychological Measurement, 2013
The violation of the assumption of local independence when applying item response theory (IRT) models has been shown to have a negative impact on all estimates obtained from the given model. Numerous indices and statistics have been proposed to aid analysts in the detection of local dependence (LD). A Monte Carlo study was conducted to evaluate…
Descriptors: Item Response Theory, Psychological Evaluation, Data, Statistical Analysis
Peer reviewed
Geerlings, Hanneke; van der Linden, Wim J.; Glas, Cees A. W. – Applied Psychological Measurement, 2013
Optimal test-design methods are applied to rule-based item generation. Three different cases of automated test design are presented: (a) test assembly from a pool of pregenerated, calibrated items; (b) test generation on the fly from a pool of calibrated item families; and (c) test generation on the fly directly from calibrated features defining…
Descriptors: Test Construction, Test Items, Item Banks, Automation
Peer reviewed
Li, Ying; Lissitz, Robert W. – Applied Psychological Measurement, 2012
To address the lack of attention to construct shift in item response theory (IRT) vertical scaling, a multigroup, bifactor model was proposed to model the common dimension for all grades and the grade-specific dimensions. Bifactor model estimation accuracy was evaluated through a simulation study with manipulated factors of percentage of common…
Descriptors: Item Response Theory, Scaling, Models, Computation
Peer reviewed
Wang, Wei; Tay, Louis; Drasgow, Fritz – Applied Psychological Measurement, 2013
There has been growing use of ideal point models to develop scales measuring important psychological constructs. For meaningful comparisons across groups, it is important to identify items on such scales that exhibit differential item functioning (DIF). In this study, the authors examined several methods for assessing DIF on polytomous items…
Descriptors: Test Bias, Effect Size, Item Response Theory, Statistical Analysis
Peer reviewed
Ferrando, Pere J. – Applied Psychological Measurement, 2011
Models for measuring individual response precision have been proposed for binary and graded responses. However, more continuous formats are quite common in personality measurement and are usually analyzed with the linear factor analysis model. This study extends the general Gaussian person-fluctuation model to the continuous-response case and…
Descriptors: Factor Analysis, Models, Individual Differences, Responses
Peer reviewed
Nandakumar, Ratna; Hotchkiss, Lawrence – Applied Psychological Measurement, 2012
The PROC NLMIXED procedure in Statistical Analysis System can be used to estimate parameters of item response theory (IRT) models. The data for this procedure are set up in a particular format called the "long format." With the long format, the program takes a substantial amount of time to execute. This article describes a format called the "wide…
Descriptors: Item Response Theory, Models, Statistical Analysis, Computer Software
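The long versus wide distinction in the abstract above is a general data-layout idea, sketched here in plain Python rather than SAS: wide data holds one row per examinee with one column per item, while the long layout that procedures such as PROC NLMIXED conventionally expect holds one record per person-item pair. The data values are hypothetical.

```python
# Hypothetical item-response data: one row per examinee ("wide" layout).
wide = [
    {"person": 1, "item1": 1, "item2": 0},
    {"person": 2, "item1": 0, "item2": 0},
    {"person": 3, "item1": 1, "item2": 1},
]

def wide_to_long(rows, item_names):
    """Reshape to one record per person-item pair (the 'long' layout)."""
    long_rows = []
    for row in rows:
        for item in item_names:
            long_rows.append(
                {"person": row["person"], "item": item, "resp": row[item]}
            )
    return long_rows

long_data = wide_to_long(wide, ["item1", "item2"])
# long_data has persons x items records instead of one row per person,
# which is why long-format runs process many more rows.
```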
Peer reviewed
Chiu, Chia-Yi – Applied Psychological Measurement, 2013
Most methods for fitting cognitive diagnosis models to educational test data and assigning examinees to proficiency classes require the Q-matrix that associates each item in a test with the cognitive skills (attributes) needed to answer it correctly. In most cases, the Q-matrix is not known but is constructed from the (fallible) judgments of…
Descriptors: Cognitive Tests, Diagnostic Tests, Models, Statistical Analysis
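The Q-matrix named in the abstract above can be made concrete with a small illustrative example; this is not the article's estimation method, just the standard object it operates on. The entries and the conjunctive (DINA-style) response rule below are hypothetical choices for illustration.

```python
# Hypothetical Q-matrix for a 4-item test measuring 3 attributes:
# Q[j][k] = 1 if item j requires cognitive attribute k.
Q = [
    [1, 0, 0],  # item 1 needs attribute 1 only
    [0, 1, 0],  # item 2 needs attribute 2 only
    [1, 1, 0],  # item 3 needs attributes 1 and 2
    [0, 1, 1],  # item 4 needs attributes 2 and 3
]

def can_answer(attribute_profile, q_row):
    """Under a conjunctive (DINA-style) rule, an examinee answers an
    item correctly only if they master every attribute it requires."""
    return all(a >= q for a, q in zip(attribute_profile, q_row))

# An examinee who has mastered attributes 1 and 2 but not 3:
profile = [1, 1, 0]
expected = [can_answer(profile, row) for row in Q]
```

A misspecified entry in Q changes which proficiency class an examinee's response pattern points to, which is why errors in expert-constructed Q-matrices matter.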
Peer reviewed
Svetina, Dubravka; Levy, Roy – Applied Psychological Measurement, 2012
An overview of popular software packages for conducting dimensionality assessment in multidimensional models is presented. Specifically, five popular software packages are described in terms of their capabilities to conduct dimensionality assessment with respect to the nature of analysis (exploratory or confirmatory), types of data (dichotomous,…
Descriptors: Computer Software, Item Response Theory, Models, Factor Analysis
Peer reviewed
Attali, Yigal – Applied Psychological Measurement, 2011
Recently, Attali and Powers investigated the usefulness of providing immediate feedback on the correctness of answers to constructed response questions and the opportunity to revise incorrect answers. This article introduces an item response theory (IRT) model for scoring revised responses to questions when several attempts are allowed. The model…
Descriptors: Feedback (Response), Item Response Theory, Models, Error Correction
Peer reviewed
van der Linden, Wim J. – Applied Psychological Measurement, 2011
It is shown how the time limit on a test can be set to control the probability of a test taker running out of time before completing it. The probability is derived from the item parameters in the lognormal model for response times. Examples of curves representing the probability of running out of time on a test with given parameters as a function…
Descriptors: Testing, Timed Tests, Models, Probability
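The probability described in the abstract above can be approximated by simulation. This sketch assumes van der Linden's lognormal response-time model, log T_i ~ N(beta_i − tau, 1/alpha_i²), with time intensity beta_i, time discrimination alpha_i, and examinee speed tau; the article derives such probabilities analytically from the item parameters, whereas this is only a hypothetical Monte Carlo illustration with made-up values.

```python
import math
import random

random.seed(1)

def prob_out_of_time(betas, alphas, tau, limit, n_sims=5000):
    """Monte Carlo estimate of P(total test time > limit) under the
    lognormal response-time model: log T_i ~ N(beta_i - tau, 1/alpha_i^2)."""
    over = 0
    for _ in range(n_sims):
        total = sum(
            math.exp(random.gauss(b - tau, 1.0 / a))
            for b, a in zip(betas, alphas)
        )
        if total > limit:
            over += 1
    return over / n_sims

# Hypothetical 10-item test (~exp(4) ≈ 55 s per item for an
# average-speed examinee, tau = 0) with a 10-minute limit.
p = prob_out_of_time(betas=[4.0] * 10, alphas=[2.0] * 10,
                     tau=0.0, limit=600.0)
```

Plotting `p` against `limit` reproduces the kind of curve the article describes: the time limit can then be read off at any tolerated probability of running out of time.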
Peer reviewed
DeCarlo, Lawrence T. – Applied Psychological Measurement, 2012
In the typical application of a cognitive diagnosis model, the Q-matrix, which reflects the theory with respect to the skills indicated by the items, is assumed to be known. However, the Q-matrix is usually determined by expert judgment, and so there can be uncertainty about some of its elements. Here it is shown that this uncertainty can be…
Descriptors: Bayesian Statistics, Item Response Theory, Simulation, Models
Peer reviewed
Liu, Yang; Thissen, David – Applied Psychological Measurement, 2012
Local dependence (LD) refers to the violation of the local independence assumption of most item response models. Statistics that indicate LD between a pair of items on a test or questionnaire that is being fitted with an item response model can play a useful diagnostic role in applications of item response theory. In this article, a new score test…
Descriptors: Item Response Theory, Statistical Analysis, Models, Identification
Peer reviewed
Morse, Brendan J.; Johanson, George A.; Griffeth, Rodger W. – Applied Psychological Measurement, 2012
Recent simulation research has demonstrated that using simple raw scores to operationalize a latent construct can result in inflated Type I error rates for the interaction term of a moderated statistical model when the interaction (or lack thereof) is proposed at the latent variable level. Rescaling the scores using an appropriate item response…
Descriptors: Item Response Theory, Multiple Regression Analysis, Error of Measurement, Models
Peer reviewed
DeMars, Christine E. – Applied Psychological Measurement, 2012
A testlet is a cluster of items that share a common passage, scenario, or other context. These items might measure something in common beyond the trait measured by the test as a whole; if so, the model for the item responses should allow for this testlet trait. But modeling testlet effects that are negligible makes the model unnecessarily…
Descriptors: Test Items, Item Response Theory, Comparative Analysis, Models