Showing all 12 results
Peer reviewed
Ferrando, Pere J. – Applied Psychological Measurement, 2009
Spearman's factor-analytic model has been proposed as a unidimensional linear item response theory (IRT) model for continuous item responses. This article first proposes a reexpression of the model that leads to a form similar to that of standard IRT models for binary responses and discusses the item indices of difficulty, discrimination, and…
Descriptors: Factor Analysis, Item Response Theory, Discriminant Analysis, Psychometrics
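As a hedged illustration of the re-expression (generic notation, not necessarily Ferrando's), Spearman's one-factor model for a continuous response X_j can be written as

    X_j = \mu_j + \lambda_j \theta + \varepsilon_j, \qquad E(X_j \mid \theta) = \mu_j + \lambda_j \theta,

an intercept/slope form that parallels binary IRT: the loading \lambda_j plays the role of a discrimination index, the intercept \mu_j that of a location (difficulty) index, and \lambda_j / \sigma_{\varepsilon_j} is one plausible standardized discrimination.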
Peer reviewed
van Barneveld, Christina – Applied Psychological Measurement, 2007
The purpose of this study is to examine the effects of a false assumption regarding the motivation of examinees on test construction. Simulated data were generated using two models of item responses (the three-parameter logistic item response model alone and in combination with Wise's examinee persistence model) and were calibrated using a…
Descriptors: Test Construction, Item Response Theory, Models, Bayesian Statistics
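For readers unfamiliar with the three-parameter logistic (3PL) model used to simulate the data, here is a minimal Python sketch of generating 3PL item responses; all parameter values are illustrative, and Wise's persistence component is not modeled:

    import numpy as np

    rng = np.random.default_rng(0)

    def p_3pl(theta, a, b, c):
        """Three-parameter logistic probability of a correct response."""
        return c + (1.0 - c) / (1.0 + np.exp(-a * (theta - b)))

    n_persons, n_items = 1000, 40
    theta = rng.standard_normal(n_persons)                 # abilities
    a = rng.lognormal(mean=0.0, sigma=0.3, size=n_items)   # discriminations
    b = rng.standard_normal(n_items)                       # difficulties
    c = np.full(n_items, 0.2)                              # pseudo-guessing

    p = p_3pl(theta[:, None], a, b, c)                     # persons x items
    responses = (rng.random((n_persons, n_items)) < p).astype(int)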
Peer reviewed
Seraphine, Anne E. – Applied Psychological Measurement, 2000
Examined the performance of DIMTEST, through simulation, for unidimensional and two-dimensional data that exhibited ceiling effects generated through changes in location and scale of the theta distribution. Results indicate that the power of DIMTEST is reduced as the location shifts upward and the scale shifts downward. Considers the selection…
Descriptors: Difficulty Level, Item Response Theory, Monte Carlo Methods
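A minimal sketch, assuming normally distributed abilities, of the location/scale manipulation the study describes (values illustrative): shifting the location upward and shrinking the scale pushes more examinees above the range where the items discriminate, which is what produces the ceiling effects.

    import numpy as np

    rng = np.random.default_rng(1)
    baseline = rng.normal(loc=0.0, scale=1.0, size=5000)   # reference theta
    ceiling  = rng.normal(loc=1.5, scale=0.5, size=5000)   # shifted up, compressed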
Peer reviewed
Kim, Seock-Ho; Cohen, Allan S. – Applied Psychological Measurement, 1998
Compared three methods for developing a common metric under item response theory through simulation. For smaller numbers of common items, linking using the characteristic curve method yielded smaller root mean square differences for both item discrimination and difficulty parameters. For larger numbers of common items, the three methods were…
Descriptors: Comparative Analysis, Difficulty Level, Item Response Theory, Simulation
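For context, every linking method of this kind estimates the linear transformation between metrics

    \theta^* = A\theta + B, \qquad a_j^* = a_j / A, \qquad b_j^* = A b_j + B.

In the characteristic curve approach, A and B are chosen to minimize a squared distance between the test characteristic curves implied by the transformed and the target common-item parameters, e.g. the Stocking-Lord criterion

    F(A, B) = \sum_{\theta} \Bigl[ \sum_j P_j(\theta;\, a_j/A,\, A b_j + B) - \sum_j P_j(\theta;\, a_j^*,\, b_j^*) \Bigr]^2,

shown here as one common variant, not necessarily the exact method compared in the study.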
Peer reviewed
Meijer, Rob R. – Applied Psychological Measurement, 1995
A statistic used by R. Meijer (1994) to determine person fit referred to the number of errors from the deterministic Guttman model (L. Guttman, 1950), but it was, in fact, based on the number of errors as defined by J. Loevinger (1947, 1948). (SLD)
Descriptors: Difficulty Level, Models, Responses, Scaling
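As a hedged sketch of the error count at issue (Loevinger-style pair counting; conventions for ties and ordering vary across papers):

    import numpy as np

    def guttman_errors(x, p):
        """Count (0,1) pairs with items sorted from easiest to hardest.

        x : 0/1 response vector; p : proportion correct per item.
        An 'error' is an easier item answered wrong while a harder
        item is answered right."""
        order = np.argsort(-np.asarray(p))   # easiest (highest p) first
        x = np.asarray(x)[order]
        return sum(int(x[i] == 0 and x[j] == 1)
                   for i in range(len(x)) for j in range(i + 1, len(x)))

    print(guttman_errors([1, 0, 1, 0], [0.9, 0.7, 0.4, 0.2]))  # -> 1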
Peer reviewed
Eggen, Theo J. H. M.; Verschoor, Angela J. – Applied Psychological Measurement, 2006
Computerized adaptive tests (CATs) are individualized tests that, from a measurement point of view, are optimal for each individual, possibly under some practical conditions. In the present study, it is shown that maximum information item selection in CATs using an item bank that is calibrated with the one- or the two-parameter logistic model…
Descriptors: Adaptive Testing, Difficulty Level, Test Items, Item Response Theory
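A minimal sketch of maximum information item selection under the 2PL (a generic illustration, not the authors' implementation). Note that under the 1PL all discriminations are equal, so the rule reduces to choosing the item whose difficulty is closest to the current ability estimate.

    import numpy as np

    def info_2pl(theta, a, b):
        """Fisher information of 2PL items at ability theta: a^2 * P * (1 - P)."""
        p = 1.0 / (1.0 + np.exp(-a * (theta - b)))
        return a**2 * p * (1.0 - p)

    def next_item(theta_hat, a, b, administered):
        """Pick the not-yet-administered item with maximum information."""
        info = info_2pl(theta_hat, a, b)
        info[list(administered)] = -np.inf
        return int(np.argmax(info))

    a = np.array([1.2, 0.8, 1.5, 1.0]); b = np.array([-1.0, 0.0, 0.5, 1.2])
    print(next_item(0.4, a, b, administered={2}))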
Peer reviewed
Ackerman, Terry A. – Applied Psychological Measurement, 1989
The characteristics of unidimensional ability estimates obtained from data generated using multidimensional compensatory models were compared with those of estimates obtained from data generated using noncompensatory item response theory (IRT) models. The least squares matching procedures used represent a good method of matching the two multidimensional IRT models. (TJH)
Descriptors: Ability Identification, Computer Software, Difficulty Level, Estimation (Mathematics)
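For reference, here are common two-dimensional forms of the two model families being contrasted (generic notation; treat as a sketch). In the compensatory model the abilities combine inside a single logistic, so a high \theta_1 can offset a low \theta_2; in the noncompensatory model the probability is a product of per-dimension terms, so it cannot exceed its weakest component:

    \text{compensatory:} \quad P(U = 1 \mid \theta_1, \theta_2) = c + (1 - c)\,\frac{1}{1 + \exp[-(a_1\theta_1 + a_2\theta_2 + d)]}

    \text{noncompensatory:} \quad P(U = 1 \mid \theta_1, \theta_2) = c + (1 - c) \prod_{k=1}^{2} \frac{1}{1 + \exp[-a_k(\theta_k - b_k)]}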
Peer reviewed
Reckase, Mark D.; McKinley, Robert L. – Applied Psychological Measurement, 1991
The concept of item discrimination is generalized to the case in which more than one ability is required to determine the correct response to an item, using the conceptual framework of item response theory and the definition of multidimensional item difficulty previously developed by M. Reckase (1985). (SLD)
Descriptors: Ability, Definitions, Difficulty Level, Equations (Mathematics)
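The indices being generalized are usually written as follows (this matches Reckase's 1985 definitions as commonly cited):

    \mathrm{MDISC}_j = \sqrt{\textstyle\sum_k a_{jk}^2}, \qquad \mathrm{MDIFF}_j = \frac{-d_j}{\mathrm{MDISC}_j},

where a_{jk} are item j's discrimination parameters, d_j is its intercept, and MDIFF_j is the signed distance from the origin to the point of steepest slope along the item's direction of best measurement.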
Peer reviewed
Liou, Michelle – Applied Psychological Measurement, 1988
In applying I. I. Bejar's method for detecting the dimensionality of achievement tests, researchers should be cautious in interpreting the slope of the principal axis. Other information from the data is needed in conjunction with Bejar's method of addressing item dimensionality. (SLD)
Descriptors: Achievement Tests, Computer Simulation, Difficulty Level, Equated Scores
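A minimal sketch of the principal-axis computation the caution concerns, assuming Bejar's procedure of pairing item difficulty estimates calibrated from a suspected subscale with those calibrated from the total test (a slope near 1 is read as consistent with unidimensionality):

    import numpy as np

    def principal_axis_slope(b_subset, b_total):
        """Slope of the first principal axis of paired difficulty estimates."""
        cov = np.cov(b_subset, b_total)
        evals, evecs = np.linalg.eigh(cov)
        v = evecs[:, np.argmax(evals)]     # direction of largest variance
        return v[1] / v[0]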
Peer reviewed
Meijer, Rob R.; And Others – Applied Psychological Measurement, 1994
The power of the nonparametric person-fit statistic, U3, is investigated through simulations as a function of item characteristics, test characteristics, person characteristics, and the group to which examinees belong. Results suggest conditions under which relatively short tests can be used for person-fit analysis. (SLD)
Descriptors: Difficulty Level, Group Membership, Item Response Theory, Nonparametric Statistics
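For reference, U3 as usually defined (van der Flier, 1982; treat the details as a sketch): with items ordered by decreasing proportion correct \pi_1 \ge \cdots \ge \pi_J, weights w_j = \ln[\pi_j / (1 - \pi_j)], responses x_j, and number-correct score r,

    U3 = \frac{\sum_{j=1}^{r} w_j - \sum_{j=1}^{J} x_j w_j}{\sum_{j=1}^{r} w_j - \sum_{j=J-r+1}^{J} w_j},

which equals 0 for a perfect Guttman pattern and 1 for its complete reversal at the same total score.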
Peer reviewed
Yao, Lihua; Schwarz, Richard D. – Applied Psychological Measurement, 2006
Multidimensional item response theory (IRT) models have been proposed for better understanding the dimensional structure of data or to define diagnostic profiles of student learning. A compensatory multidimensional two-parameter partial credit model (M-2PPC) for constructed-response items is presented that is a generalization of those proposed to…
Descriptors: Models, Item Response Theory, Markov Processes, Monte Carlo Methods
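One common way to write a compensatory multidimensional generalization of the two-parameter partial credit model, for orientation only (generic notation, not necessarily the exact M-2PPC parameterization):

    P(X_j = k \mid \boldsymbol{\theta}) = \frac{\exp\bigl[\sum_{v=0}^{k} (\mathbf{a}_j^{\top}\boldsymbol{\theta} - b_{jv})\bigr]}{\sum_{c=0}^{K_j} \exp\bigl[\sum_{v=0}^{c} (\mathbf{a}_j^{\top}\boldsymbol{\theta} - b_{jv})\bigr]}, \qquad b_{j0} \equiv 0,

with score categories k = 0, \ldots, K_j and a single discrimination vector \mathbf{a}_j shared across categories.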
Peer reviewed
Bejar, Isaac I.; Yocom, Peter – Applied Psychological Measurement, 1991
An approach to test modeling is illustrated that encompasses both response consistency and response difficulty. This generative approach makes validation an ongoing process. An analysis of hidden figure items with 60 high school students supports the feasibility of the method. (SLD)
Descriptors: Construct Validity, Difficulty Level, Evaluation Methods, High School Students