Showing all 13 results
Peer reviewed
Laenen, Annouschka; Alonso, Ariel; Molenberghs, Geert; Vangeneugden, Tony; Mallinckrodt, Craig H. – Applied Psychological Measurement, 2010
Longitudinal studies are permeating clinical trials in psychiatry. It is therefore of utmost importance to study the psychometric properties of the rating scales frequently used in these trials within a longitudinal framework. However, intrasubject serial correlation and memory effects are problematic issues often encountered in longitudinal data.…
Descriptors: Psychiatry, Rating Scales, Memory, Psychometrics
Peer reviewed
Roberts, James S. – Applied Psychological Measurement, 2008
Orlando and Thissen (2000) developed an item fit statistic for binary item response theory (IRT) models known as S-X². This article generalizes their statistic to polytomous unfolding models. Four alternative formulations of S-X² are developed for the generalized graded unfolding model (GGUM). The GGUM is a…
Descriptors: Item Response Theory, Goodness of Fit, Test Items, Models
Peer reviewed
Bolt, Daniel M.; Johnson, Timothy R. – Applied Psychological Measurement, 2009
A multidimensional item response theory model that accounts for response style factors is presented. The model, a multidimensional extension of Bock's nominal response model, is shown to allow for the study and control of response style effects in ordered rating scale data so as to reduce bias in measurement of the intended trait. In the current…
Descriptors: Response Style (Tests), Rating Scales, Item Response Theory, Individual Differences
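The model described above extends Bock's nominal response model with extra dimensions that carry response style. As a minimal sketch of the general idea (the slope and intercept values below are hypothetical illustrations, not the authors' estimates), category probabilities are a softmax over linear combinations of a trait dimension and a style dimension:

```python
import numpy as np

def mnrm_probs(theta, a, c):
    # Softmax over z_k = a_k . theta + c_k (numerically stabilized)
    z = a @ theta + c
    ez = np.exp(z - z.max())
    return ez / ez.sum()

# 4 ordered categories, 2 dimensions. Column 1: trait slopes rising
# with category order. Column 2: an extreme-response-style dimension
# loading on the two end categories. Values are illustrative.
a = np.array([
    [-1.5,  1.0],
    [-0.5, -1.0],
    [ 0.5, -1.0],
    [ 1.5,  1.0],
])
c = np.zeros(4)

# An examinee at the trait midpoint with a high extreme-response style
p = mnrm_probs(np.array([0.0, 1.0]), a, c)
```

With the style column picking out the end categories, raising the style score shifts probability mass toward the extreme responses, which is exactly the effect such a model is designed to separate from the intended trait.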
Peer reviewed
Yi, Hyun Sook; Kim, Seonghoon; Brennan, Robert L. – Applied Psychological Measurement, 2007
Large-scale testing programs involving classification decisions typically have multiple forms available and conduct equating to ensure cut-score comparability across forms. A test developer might be interested in the extent to which an examinee who happens to take a particular form would have a consistent classification decision if he or she had…
Descriptors: Classification, Reliability, Indexes, Computation
Peer reviewed
Lee, Won-Chan; Hanson, Bradley A.; Brennan, Robert L. – Applied Psychological Measurement, 2002
This article describes procedures for estimating various indices of classification consistency and accuracy for multiple category classifications using data from a single test administration. The estimates of the classification consistency and accuracy indices are compared under three different psychometric models: the two-parameter beta binomial,…
Descriptors: Classification, True Scores, Psychometrics, Item Response Theory
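Both entries above concern classification consistency estimated from a single administration. As a minimal numerical sketch (illustrative probabilities, not the articles' beta-binomial or IRT estimators): given each examinee's probability of falling in each score category on a randomly parallel form, the agreement index is the average probability of identical placement on two independent forms, and kappa corrects that agreement for chance:

```python
import numpy as np

# Hypothetical per-examinee probabilities of landing in each of three
# score categories on a randomly parallel form (rows sum to 1).
p = np.array([
    [0.80, 0.15, 0.05],
    [0.10, 0.70, 0.20],
    [0.05, 0.25, 0.70],
    [0.60, 0.30, 0.10],
])

# Consistency: chance of identical classification on two independent
# parallel forms, averaged over examinees.
phi = (p**2).sum(axis=1).mean()

# Chance agreement from the marginal category proportions, and the
# kappa coefficient correcting for it.
marginal = p.mean(axis=0)
chance = (marginal**2).sum()
kappa = (phi - chance) / (1 - chance)
```

The single-administration methods in these articles replace the hypothetical probability table with model-based estimates, but the indices themselves have this simple structure.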
Peer reviewed
Janssen, Rianne; De Boeck, Paul – Applied Psychological Measurement, 1997
Used the multicomponent latent trait model (MTLM) (S. Embretson, 1980, 1984) for 3 different synonym tasks completed by 212 and 257 Belgian secondary school students. Developed a heuristic evaluation procedure that tested features of the model and provided an explanation for why the MTLM did not fit the data. (SLD)
Descriptors: Componential Analysis, Foreign Countries, Heuristics, Item Response Theory
Peer reviewed
Zinbarg, Richard E.; Yovel, Iftah; Revelle, William; McDonald, Roderick P. – Applied Psychological Measurement, 2006
The extent to which a scale score generalizes to a latent variable common to all of the scale's indicators is indexed by the scale's general factor saturation. Seven techniques for estimating this parameter, omega-hierarchical (ω_h), are compared in a series of simulated data sets. Primary comparisons were based on 160 artificial…
Descriptors: Computation, Factor Analysis, Reliability, Correlation
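The quantity being estimated, ω_h, has a simple closed form once a bifactor (general plus group factors) solution is available: the squared sum of the general-factor loadings over the model-implied variance of the total score. A minimal sketch with hypothetical loadings (not the article's simulation design):

```python
import numpy as np

# Illustrative bifactor loadings for 6 standardized items: one general
# factor and two group factors (items 0-2 on group 1, items 3-5 on
# group 2). Values are hypothetical.
lam_g = np.array([0.6, 0.6, 0.5, 0.5, 0.4, 0.4])   # general-factor loadings
lam_s = np.array([0.3, 0.3, 0.4, 0.4, 0.5, 0.5])   # group-factor loadings
groups = [np.array([0, 1, 2]), np.array([3, 4, 5])]

# Unique variances under the bifactor model
psi = 1.0 - lam_g**2 - lam_s**2

# Model-implied variance of the unit-weighted total score
var_total = lam_g.sum()**2 + sum(lam_s[g].sum()**2 for g in groups) + psi.sum()

# omega_h: share of total-score variance due to the general factor
omega_h = lam_g.sum()**2 / var_total
```

The estimation techniques compared in the article differ mainly in how the general-factor loadings are obtained, not in this final formula.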
Peer reviewed
Andrich, David; Luo, Guanzhong – Applied Psychological Measurement, 1993
A unidimensional model for responses to statements that have an unfolding structure was constructed from the cumulative Rasch model for ordered response categories. A joint maximum likelihood estimation procedure was investigated. Analyses of data from a small simulation and a real data set show that the model is readily applicable. (SLD)
Descriptors: Attitude Measures, Data Collection, Equations (Mathematics), Item Response Theory
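The single-peaked response process such unfolding models describe can be illustrated directly: the probability of agreeing with a statement is highest when the person's location matches the statement's location and falls off symmetrically on both sides. The hyperbolic cosine form below is one common parameterization; treat the exact formula as an illustrative assumption rather than the article's estimation machinery:

```python
import numpy as np

def unfold_agree(theta, delta, lam=1.0):
    # Single-peaked "agree" probability: maximal at theta == delta,
    # decreasing symmetrically as |theta - delta| grows. The
    # hyperbolic-cosine parameterization here is illustrative.
    return np.cosh(lam) / (np.cosh(lam) + np.cosh(theta - delta))

theta = np.linspace(-3.0, 3.0, 7)
p = unfold_agree(theta, delta=0.0)
```

Contrast this with a cumulative (Rasch-type) item, where the agree probability rises monotonically in theta; the unfolding shape is what makes the model suitable for attitude statements that can be rejected from either direction.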
Peer reviewed
Ferrando, Pere J. – Applied Psychological Measurement, 2004
This study used kernel-smoothing procedures to estimate the item characteristic functions (ICFs) of a set of continuous personality items. The nonparametric ICFs were compared with the ICFs estimated (a) by the linear model and (b) by Samejima's continuous-response model. The study was based on a conditioned approach and used an error-in-variables…
Descriptors: Personality Traits, Personality Assessment, Measurement Techniques, Evaluation Methods
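Kernel smoothing of an item characteristic function amounts to ordinary kernel (Nadaraya-Watson) regression of the observed item score on the trait estimate. A minimal sketch on simulated data (the linear generating model and the bandwidth are illustrative choices, not the study's design):

```python
import numpy as np

rng = np.random.default_rng(0)
theta = rng.normal(size=200)                            # trait estimates
item = 0.7 * theta + rng.normal(scale=0.5, size=200)    # continuous item scores

def kernel_icf(theta_hat, scores, grid, bandwidth=0.4):
    # Nadaraya-Watson regression: at each grid point, a Gaussian-
    # weighted average of the observed item scores nearby
    w = np.exp(-0.5 * ((grid[:, None] - theta_hat[None, :]) / bandwidth) ** 2)
    return (w * scores[None, :]).sum(axis=1) / w.sum(axis=1)

grid = np.linspace(-2.0, 2.0, 9)
icf = kernel_icf(theta, item, grid)
```

Because the estimator imposes no functional form, the resulting curve can be compared against parametric ICFs (linear, or Samejima's continuous-response model) to see where each parametric shape fits or fails.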
Peer reviewed
May, Kim – Applied Psychological Measurement, 1993
This book uses concepts familiar to those with a working knowledge of basic statistics and classical test theory to present item response theory models that are currently in wide use in both practical testing and research. The book would serve as a good entry-level graduate text in psychometrics and measurement methods. (SLD)
Descriptors: Educational Research, Graduate Study, Item Response Theory, Measurement Techniques
Peer reviewed
Chang, Lei – Applied Psychological Measurement, 1994
Reliability and validity of 4-point and 6-point scales were assessed using a new model-based approach to fit empirical data from 165 graduate students completing an attitude measure. Results suggest that the issue of four- versus six-point scales may depend on the empirical setting. (SLD)
Descriptors: Attitude Measures, Goodness of Fit, Graduate Students, Graduate Study
Peer reviewed
Bejar, Isaac I.; Yocom, Peter – Applied Psychological Measurement, 1991
An approach to test modeling is illustrated that encompasses both response consistency and response difficulty. This generative approach makes validation an ongoing process. An analysis of hidden figure items with 60 high school students supports the feasibility of the method. (SLD)
Descriptors: Construct Validity, Difficulty Level, Evaluation Methods, High School Students
Peer reviewed
Gorin, Joanna S.; Embretson, Susan E. – Applied Psychological Measurement, 2006
Recent assessment research joining cognitive psychology and psychometric theory has introduced a new technology, item generation. In algorithmic item generation, items are systematically created based on specific combinations of features that underlie the processing required to correctly solve a problem. Reading comprehension items have been more…
Descriptors: Difficulty Level, Test Items, Modeling (Psychology), Paragraph Composition