DeCarlo, Lawrence T. – Applied Psychological Measurement, 2012
In the typical application of a cognitive diagnosis model, the Q-matrix, which reflects the theory with respect to the skills indicated by the items, is assumed to be known. However, the Q-matrix is usually determined by expert judgment, and so there can be uncertainty about some of its elements. Here it is shown that this uncertainty can be…
Descriptors: Bayesian Statistics, Item Response Theory, Simulation, Models
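As a toy illustration of the Q-matrix the abstract refers to (not taken from the article): it is a binary items-by-skills matrix whose entry q[j, k] is 1 when item j is judged to require skill k. The values below are hypothetical.

```python
import numpy as np

# Hypothetical Q-matrix for 4 items and 2 skills.
# Row j lists the skills item j is assumed to require;
# expert judgment fixes these entries, which is where the
# uncertainty discussed in the abstract enters.
Q = np.array([
    [1, 0],  # item 1 requires skill 1 only
    [0, 1],  # item 2 requires skill 2 only
    [1, 1],  # item 3 requires both skills
    [1, 0],  # item 4 requires skill 1 only
])
```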
Smits, Iris A. M.; Timmerman, Marieke E.; Meijer, Rob R. – Applied Psychological Measurement, 2012
The assessment of the number of dimensions and the dimensionality structure of questionnaire data is important in scale evaluation. In this study, the authors evaluate two dimensionality assessment procedures in the context of Mokken scale analysis (MSA), using a so-called fixed lower bound. The comparative simulation study, covering various…
Descriptors: Simulation, Measures (Individuals), Program Effectiveness, Item Response Theory
van der Ark, L. Andries; van der Palm, Daniel W.; Sijtsma, Klaas – Applied Psychological Measurement, 2011
This study presents a general framework for single-administration reliability methods, such as Cronbach's alpha, Guttman's lambda-2, and method MS. This general framework was used to derive a new approach to estimating test-score reliability by means of the unrestricted latent class model. This new approach is the latent class reliability…
Descriptors: Simulation, Reliability, Measurement, Psychology
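Cronbach's alpha, one of the single-administration reliability methods this framework covers, can be computed directly from an items-by-respondents score matrix. A minimal sketch (the function name and example data are mine, not the article's):

```python
import numpy as np

def cronbach_alpha(scores):
    """Cronbach's alpha: k/(k-1) * (1 - sum of item variances / total-score variance)."""
    scores = np.asarray(scores, dtype=float)  # shape: (n_respondents, n_items)
    k = scores.shape[1]
    item_vars = scores.var(axis=0, ddof=1)        # variance of each item
    total_var = scores.sum(axis=1).var(ddof=1)    # variance of the sum score
    return k / (k - 1) * (1 - item_vars.sum() / total_var)
```

Identical items yield alpha = 1, and uncorrelated items drive it toward 0, which is the usual sanity check for an implementation like this.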
Bryant, Damon; Davis, Larry – Applied Psychological Measurement, 2011
This brief technical note describes how to construct item vector plots for dichotomously scored items fitting the multidimensional three-parameter logistic model (M3PLM). As multidimensional item response theory (MIRT) shows promise of being a very useful framework in the test development life cycle, graphical tools that facilitate understanding…
Descriptors: Visual Aids, Item Response Theory, Evaluation Methods, Test Preparation
Ferrando, Pere J. – Applied Psychological Measurement, 2011
Models for measuring individual response precision have been proposed for binary and graded responses. However, more continuous formats are quite common in personality measurement and are usually analyzed with the linear factor analysis model. This study extends the general Gaussian person-fluctuation model to the continuous-response case and…
Descriptors: Factor Analysis, Models, Individual Differences, Responses
Gu, Fei; Skorupski, William P.; Hoyle, Larry; Kingston, Neal M. – Applied Psychological Measurement, 2011
Ramsay-curve item response theory (RC-IRT) is a nonparametric procedure that estimates the latent trait using splines, and no distributional assumption about the latent trait is required. For item parameters of the two-parameter logistic (2-PL), three-parameter logistic (3-PL), and polytomous IRT models, RC-IRT can provide more accurate estimates…
Descriptors: Intervals, Item Response Theory, Models, Evaluation Methods
Woods, Carol M.; Grimm, Kevin J. – Applied Psychological Measurement, 2011
In extant literature, multiple indicator multiple cause (MIMIC) models have been presented for identifying items that display uniform differential item functioning (DIF) only, not nonuniform DIF. This article addresses, for apparently the first time, the use of MIMIC models for testing both uniform and nonuniform DIF with categorical indicators. A…
Descriptors: Test Bias, Testing, Interaction, Item Response Theory
Attali, Yigal – Applied Psychological Measurement, 2011
Recently, Attali and Powers investigated the usefulness of providing immediate feedback on the correctness of answers to constructed response questions and the opportunity to revise incorrect answers. This article introduces an item response theory (IRT) model for scoring revised responses to questions when several attempts are allowed. The model…
Descriptors: Feedback (Response), Item Response Theory, Models, Error Correction
Kreiner, Svend – Applied Psychological Measurement, 2011
To rule out the need for a two-parameter item response theory (IRT) model during item analysis by Rasch models, it is important to check the Rasch model's assumption that all items have the same item discrimination. Biserial and polyserial correlation coefficients measuring the association between items and restscores are often used in an informal…
Descriptors: Item Analysis, Correlation, Item Response Theory, Models
Hennig, Christian; Mullensiefen, Daniel; Bargmann, Jens – Applied Psychological Measurement, 2010
The authors propose a method to compare the influence of a treatment on different properties within subjects. The properties are measured by several Likert-type-scaled items. The results show that many existing approaches, such as repeated measurement analysis of variance on sum and mean scores, a linear partial credit model, and a graded response…
Descriptors: Simulation, Pretests Posttests, Regression (Statistics), Comparative Analysis
Rausch, Joseph R. – Applied Psychological Measurement, 2009
The investigation of change in factor structure over time can provide new opportunities for the development of theory in psychology. The method proposed to investigate change in intraindividual factor structure over time is an extension of P-technique factor analysis, in which the P-technique factor model is fit within relatively small windows of…
Descriptors: Monte Carlo Methods, Factor Structure, Factor Analysis, Item Response Theory
Warrens, Matthijs J.; de Gruijter, Dato N. M.; Heiser, Willem J. – Applied Psychological Measurement, 2007
In this article, the relationship between two alternative methods for the analysis of multivariate categorical data is systematically explored. It is shown that the person score of the first dimension of classical optimal scaling correlates strongly with the latent variable for the two-parameter item response theory (IRT) model. Next, under the…
Descriptors: Scaling, Evaluation Methods, Item Response Theory, Comparative Analysis
Roberts, James S. – Applied Psychological Measurement, 2008
Orlando and Thissen (2000) developed an item fit statistic for binary item response theory (IRT) models known as S-X². This article generalizes their statistic to polytomous unfolding models. Four alternative formulations of S-X² are developed for the generalized graded unfolding model (GGUM). The GGUM is a…
Descriptors: Item Response Theory, Goodness of Fit, Test Items, Models
Zhang, Bo; Walker, Cindy M. – Applied Psychological Measurement, 2008
The purpose of this research was to examine the effects of missing data on person-model fit and person trait estimation in tests with dichotomous items. Under the missing-completely-at-random framework, four missing data treatment techniques were investigated, including pairwise deletion, coding missing responses as incorrect, hot-deck imputation,…
Descriptors: Item Response Theory, Computation, Goodness of Fit, Test Items
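Two of the missing data treatments named in the abstract are simple enough to sketch. Assuming dichotomous 0/1 item scores with NaN marking an omitted response (the function names and data are illustrative, not the article's):

```python
import numpy as np

def score_missing_as_incorrect(responses):
    # One treatment compared in the article: mark every omission wrong (0).
    # responses: (n_examinees, n_items) array of 0/1 scores, np.nan = missing.
    return np.nan_to_num(np.asarray(responses, dtype=float), nan=0.0)

def proportion_correct_pairwise(responses):
    # Pairwise-deletion analogue for a crude trait proxy:
    # average only over the items an examinee actually answered.
    return np.nanmean(np.asarray(responses, dtype=float), axis=1)
```

The two treatments can disagree sharply for examinees with many omissions, which is exactly the kind of effect on trait estimation the study examines.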
Henson, Robert; Roussos, Louis; Douglas, Jeff; He, Xuming – Applied Psychological Measurement, 2008
Cognitive diagnostic models (CDMs) model the probability of correctly answering an item as a function of an examinee's attribute mastery pattern. Because estimation of the mastery pattern involves more than a continuous measure of ability, reliability concepts introduced by classical test theory and item response theory do not apply. The cognitive…
Descriptors: Diagnostic Tests, Classification, Probability, Item Response Theory