Showing all 10 results
Peer reviewed
Wolfe, John H. – Psychometrika, 1981
In tailored testing, it is important to determine the optimal difficulty of the next item to be presented to the examinee. It is shown that the ability-difficulty difference that maximizes information is greater for the three-parameter normal ogive response model than for the three-parameter logistic model. (Author/JKS)
Descriptors: Item Analysis, Latent Trait Theory, Measurement Techniques
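The comparison turns on the item information function. A minimal numerical sketch, assuming the standard three-parameter logistic form with scaling constant D = 1.7 and illustrative parameter values (not taken from the paper), locates the information-maximizing difficulty for a fixed ability:

    import numpy as np

    def p_3pl(theta, a, b, c, D=1.7):
        # Three-parameter logistic probability of a correct response.
        return c + (1.0 - c) / (1.0 + np.exp(-D * a * (theta - b)))

    def info_3pl(theta, a, b, c, D=1.7):
        # Fisher information of a 3PL item at ability theta.
        p = p_3pl(theta, a, b, c, D)
        return (D * a) ** 2 * (1.0 - p) * (p - c) ** 2 / ((1.0 - c) ** 2 * p)

    # Scan candidate difficulties and keep the one most informative at theta.
    theta, a, c = 0.0, 1.0, 0.2
    bs = np.linspace(-3, 3, 601)
    b_opt = bs[np.argmax([info_3pl(theta, a, b, c) for b in bs])]
    print("optimal difficulty minus ability:", round(b_opt - theta, 3))

With guessing (c > 0) the most informative item is somewhat easier than the examinee's ability; the size of that offset is what differs between the logistic and normal ogive versions of the model.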
Peer reviewed
Nishisato, Shizuhiko; Sheu, Wen-Jenn – Psychometrika, 1980
A modification of the method of reciprocal averages for scaling multiple choice data is proposed. The proposed method handles the data in a piecewise fashion and allows for faster convergence to a solution. (Author/JKS)
Descriptors: Item Analysis, Measurement Techniques, Multiple Choice Tests, Test Reliability
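For context, the unmodified method of reciprocal averages alternates between averaging option weights into subject scores and subject scores into option weights until the weights stabilize. A minimal generic sketch (not the authors' piecewise modification), with an illustrative indicator matrix:

    import numpy as np

    # Z[i, j] = 1 if subject i chose option j (illustrative 0/1 indicator matrix).
    Z = np.array([[1, 0, 0, 1, 0],
                  [0, 1, 0, 1, 0],
                  [0, 1, 0, 0, 1],
                  [1, 0, 0, 0, 1],
                  [0, 0, 1, 1, 0]], dtype=float)
    row_tot = Z.sum(axis=1)      # options chosen per subject
    col_tot = Z.sum(axis=0)      # subjects choosing each option

    y = np.arange(Z.shape[1], dtype=float)       # arbitrary starting option weights
    for _ in range(200):
        x = Z @ y / row_tot                      # subject scores: averages of option weights
        y = Z.T @ x / col_tot                    # option weights: averages of subject scores
        y -= np.average(y, weights=col_tot)      # remove the trivial constant solution
        y /= np.sqrt(np.average(y ** 2, weights=col_tot))  # re-standardize each cycle
    print("option weights:", np.round(y, 3))
    print("subject scores:", np.round(Z @ y / row_tot, 3))

Each pass is one step of a power iteration, which can converge slowly when the leading solutions are close in strength; the paper's piecewise treatment of the data is aimed at speeding that convergence.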
Peer reviewed
Gilmer, Jerry S.; Feldt, Leonard S. – Psychometrika, 1983
Estimating the reliability of measures derived from separate questions on essay tests or from individual judges on a rater panel is considered. Cronbach's alpha is shown to underestimate reliability in these cases. Some alternative coefficients are presented. (JKS)
Descriptors: Essay Tests, Item Analysis, Measurement Techniques, Rating Scales
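For reference, coefficient alpha computed from the part scores is the quantity the paper shows to be an underestimate in these cases. A minimal sketch with illustrative scores:

    import numpy as np

    def cronbach_alpha(scores):
        # scores: examinees x parts (e.g., essay questions or raters).
        k = scores.shape[1]
        item_vars = scores.var(axis=0, ddof=1).sum()
        total_var = scores.sum(axis=1).var(ddof=1)
        return k / (k - 1) * (1.0 - item_vars / total_var)

    # Illustrative ratings: 6 examinees scored on 3 essay questions.
    scores = np.array([[4, 5, 12],
                       [3, 4, 10],
                       [5, 5, 14],
                       [2, 3,  7],
                       [4, 4, 11],
                       [3, 5, 10]], dtype=float)
    print("alpha:", round(cronbach_alpha(scores), 3))

Alpha equals the reliability only when the parts are essentially tau-equivalent; essay questions or judges that carry unequal weight violate that assumption, which is the situation the alternative coefficients address.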
Peer reviewed
Rindskopf, David – Psychometrika, 1983
Various models have been proposed for analyzing dichotomous test or questionnaire items which were constructed to reflect an assumed underlying structure (e.g., hierarchical). This paper shows that many such models are special cases of latent class analysis and discusses a currently available computer program to analyze them. (Author/JKS)
Descriptors: Computer Programs, Item Analysis, Mathematical Models, Measurement Techniques
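To make the latent class framing concrete, the sketch below fits an unrestricted latent class model to dichotomous items by EM; it is a generic illustration, not the structured special cases or the program the paper discusses.

    import numpy as np

    def lca_em(X, n_classes, n_iter=500, seed=0):
        # Unrestricted latent class model for 0/1 items, fit by EM.
        # Returns class proportions and P(item = 1 | class).
        rng = np.random.default_rng(seed)
        n, m = X.shape
        pi = np.full(n_classes, 1.0 / n_classes)
        p = rng.uniform(0.25, 0.75, size=(n_classes, m))
        for _ in range(n_iter):
            # E-step: posterior class membership for each respondent
            like = np.stack([np.prod(p[c] ** X * (1 - p[c]) ** (1 - X), axis=1)
                             for c in range(n_classes)], axis=1)
            post = like * pi
            post /= post.sum(axis=1, keepdims=True)
            # M-step: update proportions and conditional item probabilities
            pi = post.mean(axis=0)
            p = np.clip((post.T @ X) / post.sum(axis=0)[:, None], 1e-6, 1 - 1e-6)
        return pi, p

    # Illustrative data: 8 respondents answering 4 dichotomous items.
    X = np.array([[1, 1, 1, 0], [1, 1, 0, 0], [0, 0, 0, 1], [0, 1, 0, 0],
                  [1, 1, 1, 1], [0, 0, 1, 1], [1, 0, 1, 0], [0, 0, 0, 0]])
    pi, p = lca_em(X, n_classes=2)
    print("class proportions:", np.round(pi, 3))
    print("item probabilities by class:", np.round(p, 3))

Structured models of the kind the paper treats are typically obtained as special cases by constraining the class-conditional probabilities rather than leaving them free, as here.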
Peer reviewed
Whitely, Susan E. – Psychometrika, 1980
A model is proposed for analyzing ability tests that combines a cognitive approach to understanding test items with latent trait analysis for quantifying subjects' responses to test items. The model adopts a multicomponent perspective on the processes involved in answering test items. (JKS)
Descriptors: Cognitive Measurement, Information Processing, Item Analysis, Latent Trait Theory
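A sketch of the multicomponent idea, under the common assumption that each underlying component must be executed successfully and each follows a Rasch-type model (parameter values are illustrative, not from the paper):

    import math

    def rasch(theta, b):
        # Rasch probability of succeeding on one component.
        return math.exp(theta - b) / (1.0 + math.exp(theta - b))

    def multicomponent_p(thetas, bs):
        # Probability of a correct item response when every underlying
        # component must succeed: the product of component probabilities.
        p = 1.0
        for theta, b in zip(thetas, bs):
            p *= rasch(theta, b)
        return p

    # Illustrative: two components with abilities (1.0, 0.5) and
    # component difficulties (0.2, -0.3) for one item.
    print(round(multicomponent_p([1.0, 0.5], [0.2, -0.3]), 3))

The item response probability is the product of the component probabilities, so failure on any one component depresses performance on the whole item.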
Peer reviewed
Bentler, P. M.; Woodward, Arthur J. – Psychometrika, 1980
A chain of lower bound inequalities leading to the greatest lower bound to reliability is established for the internal consistency of a composite of unit-weighted scores (such as a test). Algorithms for obtaining various reliability coefficients are presented. (Author/JKS)
Descriptors: Factor Analysis, Item Analysis, Measurement Techniques, Test Construction
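Two early links in such a chain, Guttman's lambda3 (coefficient alpha) and lambda2, can be read directly off the item covariance matrix. A minimal sketch with an illustrative matrix; the greatest lower bound itself requires an iterative optimization and is not computed here:

    import numpy as np

    def lower_bounds(cov):
        # Guttman's lambda3 (= coefficient alpha) and lambda2, two early
        # links in a chain of lower bounds to composite reliability.
        cov = np.asarray(cov, dtype=float)
        k = cov.shape[0]
        total = cov.sum()
        off = cov.sum() - np.trace(cov)                     # sum of off-diagonal covariances
        off_sq = (cov ** 2).sum() - (np.diag(cov) ** 2).sum()
        lambda3 = (k / (k - 1)) * off / total               # coefficient alpha
        lambda2 = (off + np.sqrt(k / (k - 1) * off_sq)) / total
        return lambda3, lambda2

    # Illustrative 3-item covariance matrix.
    cov = [[1.0, 0.5, 0.4],
           [0.5, 1.2, 0.6],
           [0.4, 0.6, 0.9]]
    a, l2 = lower_bounds(cov)
    print("alpha (lambda3):", round(a, 3), " lambda2:", round(l2, 3))

For any item covariance matrix, lambda2 is at least as large as alpha, and the greatest lower bound is at least as large as lambda2, which is the sense in which the chain tightens toward the best bound obtainable from the observed covariances.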
Peer reviewed
Masters, Geoff N. – Psychometrika, 1982
An extension of the Rasch model for partial credit scoring of test items is presented. An unconditional maximum likelihood procedure for estimating the model parameters is developed. The relationships of this model to Andrich's Rating Scale model and to Samejima's Graded Response model are discussed. (Author/JKS)
Descriptors: Item Analysis, Latent Trait Theory, Maximum Likelihood Statistics, Measurement Techniques
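A minimal sketch of the partial credit category probabilities, assuming the usual parameterization with step difficulties delta_1..delta_m (values are illustrative):

    import math

    def partial_credit_probs(theta, deltas):
        # Category probabilities for one partial-credit item.
        # deltas: step difficulties delta_1..delta_m; categories are 0..m.
        cum = [0.0]                                  # empty sum for category 0
        for d in deltas:
            cum.append(cum[-1] + (theta - d))        # cumulative sum of (theta - delta_j)
        exps = [math.exp(c) for c in cum]
        total = sum(exps)
        return [e / total for e in exps]

    # Illustrative: ability 0.5, three scoring steps.
    print([round(p, 3) for p in partial_credit_probs(0.5, [-1.0, 0.0, 1.2])])

With a single step (m = 1) this reduces to the dichotomous Rasch model, which is the sense in which the model is an extension of it.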
Peer reviewed
Austin, Joe Dan – Psychometrika, 1981
On distractor-identification tests students mark as many distractors as possible on each test item. A grading scale is developed for this type of testing. The score is optimal in that it yields an unbiased estimate of the score the student would have earned had no guessing occurred. (Author/JKS)
Descriptors: Guessing (Tests), Item Analysis, Measurement Techniques, Scoring Formulas
Peer reviewed
Wilcox, Rand R. – Psychometrika, 1983
A procedure is presented for determining the reliability with which an examinee knows k out of n possible multiple choice items, given his or her performance on those items. A scoring procedure for determining which items an examinee knows is also presented. (Author/JKS)
Descriptors: Item Analysis, Latent Trait Theory, Measurement Techniques, Multiple Choice Tests
Peer reviewed
Stegelmann, Werner – Psychometrika, 1983
The Rasch model is generalized to a multicomponent model, so that observations of component events are not needed to apply the model. It is shown that the generalized model maintains the property of specific objectivity of the Rasch model. An application to a mathematics test is provided. (Author/JKS)
Descriptors: Estimation (Mathematics), Item Analysis, Latent Trait Theory, Mathematical Models