Peer reviewed
Cudeck, Robert – Journal of Educational Measurement, 1980
Methods for evaluating the consistency of responses to test items were compared. When a researcher is unwilling to make the assumptions of classical test theory, has only a small number of items, or is in a tailored testing context, Cliff's dominance indices may be useful. (Author/CTM)
Descriptors: Error Patterns, Item Analysis, Test Items, Test Reliability
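As a rough illustration of the kind of pairwise response-consistency count that ordinal, Cliff-style methods rely on (a sketch only; the exact dominance indices are defined in Cliff's own work, and the function and variable names here are invented for illustration), the snippet below tallies, for each pair of dichotomous items, the proportion of examinees who respond in the Guttman-consistent direction, i.e. who pass the easier item whenever they pass the harder one.

```python
import numpy as np

def pairwise_consistency(responses):
    """Rough Guttman-style consistency proportions for dichotomous items.

    responses: (n_examinees, n_items) array of 0/1 scores.
    Returns a dict mapping (easier_item, harder_item) pairs to the proportion
    of examinees whose responses are ordinally consistent (no one fails the
    easier item while passing the harder one).
    """
    responses = np.asarray(responses)
    n_examinees, n_items = responses.shape
    # Classical item difficulty as proportion correct; higher value = easier item.
    p = responses.mean(axis=0)
    results = {}
    for i in range(n_items):
        for j in range(n_items):
            if p[i] > p[j]:  # item i is easier than item j
                # Inconsistent pattern: fail the easy item but pass the hard one.
                inconsistent = np.sum((responses[:, i] == 0) & (responses[:, j] == 1))
                results[(i, j)] = 1.0 - inconsistent / n_examinees
    return results

if __name__ == "__main__":
    rng = np.random.default_rng(0)
    data = (rng.random((200, 5)) < np.linspace(0.8, 0.3, 5)).astype(int)
    for pair, c in sorted(pairwise_consistency(data).items()):
        print(f"items {pair}: consistency {c:.2f}")
```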
Jones, Douglas H. – 1985
The progress of modern mental test theory depends very much on the techniques of maximum likelihood estimation, and many popular applications make use of likelihoods induced by logistic item response models. While, in reality, item responses are nonreplicated within a single examinee and the logistic models are only ideal, practitioners make…
Descriptors: Error Patterns, Functions (Mathematics), Goodness of Fit, Item Analysis
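To make the likelihood machinery concrete, here is a minimal sketch (in Python, with hypothetical data and item difficulties assumed known) of the log-likelihood induced by a one-parameter logistic (Rasch) item response model and a maximum likelihood estimate of a single examinee's ability obtained by numerically maximizing it.

```python
import numpy as np
from scipy.optimize import minimize_scalar

def rasch_prob(theta, b):
    """P(correct response) under the one-parameter logistic (Rasch) model."""
    return 1.0 / (1.0 + np.exp(-(theta - b)))

def neg_log_likelihood(theta, responses, difficulties):
    """Negative log-likelihood of one examinee's 0/1 responses given ability theta."""
    p = rasch_prob(theta, difficulties)
    return -np.sum(responses * np.log(p) + (1 - responses) * np.log(1 - p))

# Hypothetical item difficulties and one examinee's response vector.
difficulties = np.array([-1.0, -0.5, 0.0, 0.5, 1.0])
responses = np.array([1, 1, 1, 0, 0])

# Maximum likelihood ability estimate (a bounded search keeps the estimate finite).
result = minimize_scalar(neg_log_likelihood, bounds=(-4, 4), method="bounded",
                         args=(responses, difficulties))
print(f"ML ability estimate: {result.x:.3f}")
```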
Peer reviewed
O'Brien, Michael L. – Studies in Educational Evaluation, 1986
A test score can be used for individual instructional diagnosis after determining whether: (1) difficulty of the test items was consistent with the complexity of the content measured; (2) items measuring the same underlying process were about equally difficult; and (3) partial credit scoring would increase the reliability of the diagnosis. (LMO)
Descriptors: Behavioral Objectives, Difficulty Level, Educational Diagnosis, Error Patterns
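The first two of those checks can be carried out directly from classical item statistics; the sketch below (Python, with a hypothetical grouping of items by underlying process and an arbitrary tolerance) computes item difficulties as proportions correct and flags groups of items that are supposed to measure the same process but differ noticeably in difficulty.

```python
import numpy as np

def flag_uneven_groups(responses, item_groups, max_spread=0.15):
    """Check whether items measuring the same process are about equally difficult.

    responses: (n_examinees, n_items) array of 0/1 scores.
    item_groups: dict mapping a process label to a list of item indices.
    max_spread: largest acceptable difference in proportion correct within a group
                (threshold chosen arbitrarily for illustration).
    """
    p = np.asarray(responses).mean(axis=0)  # classical difficulty: proportion correct
    flags = {}
    for process, items in item_groups.items():
        spread = p[items].max() - p[items].min()
        flags[process] = spread > max_spread
    return p, flags

if __name__ == "__main__":
    rng = np.random.default_rng(1)
    data = (rng.random((150, 6)) < [0.8, 0.78, 0.75, 0.5, 0.45, 0.2]).astype(int)
    groups = {"recall": [0, 1, 2], "application": [3, 4, 5]}
    difficulties, flags = flag_uneven_groups(data, groups)
    print("proportions correct:", np.round(difficulties, 2))
    print("groups flagged as unevenly difficult:", flags)
```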
Webb, Noreen; Herman, Joan – 1984
This paper describes the development of a language arts test to assess the consistency of student response patterns and the feasibility of using the test to diagnose students' misconceptions. The studies were part of a project to develop computerized adaptive testing for the language arts with software to diagnose student errors. The…
Descriptors: Adaptive Testing, Computer Assisted Testing, Diagnostic Tests, Error Patterns