Showing all 7 results
Peer reviewed
Pommerich, Mary; Segall, Daniel O. – Journal of Educational Measurement, 2008
The accuracy of CAT scores can be negatively affected by local dependence if the CAT uses item parameters that are misspecified because of local dependence, or if it fails to control for local dependence in responses during the administration stage. This article evaluates the existence and effect of local dependence in a test of…
Descriptors: Simulation, Computer Assisted Testing, Mathematics Tests, Scores
Peer reviewed
Birenbaum, Menucha; Tatsuoka, Kikumi K. – Journal of Educational Measurement, 1987
The present study examined the effect of three modes of feedback on the seriousness of error types committed on a post-test. The effect of feedback on post-test errors was found to be differential, depending on the seriousness of the errors committed on the pre-test. (Author/LMO)
Descriptors: Computer Assisted Testing, Error Patterns, Feedback, Junior High Schools
Peer reviewed
Stocking, Martha L.; Jirele, Thomas; Lewis, Charles; Swanson, Len – Journal of Educational Measurement, 1998
Constructed a pool of items from operational mathematics tests to investigate the feasibility of using automated-test-assembly (ATA) methods to simultaneously moderate possibly irrelevant performance differences between women and men and between African-American and White test takers. Discusses the usefulness of ATA. (SLD)
Descriptors: Automation, Computer Assisted Testing, Item Banks, Mathematics Tests
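The Stocking et al. entry describes assembling tests automatically so that possibly irrelevant subgroup differences are moderated, but the snippet does not spell out the assembly procedure. As a rough, hypothetical sketch of the general ATA idea only (the Item fields, blueprint format, and greedy rule below are illustrative assumptions, not the authors' method), a pool can be reduced to a fixed-length test that satisfies a content blueprint while keeping the selected items' average subgroup difficulty gap small:

```python
# Hypothetical sketch of automated test assembly (ATA) as constrained greedy
# selection; not the procedure used by Stocking et al. (1998).
from dataclasses import dataclass

@dataclass
class Item:
    item_id: str
    content_area: str     # e.g., "algebra", "geometry"
    difficulty: float     # overall difficulty estimate
    subgroup_gap: float   # estimated difficulty difference between two subgroups

def assemble(pool, length, blueprint):
    """Greedily pick `length` items that fit a content blueprint while keeping
    the mean absolute subgroup difficulty gap of the selected set small."""
    selected = []
    counts = {area: 0 for area in blueprint}
    for _ in range(length):
        # Items whose content area still has unfilled blueprint slots.
        eligible = [it for it in pool
                    if it not in selected
                    and counts[it.content_area] < blueprint[it.content_area]]
        if not eligible:
            break
        # Choose the item that keeps the running mean absolute gap smallest.
        def score(it):
            gaps = [abs(s.subgroup_gap) for s in selected] + [abs(it.subgroup_gap)]
            return sum(gaps) / len(gaps)
        best = min(eligible, key=score)
        selected.append(best)
        counts[best.content_area] += 1
    return selected

pool = [
    Item("A1", "algebra", -0.2, 0.05),
    Item("A2", "algebra", 0.4, 0.30),
    Item("G1", "geometry", 0.1, -0.10),
    Item("G2", "geometry", 0.8, 0.25),
]
test = assemble(pool, length=2, blueprint={"algebra": 1, "geometry": 1})
print([it.item_id for it in test])  # e.g., ['A1', 'G1']
```

A production ATA system would typically pose this as a mixed-integer program with many more constraints (statistical targets, item exposure, enemy items); the greedy loop here is only meant to make the objective concrete.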
Peer reviewed
Luecht, Richard M. – Journal of Educational Measurement, 1998
Comments on the application of a proposed automated-test-assembly (ATA) method to the problem of reducing potential performance differentials among population subgroups, and points out some pitfalls. Presents a rejoinder by M. Stocking and others. (SLD)
Descriptors: Automation, Computer Assisted Testing, Item Banks, Mathematics Tests
Peer reviewed
Tatsuoka, Kikumi K. – Journal of Educational Measurement, 1987
This study examined whether the item response curves from a two-parameter model reflected characteristics of the mathematics items, each of which required unique cognitive tasks. A computer program performed error analysis of test performance. Cognitive subtasks appeared to influence the slopes and difficulties of item response curves. (GDC)
Descriptors: Cognitive Processes, Computer Assisted Testing, Error Patterns, Item Analysis
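For context on the Tatsuoka entry, the two-parameter logistic model it references writes each item response curve in terms of a slope (discrimination) parameter and a difficulty parameter. The conventional form is shown below, though the study's exact parameterization (for instance, whether the scaling constant D is included) is not stated in the snippet:

P_i(\theta) = \frac{1}{1 + \exp\!\left[-D\, a_i(\theta - b_i)\right]}, \qquad D \approx 1.7,

where a_i controls the slope of the curve at \theta = b_i and b_i is the ability level at which P_i(\theta) = 0.5. The abstract's finding is that the cognitive subtasks embedded in the items appeared to influence these slope and difficulty estimates.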
Peer reviewed
Bridgeman, Brent – Journal of Educational Measurement, 1992
Examinees in a regular administration of the quantitative portion of the Graduate Record Examination responded to particular items in a machine-scannable multiple-choice format. Volunteers (n=364) used a computer to answer open-ended counterparts of these items. Scores for both formats demonstrated similar correlational patterns. (SLD)
Descriptors: Answer Sheets, College Entrance Examinations, College Students, Comparative Testing
Peer reviewed
Wise, Steven L.; And Others – Journal of Educational Measurement, 1992
Performance of 156 undergraduate and 48 graduate students on a self-adapted test (SFAT)--students choose the difficulty level of their test items--was compared with performance on a computer-adapted test (CAT). Those taking the SFAT obtained higher ability scores and reported lower posttest state anxiety than did CAT takers. (SLD)
Descriptors: Adaptive Testing, Comparative Testing, Computer Assisted Testing, Difficulty Level
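The Wise et al. entry contrasts two item-selection regimes: a computer-adapted test chooses each item by a statistical rule, while a self-adapted test lets the examinee choose the difficulty of the next item. As a minimal, hypothetical illustration (assuming a 2PL item bank and maximum-information selection for the CAT; none of the names or numbers below come from the study):

```python
import math

# Illustrative 2PL item bank: (discrimination a, difficulty b). Hypothetical values.
BANK = [(1.2, -1.0), (0.8, -0.3), (1.5, 0.0), (1.0, 0.7), (1.3, 1.5)]

def p_correct(theta, a, b):
    """2PL probability of a correct response at ability theta."""
    return 1.0 / (1.0 + math.exp(-a * (theta - b)))

def fisher_information(theta, a, b):
    p = p_correct(theta, a, b)
    return a * a * p * (1.0 - p)

def cat_next_item(theta_hat, administered):
    """CAT rule: most informative unused item at the current ability estimate."""
    candidates = [i for i in range(len(BANK)) if i not in administered]
    return max(candidates, key=lambda i: fisher_information(theta_hat, *BANK[i]))

def self_adapted_next_item(chosen_difficulty, administered):
    """Self-adapted rule: unused item whose difficulty is closest to the
    difficulty level the examinee chose."""
    candidates = [i for i in range(len(BANK)) if i not in administered]
    return min(candidates, key=lambda i: abs(BANK[i][1] - chosen_difficulty))

print(cat_next_item(theta_hat=0.2, administered=set()))                     # information-driven
print(self_adapted_next_item(chosen_difficulty=-1.0, administered=set()))   # examinee-driven
```

Either rule would then feed the response into an ability update; the point of the sketch is only that the CAT choice is driven by the current ability estimate, whereas the self-adapted choice is driven by the examinee.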