Showing all 7 results
Peer reviewed
Fitzpatrick, Anne R. – Educational Measurement: Issues and Practice, 2008
Examined in this study were the effects of reducing anchor test length on student proficiency rates for 12 multiple-choice tests administered in an annual, large-scale, high-stakes assessment. The anchor tests contained 15, 10, or 5 items. Five content-representative samples of items were drawn at each anchor test length from a…
Descriptors: Test Length, Multiple Choice Tests, Item Sampling, Student Evaluation
Shoemaker, David M. – 1970
A norm distribution consisting of test scores received by 810 college students on a 150-item, dichotomously scored, four-alternative multiple-choice test was empirically estimated through several item-examinee sampling procedures. The post-mortem item-sampling investigation was specifically designed to manipulate systematically the variables of…
Descriptors: Item Sampling, Multiple Choice Tests, National Norms, Norms
Peer reviewed
Taylor, Annette Kujawski – College Student Journal, 2005
This research examined two elements of multiple-choice test construction: key balancing and the optimal number of options. In Experiment 1 the three conditions included a balanced key, overrepresentation of a and b responses, and overrepresentation of c and d responses. The results showed that error patterns were independent of the key, reflecting…
Descriptors: Comparative Analysis, Test Items, Multiple Choice Tests, Test Construction
Peer reviewed
Reilly, Richard R.; Jackson, Rex – Journal of Educational Measurement, 1973
The present study suggests that although the reliability of an academic aptitude test given under formula-score conditions can be increased substantially through empirical option weighting, much of the increase is due to the capitalization of the keying procedure on omitting tendencies that are reliable but not valid. (Author)
Descriptors: Aptitude Tests, Correlation, Factor Analysis, Item Sampling
Peer reviewed
Revuelta, Javier – Psychometrika, 2004
Two psychometric models are presented for evaluating the difficulty of the distractors in multiple-choice items. They are based on the criterion of rising distractor selection ratios, which facilitates interpretation of the subject and item parameters. Statistical inferential tools are developed in a Bayesian framework: modal a posteriori…
Descriptors: Multiple Choice Tests, Psychometrics, Models, Difficulty Level
Poggio, John P.; Glasnapp, Douglas R. – 1973
The present research investigated whether item sampling would yield a more accurate and stable index of student achievement during formative evaluation than indices arrived at by the traditional method of assessing pupil knowledge and understanding within the framework of multiple-choice testing for…
Descriptors: Achievement Tests, Course Content, Educational Objectives, Formative Evaluation
Hughes, Francis P. – 1979
In an examination designed by a medical specialty board, six tests were administered to groups of candidates for certification, to investigate the feasibility of using videotapes of doctor-patient interactions to assess candidates' ability to note signs of potential physical disorders during a patient's physical examination and history taking, and…
Descriptors: Decision Making Skills, Educational Assessment, Equated Scores, Graduate Medical Students