Showing all 7 results
Peer reviewed
Spray, Judith A.; Reckase, Mark D. – Journal of Educational and Behavioral Statistics, 1996
Two procedures for classifying examinees into categories, one based on the sequential probability ratio test (SPRT) and the other on sequential Bayes methodology, were compared to determine which required fewer items for classification. Results showed that the SPRT procedure required fewer items to achieve the same level of accuracy. (SLD)
Descriptors: Ability, Bayesian Statistics, Classification, Comparative Analysis
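A minimal sketch of the SPRT classification idea described in the abstract above, assuming a two-category (master/non-master) decision on dichotomous items under a simple binomial model; the success probabilities, error rates, and response string are illustrative assumptions, not values from the study.

```python
import math

def sprt_classify(responses, p_master=0.8, p_nonmaster=0.4, alpha=0.10, beta=0.10):
    """Wald's sequential probability ratio test for a master/non-master decision.

    responses   -- iterable of 0/1 item scores, consumed one at a time
    p_master    -- assumed probability of a correct answer for a master
    p_nonmaster -- assumed probability of a correct answer for a non-master
    alpha, beta -- tolerated error rates, which set the decision thresholds
    """
    upper = math.log((1 - beta) / alpha)   # classify as "master" at or above this
    lower = math.log(beta / (1 - alpha))   # classify as "non-master" at or below this
    llr = 0.0                              # running log-likelihood ratio
    n = 0
    for n, x in enumerate(responses, start=1):
        if x:
            llr += math.log(p_master / p_nonmaster)
        else:
            llr += math.log((1 - p_master) / (1 - p_nonmaster))
        if llr >= upper:
            return "master", n
        if llr <= lower:
            return "non-master", n
    return "undecided", n   # item pool exhausted before a decision

# Example: a mostly-correct response string is classified as a master after six items.
print(sprt_classify([1, 1, 0, 1, 1, 1, 1, 0, 1, 1]))
```

The appeal reported in the abstract follows from this structure: the test stops as soon as the accumulated evidence crosses either threshold, rather than after a fixed number of items.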
PDF pending restoration
Faggen, Jane; And Others – 1995
The objective of this study was to determine the degree to which recommendations for passing scores, calculated on the basis of a traditional standard-setting methodology, might be affected by the mode (paper versus computer-screen prints) in which test items were presented to standard-setting panelists. Results were based on the judgments of 31…
Descriptors: Computer Assisted Testing, Cutting Scores, Difficulty Level, Evaluators
Crehan, Kevin D.; Haladyna, Thomas M. – 1994
More attention is currently being paid to the distractors of a multiple-choice test item (Thissen, Steinberg, and Fitzpatrick, 1989). A systematic relationship exists between the keyed response and distractors in multiple-choice items (Levine and Drasgow, 1983). New scoring methods have been introduced, computer programs developed, and research…
Descriptors: Comparative Analysis, Computer Assisted Testing, Distractors (Tests), Models
Bergstrom, Betty A.; Gershon, Richard – 1992
The most useful method of item selection for making pass-fail decisions with a Computerized Adaptive Test (CAT) was studied. Medical technology students (n=86) took a computerized adaptive test in which items were targeted to the ability of the examinee. The adaptive algorithm that selected items and estimated person measures used the Rasch model and…
Descriptors: Adaptive Testing, Algorithms, Comparative Analysis, Computer Assisted Testing
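A rough sketch of the kind of Rasch-based adaptive logic the abstract above describes: each item is selected to match the current ability estimate, and the estimate is updated after every response. The item bank, starting ability, and Newton-Raphson update are illustrative assumptions, not the study's actual algorithm.

```python
import math

def rasch_p(theta, b):
    """Rasch model probability of a correct response at ability theta, item difficulty b."""
    return 1.0 / (1.0 + math.exp(-(theta - b)))

def next_item(theta, difficulties, used):
    """Pick the unused item whose difficulty is closest to the current ability estimate
    (the most informative item under the Rasch model)."""
    candidates = [i for i in range(len(difficulties)) if i not in used]
    return min(candidates, key=lambda i: abs(difficulties[i] - theta))

def update_theta(theta, administered, responses, difficulties, steps=10):
    """Maximum-likelihood ability update via a few Newton-Raphson iterations.
    (A sketch only; real implementations bound the estimate when all responses
    are correct or all incorrect.)"""
    for _ in range(steps):
        ps = [rasch_p(theta, difficulties[i]) for i in administered]
        grad = sum(x - p for x, p in zip(responses, ps))   # first derivative of log-likelihood
        info = sum(p * (1 - p) for p in ps)                # test information
        if info == 0:
            break
        theta += grad / info
    return theta

bank = [-2.0, -1.0, -0.5, 0.0, 0.5, 1.0, 2.0]   # illustrative item difficulties
theta, used = 0.0, set()
first = next_item(theta, bank, used)             # picks the item nearest theta = 0.0
```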
Peer reviewed
Bergstrom, Betty A.; Lunz, Mary E. – Evaluation and the Health Professions, 1992
For 645 medical technology students, the level of confidence in pass/fail decisions was greater when a computerized adaptive test implementing a 90 percent confidence stopping rule was used than with paper-and-pencil tests of comparable length. (SLD)
Descriptors: Adaptive Testing, Comparative Testing, Computer Assisted Testing, Confidence Testing
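The 90 percent confidence stopping rule mentioned above can be illustrated as follows: testing stops once the ability estimate is far enough from the pass point, relative to its standard error, that the pass point falls outside the estimate's confidence band. The pass point, critical value, and function name here are assumptions for illustration, not the study's implementation.

```python
def confident_decision(theta_hat, standard_error, pass_point, z=1.645):
    """Return 'pass' or 'fail' once theta_hat is at least z standard errors away
    from the pass point (roughly a 90 percent confidence rule); otherwise return
    None, meaning the adaptive test should continue administering items."""
    if theta_hat - pass_point >= z * standard_error:
        return "pass"
    if pass_point - theta_hat >= z * standard_error:
        return "fail"
    return None
```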
Peer reviewed
de Champlain, Andre F.; Winward, Marcia L.; Dillon, Gerard F.; de Champlain, Judy E. – Educational Measurement: Issues and Practice, 2004
The purpose of this article was to model United States Medical Licensing Examination (USMLE) Step 2 passing rates using the Cox Proportional Hazards Model, best known for its application in analyzing clinical trial data. The number of months it took to pass the computer-based Step 2 examination was treated as the dependent variable in the model.…
Descriptors: Data Analysis, Certification, School Location, Medical Schools
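As a sketch of the kind of analysis the abstract above describes, a Cox proportional hazards model could be fit with the open-source lifelines package; the data frame, column names, and covariates here are hypothetical stand-ins, not the USMLE variables used in the article.

```python
import pandas as pd
from lifelines import CoxPHFitter

# Hypothetical examinee-level data: months until the passing event,
# an event indicator (1 = passed during the observation window, 0 = censored),
# and two illustrative covariates.
df = pd.DataFrame({
    "months_to_pass": [3, 7, 12, 5, 9, 14, 2, 8],
    "passed":         [1, 1, 0,  1, 1, 0,  1, 1],
    "school_intl":    [0, 1, 1,  0, 0, 1,  0, 1],   # assumed covariate
    "first_attempt":  [1, 1, 0,  1, 0, 0,  1, 1],   # assumed covariate
})

cph = CoxPHFitter()
cph.fit(df, duration_col="months_to_pass", event_col="passed")
cph.print_summary()   # hazard ratios for the illustrative covariates
```

Treating time-to-pass as the duration and non-passers as censored observations is what lets the survival-analysis machinery, familiar from clinical trials, carry over to licensing data.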
Peer reviewed
Clauser, Brian E.; Ross, Linette P.; Clyman, Stephen G.; Rose, Kathie M.; Margolis, Melissa J.; Nungester, Ronald J.; Piemme, Thomas E.; Chang, Lucy; El-Bayoumi, Gigi; Malakoff, Gary L.; Pincetl, Pierre S. – Applied Measurement in Education, 1997
Describes an automated scoring algorithm for a computer-based simulation examination of physicians' patient-management skills. Results with 280 medical students show that scores produced using this algorithm are highly correlated with actual clinician ratings. Scores were also effective in discriminating between case performance judged passing or…
Descriptors: Algorithms, Computer Assisted Testing, Computer Simulation, Evaluators
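A toy illustration of the validation step reported in the abstract above: an automated score is computed from weighted simulation actions and then correlated with clinician ratings. The weighting scheme, action names, and data are invented for illustration and are not the authors' algorithm.

```python
from statistics import mean

def automated_score(actions, weights):
    """Sum the (assumed) regression-style weights of the actions an examinee ordered."""
    return sum(weights.get(a, 0.0) for a in actions)

def pearson_r(xs, ys):
    """Pearson correlation between automated scores and clinician ratings."""
    mx, my = mean(xs), mean(ys)
    cov = sum((x - mx) * (y - my) for x, y in zip(xs, ys))
    sx = sum((x - mx) ** 2 for x in xs) ** 0.5
    sy = sum((y - my) ** 2 for y in ys) ** 0.5
    return cov / (sx * sy)

weights = {"order_ecg": 2.0, "give_aspirin": 1.5, "order_mri": -0.5}   # hypothetical weights
cases = [["order_ecg", "give_aspirin"], ["order_mri"], ["order_ecg"]]  # hypothetical transcripts
ratings = [8.0, 3.0, 6.0]                                              # hypothetical clinician ratings

scores = [automated_score(c, weights) for c in cases]
print(pearson_r(scores, ratings))   # high correlation in this contrived example
```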