Showing all 15 results
John E. Richey – ProQuest LLC, 2021
Objective: To examine whether student use of AHIMA VLab™ in their academic programs impacts the pass/fail outcomes on their first-attempt national certification exams. This is a four-year longitudinal study, spanning 2017-2020. Methods: Data were extracted from two separate databases: the AHIMA association management system (AMS) known as Aptify…
Descriptors: Health Education, Certification, Tests, Computer Assisted Testing
Peer reviewed
Direct link
Johnstone, Sally M. – Change: The Magazine of Higher Learning, 2021
When colleges and universities quickly moved their classes online last March, many faculty members gave students the option of pass/fail (P/F) grading. Usually P/F implies a student must reach the minimum passing grade of D to be awarded a P. This brings up the question of what passing a course with a D means. How much of the course material did…
Descriptors: Pass Fail Grading, Student Evaluation, Competency Based Education, Undergraduate Students
Peer reviewed
PDF on ERIC
Casey, Kevin – Journal of Learning Analytics, 2017
Learning analytics offers insights into student behaviour and the potential to detect poor performers before they fail exams. If the activity is primarily online (for example computer programming), a wealth of low-level data can be made available that allows unprecedented accuracy in predicting which students will pass or fail. In this paper, we…
Descriptors: Keyboarding (Data Entry), Educational Research, Data Collection, Data Analysis
Peer reviewed
Spray, Judith A.; Reckase, Mark D. – Journal of Educational and Behavioral Statistics, 1996
Two procedures for classifying examinees into categories, one based on the sequential probability ratio test (SPRT) and the other on sequential Bayes methodology, were compared to determine which required fewer items for classification. Results showed that the SPRT procedure required fewer items to achieve the same accuracy level. (SLD)
Descriptors: Ability, Bayesian Statistics, Classification, Comparative Analysis
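The SPRT-based classification the abstract compares can be sketched generically. This is not the authors' implementation; it is Wald's standard sequential probability ratio test applied to a pass/fail cutoff, with illustrative item-success rates (`p_fail`, `p_pass`) and error rates that are assumptions, not values from the study:

```python
import math

def sprt_classify(responses, p_fail=0.6, p_pass=0.8, alpha=0.05, beta=0.05):
    """Wald's sequential probability ratio test for a pass/fail decision.

    responses: iterable of 0/1 item scores, consumed one at a time.
    p_fail / p_pass: hypothetical item-success rates for a just-failing
    and just-passing examinee (illustrative values, not from the study).
    Returns (decision, items_used).
    """
    upper = math.log((1 - beta) / alpha)   # cross this: classify as passing
    lower = math.log(beta / (1 - alpha))   # cross this: classify as failing
    llr, n = 0.0, 0
    for n, x in enumerate(responses, start=1):
        # log-likelihood-ratio contribution of one Bernoulli response
        llr += math.log(p_pass / p_fail) if x else math.log((1 - p_pass) / (1 - p_fail))
        if llr >= upper:
            return "pass", n
        if llr <= lower:
            return "fail", n
    return "undecided", n
```

The appeal the study documents is visible here: a clearly passing or failing examinee crosses a boundary after only a handful of items, so testing can stop early.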
PDF pending restoration
Faggen, Jane; And Others – 1995
The objective of this study was to determine the degree to which recommendations for passing scores, calculated on the basis of a traditional standard-setting methodology, might be affected by the mode (paper versus computer-screen prints) in which test items were presented to standard setting panelists. Results were based on the judgments of 31…
Descriptors: Computer Assisted Testing, Cutting Scores, Difficulty Level, Evaluators
Crehan, Kevin D.; Haladyna, Thomas M. – 1994
More attention is currently being paid to the distractors of a multiple-choice test item (Thissen, Steinberg, and Fitzpatrick, 1989). A systematic relationship exists between the keyed response and distractors in multiple-choice items (Levine and Drasgow, 1983). New scoring methods have been introduced, computer programs developed, and research…
Descriptors: Comparative Analysis, Computer Assisted Testing, Distractors (Tests), Models
Peer reviewed
Stone, Gregory Ethan; Lunz, Mary E. – Applied Measurement in Education, 1994
Effects of reviewing items and altering responses on examinee ability estimates, test precision, test information, decision confidence, and pass/fail status were studied for 376 examinees taking 2 certification tests. Test precision is only slightly affected by review, and average information loss can be recovered by addition of one item. (SLD)
Descriptors: Ability, Adaptive Testing, Certification, Change
Bergstrom, Betty A.; Gershon, Richard – 1992
The most useful method of item selection for making pass-fail decisions with a Computerized Adaptive Test (CAT) was studied. Medical technology students (n=86) took a computer adaptive test in which items were targeted to the ability of the examinee. The adaptive algorithm that selected items and estimated person measures used the Rasch model and…
Descriptors: Adaptive Testing, Algorithms, Comparative Analysis, Computer Assisted Testing
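The core of a Rasch-model CAT like the one studied is (a) picking the unadministered item whose difficulty best matches the current ability estimate and (b) re-estimating ability after each response. A minimal sketch under those standard assumptions (the item bank and Newton-Raphson update are generic, not the authors' algorithm):

```python
import math

def rasch_p(theta, b):
    """Rasch model: probability of a correct response at ability theta, difficulty b."""
    return 1.0 / (1.0 + math.exp(-(theta - b)))

def next_item(theta, bank, used):
    """Select the unused item whose difficulty is closest to theta
    (maximum Fisher information under the Rasch model)."""
    return min((i for i in range(len(bank)) if i not in used),
               key=lambda i: abs(bank[i] - theta))

def update_theta(theta, items_b, responses, steps=10):
    """Newton-Raphson maximum-likelihood ability estimate
    from the items administered so far."""
    for _ in range(steps):
        grad = sum(x - rasch_p(theta, b) for b, x in zip(items_b, responses))
        info = sum(p * (1 - p) for p in (rasch_p(theta, b) for b in items_b))
        if info == 0:
            break
        theta += grad / info
    return theta
```

Targeting items at the examinee's ability, as the abstract describes, maximizes information per item and so shortens the test needed for a confident pass/fail decision.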
Peer reviewed
Bergstrom, Betty A.; Lunz, Mary E. – Evaluation and the Health Professions, 1992
Confidence in pass/fail decisions for 645 medical technology students was greater when the computer adaptive test implemented a 90 percent confidence stopping rule than with paper-and-pencil tests of comparable length. (SLD)
Descriptors: Adaptive Testing, Comparative Testing, Computer Assisted Testing, Confidence Testing
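The 90 percent confidence stopping rule named in the abstract is conventionally implemented by comparing a confidence interval around the ability estimate to the cutoff score. A generic sketch (the z-value and interface are assumptions, not taken from the paper):

```python
def confident_decision(theta, se, cutoff, z=1.65):
    """90% confidence stopping rule for a CAT.

    Classify only when the interval theta +/- z*SE lies entirely
    on one side of the pass/fail cutoff; otherwise keep testing.
    """
    if theta - z * se > cutoff:
        return "pass"
    if theta + z * se < cutoff:
        return "fail"
    return None  # interval straddles the cutoff: administer another item
```

Because each well-targeted item shrinks the standard error, the CAT can often reach a confident classification with fewer items than a fixed-length paper-and-pencil form.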
Peer reviewed
Direct link
de Champlain, Andre F.; Winward, Marcia L.; Dillon, Gerard F.; de Champlain, Judy E. – Educational Measurement: Issues and Practice, 2004
The purpose of this article was to model United States Medical Licensing Examination (USMLE) Step 2 passing rates using the Cox Proportional Hazards Model, best known for its application in analyzing clinical trial data. The number of months it took to pass the computer-based Step 2 examination was treated as the dependent variable in the model.…
Descriptors: Data Analysis, Certification, School Location, Medical Schools
Lunz, Mary E.; And Others – 1991
This paper explores the effect of reviewing items and altering responses on the efficiency of computer adaptive tests (CATs) and the resultant ability measures of examinees. Subjects included 712 medical students: 220 subjects were randomly assigned to the review condition; 492 were randomly assigned to a review control condition (the usual CAT…
Descriptors: Academic Ability, Adaptive Testing, Certification, Comparative Testing
Singh, Amrik; Singha, H. S. – 1977
This group of readings describes the university examination system in India, and suggests improvements which can be implemented within the current framework of the system. The papers are grouped into six sections: (1) descriptions of the evolution of the undergraduate examinations in India and their present status; (2) discussions of the…
Descriptors: Computer Assisted Testing, Data Analysis, Foreign Countries, Higher Education
Peer reviewed
Clauser, Brian E.; Ross, Linette P.; Clyman, Stephen G.; Rose, Kathie M.; Margolis, Melissa J.; Nungester, Ronald J.; Piemme, Thomas E.; Chang, Lucy; El-Bayoumi, Gigi; Malakoff, Gary L.; Pincetl, Pierre S. – Applied Measurement in Education, 1997
Describes an automated scoring algorithm for a computer-based simulation examination of physicians' patient-management skills. Results with 280 medical students show that scores produced using this algorithm are highly correlated to actual clinician ratings. Scores were also effective in discriminating between case performance judged passing or…
Descriptors: Algorithms, Computer Assisted Testing, Computer Simulation, Evaluators
Reshetar, Rosemary A.; And Others – 1992
This study examined performance of a simulated computerized adaptive test that was designed to help direct the development of a medical recertification examination. The item pool consisted of 229 single-best-answer items from a random sample of 3,000 examinees, calibrated using the two-parameter logistic model. Examinees' responses were known. For…
Descriptors: Adaptive Testing, Classification, Computer Assisted Testing, Computer Simulation
Bergstrom, Betty A.; Lunz, Mary E. – 1991
The level of confidence in pass/fail decisions obtained with computer adaptive tests (CATs) was compared to decisions based on paper-and-pencil tests. Subjects included 645 medical technology students from 238 educational programs across the country. The tests used in this study constituted part of the subjects' review for the certification…
Descriptors: Adaptive Testing, Certification, Comparative Testing, Computer Assisted Testing