Showing all 9 results
Lunz, Mary E.; And Others – 1989
This paper describes and illustrates a method for equating examinations with multiple facets (i.e., items, examinees, judges, tasks, and rating scales). The data are from the practical section of two histotechnology certification examinations. The first practical examination involved 210 examinees, 14 judges, 15 slides, 3 tasks, and 2 rating…
Descriptors: Difficulty Level, Equated Scores, Latent Trait Theory, Licensing Examinations (Professions)
Peer reviewed
Lunz, Mary E.; And Others – Applied Measurement in Education, 1990
An extension of the Rasch model is used to obtain objective measurements for examinations graded by judges. The model calibrates elements of each facet of the examination on a common log-linear scale. Real examination data illustrate the way correcting for judge severity improves fairness of examinee measures. (SLD)
Descriptors: Certification, Difficulty Level, Interrater Reliability, Judges
Peer reviewed
Lunz, Mary E.; Stahl, John A. – Teaching and Learning in Medicine, 1993
A discussion of multifacet Rasch model analysis describes the Rasch model and its assumptions, then presents an extension of the model to include a facet for the influence of examiner severity. The model is illustrated with an application to an oral examination administered by a medical specialty board. (Author/MSE)
Descriptors: Higher Education, Licensing Examinations (Professions), Medical Education, Models
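The abstracts above describe extending the Rasch model with a facet for examiner severity, with all facets calibrated on a common logit scale. A minimal sketch of the idea, assuming a dichotomous three-facet form (the facet values and function names here are illustrative, not drawn from the studies):

```python
import math

def p_success(ability, item_difficulty, judge_severity):
    """Hypothetical three-facet dichotomous Rasch model: the log-odds
    of success are examinee ability minus item difficulty minus judge
    severity, all expressed in logits on a common scale."""
    logit = ability - item_difficulty - judge_severity
    return 1.0 / (1.0 + math.exp(-logit))

# For the same examinee (ability 1.0) and item (difficulty 0.0),
# a lenient judge (severity -0.5) yields a higher expected score
# than a severe judge (severity +0.5); calibrating the judge facet
# lets the analysis correct measures for this difference.
lenient = p_success(1.0, 0.0, -0.5)
severe = p_success(1.0, 0.0, 0.5)
```

Because severity enters the model as an explicit parameter, examinee measures can be adjusted for which judge happened to grade them, which is the fairness correction the Lunz et al. studies illustrate.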
Peer reviewed
Lunz, Mary E.; Schumacker, Randall E. – Journal of Outcome Measurement, 1997
Results and interpretations of data from a performance examination of 74 medical specialty certification candidates were compared across four methods of analysis: (1) traditional summary statistics; (2) inter-judge correlations; (3) generalizability theory; and (4) the multifaceted Rasch model. Advantages of the Rasch model are outlined. (SLD)
Descriptors: Comparative Analysis, Data Analysis, Generalizability Theory, Interrater Reliability
Peer reviewed
Lunz, Mary E.; Stahl, John A. – Evaluation and the Health Professions, 1990
Examinations were analyzed using the Rasch model to determine differences in judge severity and grading period stringency for (1) essay examination (subjects were 12 judges and 32 examinees); (2) clinical examination (subjects were 18 judges and 217 examinees); and (3) oral examination (subjects were 46 judges and 270 examinees). (SLD)
Descriptors: Certification, Essay Tests, Evaluators, Examiners
Lunz, Mary E.; Bergstrom, Betty A. – 1995
The Board of Registry (BOR) certifies medical technologists and other laboratory personnel. The BOR has studied adaptive testing for over 6 years and now administers all 17 BOR certification examinations using computerized adaptive testing (CAT). This paper presents an overview of the major research efforts from 1989 to the present related to test…
Descriptors: Adaptive Testing, Computer Assisted Testing, Decision Making, Equated Scores
Lunz, Mary E.; And Others – 1990
This study explores the test-retest consistency of computer adaptive tests of varying lengths. The testing model used was designed as a mastery model to determine whether an examinee's estimated ability level is above or below a pre-established criterion expressed in the metric (logits) of the calibrated item pool scale. The Rasch model was used…
Descriptors: Ability Identification, Adaptive Testing, College Students, Comparative Testing
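The mastery model described in this abstract classifies an examinee as above or below a pre-established criterion expressed in logits on the calibrated item pool scale. One common stopping rule for such adaptive mastery tests can be sketched as follows (a hypothetical rule for illustration; the specific decision rule and confidence level used in the study are not given in the abstract):

```python
def mastery_decision(theta_hat, se, cut_logit, z=1.65):
    """Classify an examinee against a fixed criterion on the logit
    scale of a calibrated item pool. The test stops with a pass/fail
    call once the ability estimate is z standard errors clear of the
    cut score; otherwise more items are administered."""
    if theta_hat - z * se >= cut_logit:
        return "pass"
    if theta_hat + z * se <= cut_logit:
        return "fail"
    return "continue"

# Estimates well clear of the cut resolve quickly; estimates near
# the cut keep the adaptive test running, which is why test length
# varies across examinees.
decision = mastery_decision(theta_hat=1.2, se=0.3, cut_logit=0.0)
```

Under a rule like this, test-retest consistency depends on how often examinees near the cut score land on opposite sides across administrations, which is the question the varying-length comparison addresses.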
Bergstrom, Betty A.; Lunz, Mary E. – 1991
The level of confidence in pass/fail decisions obtained with computer adaptive tests (CATs) was compared to decisions based on paper-and-pencil tests. Subjects included 645 medical technology students from 238 educational programs across the country. The tests used in this study constituted part of the subjects' review for the certification…
Descriptors: Adaptive Testing, Certification, Comparative Testing, Computer Assisted Testing
Lunz, Mary E.; And Others – 1989
A method for understanding and controlling the multiple facets of an oral examination (OE) or other judge-intermediated examination is presented and illustrated. This study focused on determining the extent to which the facets model (FM) analysis constructs meaningful variables for each facet of an OE involving protocols, examiners, and…
Descriptors: Computer Software, Difficulty Level, Evaluators, Examiners