Showing all 8 results
Bergstrom, Betty A.; Lunz, Mary E. – 1998
This paper addresses questions of whether positively- and negatively-worded items measure the same construct and whether the rating scale categories "strongly agree" to "strongly disagree" are used in the same way for both types of items. Item response theory (IRT), specifically the Andrich Rating Scale Model (B. Wright and G.…
Descriptors: Adults, Item Response Theory, Rating Scales, Research Methodology
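For reference, the Andrich Rating Scale Model named in this abstract gives the probability that person n responds in category x of item i. A standard sketch follows; the notation is generic and not taken from the paper:

$$
P(X_{ni}=x) \;=\; \frac{\exp\sum_{k=0}^{x}(\theta_n - \delta_i - \tau_k)}{\sum_{m=0}^{K}\exp\sum_{k=0}^{m}(\theta_n - \delta_i - \tau_k)}, \qquad \tau_0 \equiv 0,
$$

where θ_n is the person measure, δ_i the item's overall difficulty, and τ_1, …, τ_K the category thresholds, which the model constrains to be the same for every item; comparing threshold estimates for positively- and negatively-worded items is one way to test whether the rating scale categories are used in the same way for both item types.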
Bergstrom, Betty A.; Lunz, Mary E. – 1991
This study explored the equivalence of paper-and-pencil Rasch item calibrations when used in a computerized adaptive test administration. Items (n=726) were precalibrated with the paper-and-pencil test administrations. A computer adaptive test was administered to 321 medical technology students using the paper-and-pencil precalibrations in the…
Descriptors: Ability, Adaptive Testing, Algorithms, Computer Assisted Testing
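The design summarized above, in which items are calibrated once from paper-and-pencil data and then reused as fixed parameters in an adaptive administration, can be sketched roughly as follows. This is a minimal illustration assuming a dichotomous Rasch model, maximum-likelihood ability estimation, and a simple closest-difficulty selection rule; the item bank, test length, and the answer callback are hypothetical and not drawn from the study.

```python
import math

def rasch_prob(theta, b):
    """Probability of a correct response under the dichotomous Rasch model."""
    return 1.0 / (1.0 + math.exp(-(theta - b)))

def estimate_ability(responses, theta=0.0, max_iter=20, tol=1e-4):
    """Newton-Raphson maximum-likelihood ability estimate from (difficulty, score) pairs.
    Perfect (all-correct or all-incorrect) response strings have no finite MLE and
    would need special handling in a real system."""
    for _ in range(max_iter):
        p = [rasch_prob(theta, b) for b, _ in responses]
        grad = sum(x - pi for (_, x), pi in zip(responses, p))
        info = sum(pi * (1 - pi) for pi in p)
        if info <= 0:
            break
        step = grad / info
        theta += step
        if abs(step) < tol:
            break
    info = sum(rasch_prob(theta, b) * (1 - rasch_prob(theta, b)) for b, _ in responses)
    se = 1.0 / math.sqrt(info) if info > 0 else float("inf")
    return theta, se

def adaptive_test(bank, answer, theta=0.0, test_length=30):
    """Administer the unused item whose precalibrated difficulty is closest to the
    current ability estimate, then re-estimate ability after each response."""
    used, responses, se = set(), [], float("inf")
    for _ in range(min(test_length, len(bank))):
        i = min((j for j in range(len(bank)) if j not in used),
                key=lambda j: abs(bank[j] - theta))
        used.add(i)
        responses.append((bank[i], answer(i)))  # answer(i) returns 1 (correct) or 0
        theta, se = estimate_ability(responses, theta)
    return theta, se
```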
Lunz, Mary E.; Bergstrom, Betty A. – 1995
The Board of Registry (BOR) certifies medical technologists and other laboratory personnel. The BOR has studied adaptive testing for over 6 years and now administers all 17 BOR certification examinations using computerized adaptive testing (CAT). This paper presents an overview of the major research efforts from 1989 to the present related to test…
Descriptors: Adaptive Testing, Computer Assisted Testing, Decision Making, Equated Scores
Peer reviewed
Lunz, Mary E.; Bergstrom, Betty A. – Journal of Educational Measurement, 1994
The impact of computerized adaptive test (CAT) administration formats on student performance was studied with 645 medical technology students who also took a paper-and-pencil test. Analysis of covariance indicates no significant interactions among test administration formats and provides evidence for adjusting the CAT to more familiar modalities.…
Descriptors: Academic Achievement, Adaptive Testing, Analysis of Covariance, Computer Assisted Testing
Lunz, Mary E.; Stahl, John A. – 1990
Three examinations administered to medical students were analyzed to determine differences in the severity of judges' assessments and differences across grading periods. The examinations included essay, clinical, and oral forms of the tests. Twelve judges graded the three essays for 32 examinees during a 4-day grading session, which was divided into eight…
Descriptors: Clinical Diagnosis, Comparative Testing, Difficulty Level, Essay Tests
Peer reviewed
Bergstrom, Betty A.; Lunz, Mary E. – Evaluation and the Health Professions, 1992
For 645 medical technology students, the level of confidence in pass/fail decisions was greater when the computerized adaptive test implemented a 90 percent confidence stopping rule than with paper-and-pencil tests of comparable length. (SLD)
Descriptors: Adaptive Testing, Comparative Testing, Computer Assisted Testing, Confidence Testing
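A confidence stopping rule of the kind described here is typically implemented by ending the adaptive test once the confidence interval around the current ability estimate no longer contains the pass/fail cut score. A minimal sketch under a normal approximation with a two-sided 90 percent band; the function name and arguments are illustrative, not taken from the study.

```python
Z_90 = 1.645  # two-sided 90% critical value of the standard normal

def pass_fail_status(theta_hat, se, cut_score, z=Z_90):
    """Return 'pass' or 'fail' once the 90% confidence interval around the
    ability estimate clears the cut score; return None to keep testing."""
    if theta_hat - z * se >= cut_score:
        return "pass"
    if theta_hat + z * se <= cut_score:
        return "fail"
    return None  # interval still spans the cut score; administer another item
```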
Stone, Gregory Ethan; Lunz, Mary E. – 1994
This paper explores the comparability of item calibrations for three types of items: (1) text only; (2) text with photographs; and (3) text plus graphics when items are presented on written tests and computerized adaptive tests. Data are from five different medical technology certification examinations administered nationwide in 1993. The Rasch…
Descriptors: Adaptive Testing, Comparative Analysis, Computer Assisted Testing, Diagrams
Lunz, Mary E.; And Others – 1989
A method for understanding and controlling the multiple facets of an oral examination (OE) or other judge-intermediated examination is presented and illustrated. This study focused on determining the extent to which the facets model (FM) analysis constructs meaningful variables for each facet of an OE involving protocols, examiners, and…
Descriptors: Computer Software, Difficulty Level, Evaluators, Examiners
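The facets model analysis described in this abstract is usually written in its many-facet Rasch (log-odds) form. As a generic sketch, with notation assumed rather than taken from the paper:

$$
\log\frac{P_{nijk}}{P_{nij(k-1)}} \;=\; B_n - D_i - C_j - F_k,
$$

where B_n is the examinee's ability, D_i the difficulty of protocol i, C_j the severity of examiner j, and F_k the difficulty of rating category k relative to category k−1; each facet is thereby calibrated on a common log-odds scale.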