Showing all 7 results
Peer reviewed
Schumacker, Randall E.; Lunz, Mary E. – Journal of Outcome Measurement, 1997
The different chi-square statistics reported in a many-faceted Rasch model analysis are presented and interpreted using facets of subjects, judges, sessions, topics, and tasks for 74 subjects. Chi-square statistics are useful for determining the significance and interaction effects of facets and for identifying adjustments to subjects' calibrated logit…
Descriptors: Ability, Chi Square, Item Response Theory, Research Design
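The abstract stops short of stating the model itself. For orientation, a standard statement of the many-facet Rasch model under which such chi-square statistics are computed (following Linacre, with subject, judge, and task facets shown; sessions and topics add analogous difficulty terms) is

    \log \frac{P_{nijk}}{P_{nij(k-1)}} = B_n - C_j - D_i - F_k

where B_n is the ability of subject n, C_j the severity of judge j, D_i the difficulty of task i, and F_k the difficulty of rating step k. The fixed-effects chi-square for a facet tests whether all of its elements can be treated as sharing one common measure.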
O'Neill, Thomas R.; Lunz, Mary E. – 1997
This paper illustrates a method to study rater severity across exam administrations. A multi-facet Rasch model defined the ratings as being dominated by four facets: examinee ability, rater severity, project difficulty, and task difficulty. Ten years of data from administrations of a histotechnology performance assessment were pooled and analyzed…
Descriptors: Ability, Comparative Analysis, Equated Scores, Interrater Reliability
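A minimal Python sketch of the kind of cross-administration severity check the paper describes; the calibrations, standard errors, and the |z| > 2 rule of thumb are illustrative assumptions, not values from the study:

    import math

    # Hypothetical severity calibrations (logits) for one rater from two
    # pooled administrations, with standard errors; illustrative only.
    sev_t1, se_t1 = 0.42, 0.08
    sev_t2, se_t2 = 0.55, 0.09

    # Standardized difference between the two calibrations, a common
    # check for rater severity drift across administrations.
    z = (sev_t2 - sev_t1) / math.sqrt(se_t1 ** 2 + se_t2 ** 2)
    print(f"severity shift = {sev_t2 - sev_t1:.2f} logits, z = {z:.2f}")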
Bergstrom, Betty A.; Lunz, Mary E. – 1991
This study explored the equivalence of pencil-and-paper Rasch item calibrations when used in a computer adaptive test administration. Items (n=726) were precalibrated with the pencil-and-paper test administrations. A computer adaptive test was administered to 321 medical technology students using the pencil-and-paper precalibrations in the…
Descriptors: Ability, Adaptive Testing, Algorithms, Computer Assisted Testing
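A minimal sketch of the adaptive algorithm the abstract presupposes, assuming a Rasch item bank with fixed precalibrated difficulties; all numbers are simulated, not the study's 726 items:

    import math, random

    def rasch_p(theta, b):
        # Probability of a correct response under the Rasch model.
        return 1.0 / (1.0 + math.exp(-(theta - b)))

    def ml_step(theta, responses):
        # One Newton-Raphson step toward the maximum-likelihood ability estimate.
        residual = sum(x - rasch_p(theta, b) for b, x in responses)
        info = sum(rasch_p(theta, b) * (1.0 - rasch_p(theta, b)) for b, _ in responses)
        return theta + residual / info

    random.seed(1)
    bank = [random.uniform(-2.0, 2.0) for _ in range(100)]  # fixed precalibrated difficulties
    true_theta, theta, responses = 1.0, 0.0, []

    for _ in range(20):
        b = min(bank, key=lambda d: abs(d - theta))  # Rasch items are most informative at b = theta
        bank.remove(b)
        x = 1 if random.random() < rasch_p(true_theta, b) else 0  # simulated examinee
        responses.append((b, x))
        theta = max(-4.0, min(4.0, ml_step(theta, responses)))    # clamp to keep the ML step stable

    se = 1.0 / math.sqrt(sum(rasch_p(theta, b) * (1.0 - rasch_p(theta, b)) for b, _ in responses))
    print(f"estimated ability = {theta:.2f} logits, SE = {se:.2f}")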
Lunz, Mary E. – 1997
This paper explains the multifacet technology for analyzing performance examinations and the fair average method of setting criterion standards. The multidimensional nature of performance examinations requires that the multiple, and often different, facet elements of a candidate's examination form be accounted for in the analysis. After this is…
Descriptors: Ability, Computer Assisted Testing, Criteria, Educational Technology
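The fair average method is not spelled out in the abstract; one common reading is that each candidate's expected rating is computed with judge severity and task difficulty fixed at their facet means, so all candidates are scored on a common footing. A Python sketch under that assumption, using a rating-scale Rasch model with illustrative thresholds:

    import math

    def expected_rating(measure, severity, difficulty, thresholds):
        # Expected rating under a rating-scale Rasch model; thresholds are
        # the Rasch-Andrich step difficulties F_k for categories 1..m.
        logit = measure - severity - difficulty
        unnorm, s = [1.0], 0.0        # category 0 has unnormalized probability 1
        for f in thresholds:
            s += logit - f
            unnorm.append(math.exp(s))
        total = sum(unnorm)
        return sum(k * p / total for k, p in enumerate(unnorm))

    thresholds = [-1.0, 0.0, 1.0]               # illustrative 0-3 rating scale
    mean_severity, mean_difficulty = 0.0, 0.0   # facet means by construction
    fair = expected_rating(1.2, mean_severity, mean_difficulty, thresholds)
    print(f"fair average rating = {fair:.2f}")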
Peer reviewed
Lunz, Mary E.; And Others – Applied Psychological Measurement, 1992
The effects of reviewing items and altering responses on the efficiency of computerized adaptive tests and on the resultant ability estimates were explored for medical technology students (220 students could, and 492 could not, review and alter their responses). Data do not support disallowing review. (SLD)
Descriptors: Ability, Adaptive Testing, Comparative Testing, Computer Assisted Testing
Peer reviewed
Stone, Gregory Ethan; Lunz, Mary E. – Applied Measurement in Education, 1994
Effects of reviewing items and altering responses on examinee ability estimates, test precision, test information, decision confidence, and pass/fail status were studied for 376 examinees taking 2 certification tests. Test precision is only slightly affected by review, and the average information loss can be recovered by the addition of one item. (SLD)
Descriptors: Ability, Adaptive Testing, Certification, Change
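The precision claim rests on standard Rasch information algebra: item information at ability \theta is I_i(\theta) = P_i(\theta)\,[1 - P_i(\theta)], and the standard error of the ability estimate is SE(\hat\theta) = 1 / \sqrt{\sum_i I_i(\theta)}. A well-targeted dichotomous item contributes at most 0.25 to test information, which is why a modest information loss from review can be offset by administering one additional item.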
O'Neill, Thomas R.; Lunz, Mary E. – 1996
To generalize test results beyond the particular test administration, an examinee's ability estimate must be independent of the particular items attempted, and the item difficulty calibrations must be independent of the particular sample of people attempting the items. This stability is a key concept of the Rasch model, a latent trait model of…
Descriptors: Ability, Benchmarking, Comparative Analysis, Difficulty Level
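The stability the abstract refers to follows from the model's separability. Under the Rasch model,

    \log \frac{P_{ni}}{1 - P_{ni}} = B_n - D_i,

so for two examinees attempting the same item the difference in log-odds is (B_1 - D_i) - (B_2 - D_i) = B_1 - B_2: the item difficulty cancels, making person comparisons item-free, and the symmetric argument makes item comparisons sample-free.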