
Schnipke, Deborah L.; Green, Bert F. – Journal of Educational Measurement, 1995
Two item selection algorithms, one based on maximal differentiation between examinees and one based on item response theory and maximum information for each examinee, were compared in simulated linear and adaptive tests of cognitive ability. Adaptive tests based on maximum information were clearly superior. (SLD)
Descriptors: Adaptive Testing, Algorithms, Comparative Analysis, Item Response Theory
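The abstract does not give implementation details, but the maximum-information criterion it refers to is standard in item response theory: at each step the adaptive test administers the unused item with the greatest Fisher information at the examinee's current ability estimate. A minimal sketch, assuming a two-parameter logistic (2PL) model and an item bank of hypothetical (discrimination, difficulty) pairs:

```python
import math

def prob_2pl(theta, a, b):
    """Probability of a correct response under the 2PL IRT model."""
    return 1.0 / (1.0 + math.exp(-a * (theta - b)))

def item_information(theta, a, b):
    """Fisher information of a 2PL item at ability theta: I = a^2 * P * (1 - P)."""
    p = prob_2pl(theta, a, b)
    return a * a * p * (1.0 - p)

def select_max_info_item(theta, item_bank, administered):
    """Return the index of the unadministered item with maximum
    information at the current ability estimate theta."""
    candidates = (i for i in range(len(item_bank)) if i not in administered)
    return max(candidates,
               key=lambda i: item_information(theta, *item_bank[i]))

# Example: with theta = 0, the item whose difficulty matches theta wins.
bank = [(1.0, -2.0), (1.0, 0.0), (1.0, 2.0)]  # (a, b) pairs, illustrative
chosen = select_max_info_item(0.0, bank, administered=set())
```

Under the 2PL model, information peaks where difficulty equals ability, which is why information-driven selection concentrates items near the examinee's estimated level rather than maximally spreading examinees apart.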

Clauser, Brian E.; Margolis, Melissa J.; Clyman, Stephen G.; Ross, Linette P. – Journal of Educational Measurement, 1997
Research on automated scoring is extended by comparing alternative automated systems for scoring a computer simulation of physicians' patient management skills. A regression-based system is more highly correlated with experts' evaluations than a system that uses complex rules to map performances into score levels, but both approaches are feasible.…
Descriptors: Algorithms, Automation, Comparative Analysis, Computer Assisted Testing
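The abstract contrasts two scoring approaches without specifying either system. As a minimal sketch of the general idea, not the authors' actual method: a regression-based scorer fits expert ratings as a linear function of a performance feature, while a rule-based scorer maps the same feature into discrete score levels through fixed cut points. The feature, ratings, and thresholds below are illustrative assumptions.

```python
def fit_score_model(features, ratings):
    """Ordinary least-squares fit of expert ratings on a single
    performance feature (e.g., a count of appropriate actions)."""
    n = len(features)
    mx = sum(features) / n
    my = sum(ratings) / n
    sxx = sum((x - mx) ** 2 for x in features)
    sxy = sum((x - mx) * (y - my) for x, y in zip(features, ratings))
    slope = sxy / sxx
    return slope, my - slope * mx

def predict_score(model, feature):
    """Regression-based score: intercept + slope * feature."""
    slope, intercept = model
    return intercept + slope * feature

def rule_based_score(feature, thresholds=(2, 4, 6)):
    """Illustrative rule-based mapping: count how many fixed cut
    points the feature value meets or exceeds."""
    return sum(feature >= t for t in thresholds)

# Example: calibrate the regression scorer on hypothetical expert ratings.
model = fit_score_model([1, 2, 3, 4], [2, 4, 6, 8])
```

In practice the regression weights are calibrated against expert ratings of a sample of performances, whereas the rule-based levels are specified by experts in advance; the abstract reports that the calibrated regression scores correlated more strongly with the experts' evaluations.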