Showing all 7 results
Peer reviewed
Lau, Paul Ngee Kiong; Lau, Sie Hoe; Hong, Kian Sam; Usop, Hasbee – Educational Technology & Society, 2011
The number right (NR) method, in which students pick one option as the answer, is the conventional method for scoring multiple-choice tests; it is heavily criticized for encouraging students to guess and for failing to credit partial knowledge. In addition, computer technology is increasingly used in classroom assessment. This paper investigates the…
Descriptors: Guessing (Tests), Multiple Choice Tests, Computers, Scoring
Sie Hoe, Lau; Ngee Kiong, Lau; Kian Sam, Hong; Bin Usop, Hasbee – Online Submission, 2009
Assessment is central to any educational process. The Number Right (NR) scoring method is the conventional scoring method for multiple-choice items, where students pick one option as the correct answer. One point is awarded for the correct response and zero for any other response. However, it has been heavily criticized for guessing and failure…
Descriptors: Multiple Choice Tests, Computer Assisted Testing, Adaptive Testing, Scoring
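The Number Right rule summarized in the abstract above (one point for the correct response, zero for anything else) can be sketched in a few lines of Python. The function and variable names are illustrative only and are not drawn from any of the listed papers.

```python
def number_right_score(responses, answer_key):
    """Number Right (NR) scoring: one point per response that matches
    the answer key, zero for any other response (including blanks)."""
    return sum(1 for given, correct in zip(responses, answer_key)
               if given == correct)

# Example: an examinee answers 3 of 4 items correctly.
score = number_right_score(["A", "C", "B", "D"], ["A", "C", "B", "A"])
print(score)  # prints 3
```

Because every incorrect or blank response scores zero, a pure guess can only help, which is the basis of the guessing criticism these papers raise.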
Segall, Daniel O. – 1999
Two new methods for improving the measurement precision of a general test factor are proposed and evaluated. One new method provides a multidimensional item response theory estimate obtained from conventional administrations of multiple-choice test items that span general and nuisance dimensions. The other method chooses items adaptively to…
Descriptors: Ability, Adaptive Testing, Item Response Theory, Measurement Techniques
Potenza, Maria T.; Stocking, Martha L. – 1994
A multiple choice test item is identified as flawed if it has no single best answer. In spite of extensive quality control procedures, the administration of flawed items to test-takers is inevitable. Common strategies for dealing with flawed items in conventional testing, grounded in the principle of fairness to test-takers, are reexamined in the…
Descriptors: Adaptive Testing, Computer Assisted Testing, Multiple Choice Tests, Scoring
Bayroff, A.G.; And Others – 1974
This report describes an automated system for administering, scoring, and recording the results of multiple-choice tests. The system consists of an examinee station, a proctor station, and a central computer; the report describes the equipment and the programming characteristics of the respective components. The system is designed for tests tailored to the…
Descriptors: Ability, Adaptive Testing, Computer Programs, Data Processing
Civil Service Commission, Washington, DC. Personnel Research and Development Center. – 1976
This pamphlet reprints three papers and an invited discussion of them, read at a Division 5 Symposium at the 1975 American Psychological Association Convention. The first paper describes a Bayesian tailored testing process and shows how it demonstrates the importance of using test items with high discrimination, low guessing probability, and a…
Descriptors: Adaptive Testing, Bayesian Statistics, Computer Oriented Programs, Computer Programs
Weiss, David J. – 1976
Three and one-half years of research on computerized ability testing are summarized. The original objectives of the research were: (1) to develop and implement the stratified computer-based ability test; (2) to compare, on psychometric criteria, the various approaches to computer-based ability testing, including the stratified computerized test,…
Descriptors: Adaptive Testing, Bayesian Statistics, Branching, Comparative Analysis