Showing all 11 results
Peer reviewed
Andrew M. Olney – Grantee Submission, 2023
Multiple choice questions are traditionally expensive to produce. Recent advances in large language models (LLMs) have led to fine-tuned LLMs that generate questions competitive with human-authored questions. However, the relative capabilities of ChatGPT-family models have not yet been established for this task. We present a carefully-controlled…
Descriptors: Test Construction, Multiple Choice Tests, Test Items, Algorithms
Peer reviewed
Wang, Yu; Chiu, Chia-Yi; Köhn, Hans Friedrich – Journal of Educational and Behavioral Statistics, 2023
The multiple-choice (MC) item format has been widely used in educational assessments across diverse content domains. MC items purportedly allow for collecting richer diagnostic information. The effectiveness and economy of administering MC items may have further contributed to their popularity not just in educational assessment. The MC item format…
Descriptors: Multiple Choice Tests, Nonparametric Statistics, Test Format, Educational Assessment
Peer reviewed
Krus, David J.; Ney, Robert G. – Educational and Psychological Measurement, 1978
An algorithm for item analysis in which item discrimination indices have been defined for the distractors as well as the correct answer is presented. Also, the concept of convergent and discriminant validity is applied to items instead of tests, and is discussed as an aid to item analysis. (Author/JKS)
Descriptors: Algorithms, Item Analysis, Multiple Choice Tests, Test Items
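The option-level discrimination index Krus and Ney describe can be illustrated as a point-biserial correlation between choosing each response option (correct answer or distractor) and total test score. This is a minimal sketch of that general idea, not the authors' algorithm; the function name and data layout are illustrative assumptions:

```python
import numpy as np

def option_discrimination(responses, key):
    """Point-biserial discrimination for every response option.

    responses: (n_examinees, n_items) array of chosen option indices.
    key: length-n_items sequence of correct option indices.
    Returns a dict mapping (item, option) -> correlation with total score.
    A well-behaved correct option correlates positively; an effective
    distractor correlates negatively.
    """
    responses = np.asarray(responses)
    total = (responses == np.asarray(key)).sum(axis=1).astype(float)
    indices = {}
    for j in range(responses.shape[1]):
        for opt in np.unique(responses[:, j]):
            chose = (responses[:, j] == opt).astype(float)
            if chose.std() == 0 or total.std() == 0:
                r = 0.0  # no variance: correlation undefined, report 0
            else:
                r = np.corrcoef(chose, total)[0, 1]
            indices[(j, int(opt))] = r
    return indices
```

With this convention, a distractor that attracts high scorers shows a positive index, flagging it for review during item analysis.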
Longford, Nicholas T. – 1994
This study is a critical evaluation of the roles for coding and scoring of missing responses to multiple-choice items in educational tests. The focus is on tests in which the test-takers have little or no motivation; in such tests, omitting and not reaching (as classified by the currently adopted operational rules) are quite frequent. Data from the…
Descriptors: Algorithms, Classification, Coding, Models
Siskind, Theresa G.; Anderson, Lorin W. – 1982
The study was designed to examine the similarity of response options generated by different item writers using a systematic approach to item writing. The similarity of response options to student responses for the same item stems presented in an open-ended format was also examined. A non-systematic (subject matter expertise) approach and a…
Descriptors: Algorithms, Item Analysis, Multiple Choice Tests, Quality Control
Choppin, Bruce – 1982
On well-constructed multiple-choice tests, the most serious threat to measurement is not variation in item discrimination, but the guessing behavior that may be adopted by some students. Ways of ameliorating the effects of guessing are discussed, especially for problems in latent trait models. A new item response model, including an item parameter…
Descriptors: Ability, Algorithms, Guessing (Tests), Item Analysis
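Choppin's new model and its item parameter are not given in the abstract. As an illustration of how guessing is commonly incorporated into a latent trait model, the standard three-parameter logistic adds a lower asymptote c, the probability that an examinee of very low ability answers correctly by guessing:

```python
import math

def p_correct(theta, a, b, c):
    """Three-parameter logistic item response function.

    theta: examinee ability
    a: item discrimination, b: item difficulty
    c: pseudo-guessing parameter (lower asymptote)
    As theta -> -inf the probability approaches c rather than 0,
    which is how the model absorbs guessing on multiple-choice items.
    """
    return c + (1.0 - c) / (1.0 + math.exp(-a * (theta - b)))
```

For a four-option item, c is often near 0.25; at theta = b the probability is c + (1 - c)/2 rather than 0.5.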
Roid, Gale H.; And Others – 1980
An earlier study was extended and replicated to examine the feasibility of generating multiple-choice test questions by transforming sentences from prose instructional material. In the first study, a computer-based algorithm was used to analyze prose subject matter and to identify high-information words. Sentences containing selected words were…
Descriptors: Algorithms, Computer Assisted Testing, Criterion Referenced Tests, Difficulty Level
Roid, Gale; Finn, Patrick – 1978
The feasibility of generating multiple-choice test questions by transforming sentences from prose instructional materials was examined. A computer-based algorithm was used to analyze prose subject matter and to identify high-information words. Sentences containing selected words were then transformed into multiple-choice items by four writers who…
Descriptors: Algorithms, Criterion Referenced Tests, Difficulty Level, Form Classes (Languages)
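The Roid studies describe selecting high-information words from prose and transforming the containing sentences into multiple-choice items. A minimal sketch of that pipeline, using rarity in a reference frequency table as a stand-in for "high-information" word selection (the function name and data structures are hypothetical, not the studies' algorithm):

```python
import re

def make_cloze_item(sentence, corpus_freq, n_foils=3):
    """Turn a prose sentence into a cloze-style multiple-choice item.

    corpus_freq: dict mapping lowercase word -> frequency in a
    reference corpus. The rarest word in the sentence is treated as
    the high-information keyword, blanked to form the stem, and other
    rare words from the table serve as algorithmically chosen foils.
    """
    words = re.findall(r"[A-Za-z]+", sentence)
    keyword = min(words, key=lambda w: corpus_freq.get(w.lower(), 0))
    stem = sentence.replace(keyword, "_____", 1)
    foils = [w for w, _ in sorted(corpus_freq.items(), key=lambda kv: kv[1])
             if w != keyword.lower()][:n_foils]
    return {"stem": stem, "answer": keyword, "options": [keyword] + foils}
```

This corresponds to the "algorithmic" foil-construction condition in the comparison; the "writer's choice" condition would substitute human-authored foils at the last step.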
Roid, Gale; And Others – 1978
Several measurement theorists have convincingly argued that methods of writing test questions, particularly for criterion-referenced tests, should be based on operationally defined rules. This study was designed to examine and further refine a method for objectively generating multiple-choice questions for prose instructional materials. Important…
Descriptors: Algorithms, Criterion Referenced Tests, High Schools, Higher Education
Roid, Gale; Haladyna, Tom – 1978
The technology of transforming sentences from prose instruction into test questions was examined by comparing two methods of selecting sentences (keyword vs. rare singleton), two types of question words (nouns vs. adjectives), and two foil construction methods (writer's choice vs. algorithmic). Four item writers created items using each…
Descriptors: Algorithms, Cloze Procedure, Comparative Analysis, Criterion Referenced Tests
Vale, C. David; Weiss, David J. – 1977
Twenty multiple-choice vocabulary items and 20 free-response vocabulary items were administered to 660 college students. The free-response items consisted of the stem words of the multiple-choice items. Testees were asked to respond to the free-response items with synonyms. A computer algorithm was developed to transform the numerous…
Descriptors: Ability, Adaptive Testing, Algorithms, Aptitude Tests