Publication Date
| In 2026 | 0 |
| Since 2025 | 0 |
| Since 2022 (last 5 years) | 0 |
| Since 2017 (last 10 years) | 0 |
| Since 2007 (last 20 years) | 4 |
Descriptor
| Comparative Testing | 12 |
| Scoring | 12 |
| Test Items | 12 |
| Multiple Choice Tests | 7 |
| Higher Education | 5 |
| Test Reliability | 5 |
| Achievement Tests | 4 |
| College Students | 4 |
| Computer Assisted Testing | 4 |
| Test Format | 4 |
| High School Students | 3 |
Source
| Applied Psychological… | 2 |
| Journal of Educational… | 2 |
| Educational Measurement:… | 1 |
| Evaluation and the Health… | 1 |
| Journal of Applied Testing… | 1 |
| Physical Review Special… | 1 |
Author
| Anderson, Dan | 1 |
| Anderson, Paul S. | 1 |
| Bejar, Isaac I. | 1 |
| Ben-Shakhar, Gershon | 1 |
| Bennett, Randy Elliot | 1 |
| Downey, Ronald G. | 1 |
| Du Bose, Pansy | 1 |
| Harasym, P. H. | 1 |
| Hou, Xiaodong | 1 |
| Hyers, Albert D. | 1 |
| Kim, Sooyeon | 1 |
Publication Type
| Journal Articles | 8 |
| Reports - Research | 7 |
| Reports - Evaluative | 4 |
| Speeches/Meeting Papers | 3 |
| Reports - Descriptive | 1 |
Education Level
| Elementary Secondary Education | 2 |
| High Schools | 1 |
| Higher Education | 1 |
| Postsecondary Education | 1 |
| Secondary Education | 1 |
Assessments and Surveys
| Advanced Placement… | 1 |
| Alabama High School… | 1 |
Slepkov, Aaron D.; Shiell, Ralph C. – Physical Review Special Topics - Physics Education Research, 2014
Constructed-response (CR) questions are a mainstay of introductory physics textbooks and exams. However, because of the time, cost, and scoring reliability constraints associated with this format, CR questions are being increasingly replaced by multiple-choice (MC) questions in formal exams. The integrated testlet (IT) is a recently developed…
Descriptors: Science Tests, Physics, Responses, Multiple Choice Tests
Kim, Sooyeon; Walker, Michael E.; McHale, Frederick – Journal of Educational Measurement, 2010
In this study we examined variations of the nonequivalent groups equating design for tests containing both multiple-choice (MC) and constructed-response (CR) items to determine which design was most effective in producing equivalent scores across the two tests to be equated. Using data from a large-scale exam, this study investigated the use of…
Descriptors: Measures (Individuals), Scoring, Equated Scores, Test Bias
Lissitz, Robert W.; Hou, Xiaodong; Slater, Sharon Cadman – Journal of Applied Testing Technology, 2012
This article investigates several questions regarding the impact of different item formats on measurement characteristics. Constructed response (CR) items and multiple choice (MC) items obviously differ in their formats and in the resources needed to score them. As such, they have been the subject of considerable discussion regarding the impact of…
Descriptors: Computer Assisted Testing, Scoring, Evaluation Problems, Psychometrics
Raymond, Mark R.; Neustel, Sandra; Anderson, Dan – Educational Measurement: Issues and Practice, 2009
Examinees who take high-stakes assessments are usually given an opportunity to repeat the test if they are unsuccessful on their initial attempt. To prevent examinees from obtaining unfair score increases by memorizing the content of specific test items, testing agencies usually assign a different test form to repeat examinees. The use of multiple…
Descriptors: Test Results, Test Items, Testing, Aptitude Tests
Bennett, Randy Elliot; And Others – Applied Psychological Measurement, 1990 (peer reviewed)
The relationship of an expert-system-scored constrained free-response item type to multiple-choice and free-response items was studied using data for 614 students on the College Board's Advanced Placement Computer Science (APCS) Examination. Implications for testing and the APCS test are discussed. (SLD)
Descriptors: College Students, Comparative Testing, Computer Assisted Testing, Computer Science
Harasym, P. H.; And Others – Evaluation and the Health Professions, 1980 (peer reviewed)
Coded items, as opposed to free-response items, on a multiple-choice physiology test had a cueing effect that raised students' scores, especially for lower achievers. Reliability of the coded items was also lower. Item format and scoring method affected test results. (GDC)
Descriptors: Achievement Tests, Comparative Testing, Cues, Higher Education
Ben-Shakhar, Gershon; Sinai, Yakov – Journal of Educational Measurement, 1991 (peer reviewed)
Gender differences in omitting items and guessing on multiple-choice tests were studied in Israel for 302 male and 302 female ninth graders and 150 male and 150 female university applicants. Females tended to omit more items and guess less often than did males. Implications for scoring are discussed. (SLD)
Descriptors: Aptitude Tests, Cognitive Ability, College Applicants, Comparative Testing
Downey, Ronald G. – Applied Psychological Measurement, 1979 (peer reviewed)
This research attempted to interrelate several methods of producing option weights (i.e., Guttman internal and external weights and judges' weights) and examined their effects on reliability and on concurrent, predictive, and face validity. It was concluded that option weighting offered limited, if any, improvement over unit weighting. (Author/CTM)
Descriptors: Achievement Tests, Answer Keys, Comparative Testing, High Schools
Du Bose, Pansy; Kromrey, Jeffrey D. – 1993
Empirical evidence is presented of the relative efficiency of two potential linkage plans to be used when equivalent test forms are being administered. Equating is a process by which scores on one form of a test are converted to scores on another form of the same test. A Monte Carlo study was conducted to examine equating stability and statistical…
Descriptors: Art Education, Comparative Testing, Computer Simulation, Equated Scores
Hyers, Albert D.; Anderson, Paul S. – 1991
Using matched pairs of geography questions, a new testing method for machine-scored fill-in-the-blank, multiple-digit testing (MDT) questions was compared to the traditional multiple-choice (MC) style. Data were from 118 matched or parallel test items for 4 tests from 764 college students of geography. The new method produced superior results when…
Descriptors: College Students, Comparative Testing, Computer Assisted Testing, Difficulty Level
Steele, D. Joyce – 1991
This paper compares descriptive information based on analyses of the pilot and live administrations of the Alabama High School Graduation Examination (AHSGE). The AHSGE, a product of decisions made in 1977 and 1984 by the Alabama State Board of Education, is composed of subject tests in reading, mathematics, and language. The pass score for each…
Descriptors: Comparative Testing, Difficulty Level, Grade 11, Graduation Requirements
Bejar, Isaac I.; And Others – 1977
Information provided by typical and improved conventional classroom achievement tests was compared with information provided by an adaptive test covering the same subject matter. Both tests were administered to over 700 college students in a general biology course. Using the same scoring method, adaptive testing was found to yield substantially…
Descriptors: Academic Achievement, Achievement Tests, Adaptive Testing, Biology

