Publication Date
| Publication Date | Records |
| --- | --- |
| In 2026 | 0 |
| Since 2025 | 0 |
| Since 2022 (last 5 years) | 1 |
| Since 2017 (last 10 years) | 1 |
| Since 2007 (last 20 years) | 5 |
Descriptor
| Descriptor | Records |
| --- | --- |
| Comparative Testing | 16 |
| Computer Assisted Testing | 16 |
| Multiple Choice Tests | 16 |
| Higher Education | 9 |
| Test Items | 7 |
| College Students | 6 |
| Test Format | 6 |
| Difficulty Level | 5 |
| Scores | 5 |
| Scoring | 5 |
| Test Validity | 4 |
Author
| Author | Records |
| --- | --- |
| Anderson, Paul S. | 3 |
| Bridgeman, Brent | 2 |
| Hyers, Albert D. | 2 |
| Albanese, Mark A. | 1 |
| Bennett, Randy Elliot | 1 |
| Boughton, Keith | 1 |
| Hou, Xiaodong | 1 |
| Jimmy de la Torre | 1 |
| Jinran Wu | 1 |
| Kanzler, Eileen M. | 1 |
| Kent, Thomas H. | 1 |
Publication Type
| Publication Type | Records |
| --- | --- |
| Reports - Research | 14 |
| Journal Articles | 13 |
| Speeches/Meeting Papers | 3 |
| Reports - Evaluative | 2 |
Education Level
| Education Level | Records |
| --- | --- |
| Higher Education | 3 |
| Elementary Secondary Education | 2 |
| Elementary Education | 1 |
| Grade 5 | 1 |
| High Schools | 1 |
| Postsecondary Education | 1 |
| Secondary Education | 1 |
| Two Year Colleges | 1 |
Audience
| Audience | Records |
| --- | --- |
| Researchers | 1 |
Location
| Location | Records |
| --- | --- |
| Canada | 1 |
| Maryland | 1 |
| Netherlands | 1 |
| Virginia | 1 |
Assessments and Surveys
| Assessment | Records |
| --- | --- |
| Graduate Record Examinations | 2 |
| Advanced Placement… | 1 |
Xuelan Qiu; Jimmy de la Torre; You-Gan Wang; Jinran Wu – Educational Measurement: Issues and Practice, 2024
Multidimensional forced-choice (MFC) items have been found useful for reducing response biases in personality assessments. However, conventional scoring methods for MFC items result in ipsative data, hindering wider application of the MFC format. In the last decade, a number of item response theory (IRT) models have been developed,…
Descriptors: Item Response Theory, Personality Traits, Personality Measures, Personality Assessment
Ventouras, Errikos; Triantis, Dimos; Tsiakas, Panagiotis; Stergiopoulos, Charalampos – Computers & Education, 2011
The aim of the present research was to compare the use of multiple-choice questions (MCQs) as an examination method against the oral examination (OE) method. MCQs are widely used and their importance seems likely to grow, due to their inherent suitability for electronic assessment. However, MCQs are influenced by the tendency of examinees to guess…
Descriptors: Grades (Scholastic), Scoring, Multiple Choice Tests, Test Format
Park, Jooyong – British Journal of Educational Technology, 2010
The newly developed computerized Constructive Multiple-choice Testing system is introduced. The system combines short answer (SA) and multiple-choice (MC) formats by asking examinees to respond to the same question twice, first in the SA format, and then in the MC format. This manipulation was employed to collect information about the two…
Descriptors: Grade 5, Evaluation Methods, Multiple Choice Tests, Scores
Lissitz, Robert W.; Hou, Xiaodong; Slater, Sharon Cadman – Journal of Applied Testing Technology, 2012
This article investigates several questions regarding the impact of different item formats on measurement characteristics. Constructed response (CR) items and multiple choice (MC) items obviously differ in their formats and in the resources needed to score them. As such, they have been the subject of considerable discussion regarding the impact of…
Descriptors: Computer Assisted Testing, Scoring, Evaluation Problems, Psychometrics
Puhan, Gautam; Boughton, Keith; Kim, Sooyeon – Journal of Technology, Learning, and Assessment, 2007
The study evaluated the comparability of two versions of a certification test: a paper-and-pencil test (PPT) and a computer-based test (CBT). An effect size measure known as Cohen's d and differential item functioning (DIF) analyses were used as measures of comparability at the test and item levels, respectively. Results indicated that the effect…
Descriptors: Computer Assisted Testing, Effect Size, Test Bias, Mathematics Tests
Peer reviewed: Bennett, Randy Elliot; And Others – Applied Psychological Measurement, 1990
The relationship of an expert-system-scored constrained free-response item type to multiple-choice and free-response items was studied using data for 614 students on the College Board's Advanced Placement Computer Science (APCS) Examination. Implications for testing and the APCS test are discussed. (SLD)
Descriptors: College Students, Comparative Testing, Computer Assisted Testing, Computer Science
Peer reviewed: Bridgeman, Brent; Rock, Donald A. – Journal of Educational Measurement, 1993
Exploratory and confirmatory factor analyses were used to explore relationships among existing item types and three new computer-administered item types for the analytical scale of the Graduate Record Examination General Test. Results for 349 students indicate the constructs that the item types are measuring. (SLD)
Descriptors: College Entrance Examinations, College Students, Comparative Testing, Computer Assisted Testing
Laird, Barbara B. – Inquiry, 2003
Laird compares two computerized nursing tests and finds a relationship between the two sets of scores. (Contains 2 tables.)
Descriptors: Nursing Education, Nurses, Computer Assisted Testing, Comparative Testing
Anderson, Paul S.; Hyers, Albert D. – 1991
Three descriptive statistics (difficulty, discrimination, and reliability) of multiple-choice (MC) test items were compared to those of a new (1980s) format of machine-scored questions. The new method, answer-bank multi-digit testing (MDT), uses alphabetized lists of up to 1,000 alternatives and approximates the completion style of assessment…
Descriptors: College Students, Comparative Testing, Computer Assisted Testing, Correlation
Peer reviewed: Kent, Thomas H.; Albanese, Mark A. – Evaluation and the Health Professions, 1987
Two types of computer-administered unit quizzes in a systematic pathology course for second-year medical students were compared. Quizzes composed of questions selected on the basis of a student's ability had higher correlations with the final examination than did quizzes composed of questions randomly selected from topic areas. (Author/JAZ)
Descriptors: Adaptive Testing, Comparative Testing, Computer Assisted Testing, Difficulty Level
Peer reviewed: van den Bergh, Huub – Applied Psychological Measurement, 1990
In this study, 590 third graders from 12 Dutch schools took 32 tests indicating 16 semantic Structure-of-Intellect (SI) abilities and 1 of 4 reading comprehension tests, involving either multiple-choice or open-ended items. Results indicate that item type for reading comprehension is congeneric with respect to SI abilities measured. (TJH)
Descriptors: Comparative Testing, Computer Assisted Testing, Construct Validity, Elementary Education
Peer reviewed: Bridgeman, Brent – Journal of Educational Measurement, 1992
Examinees in a regular administration of the quantitative portion of the Graduate Record Examination responded to particular items in a machine-scannable multiple-choice format. Volunteers (n=364) used a computer to answer open-ended counterparts of these items. Scores for both formats demonstrated similar correlational patterns. (SLD)
Descriptors: Answer Sheets, College Entrance Examinations, College Students, Comparative Testing
Hyers, Albert D.; Anderson, Paul S. – 1991
Using matched pairs of geography questions, a new testing method for machine-scored fill-in-the-blank, multiple-digit testing (MDT) questions was compared to the traditional multiple-choice (MC) style. Data were from 118 matched or parallel test items for 4 tests from 764 college students of geography. The new method produced superior results when…
Descriptors: College Students, Comparative Testing, Computer Assisted Testing, Difficulty Level
Anderson, Paul S.; Kanzler, Eileen M. – 1985
Test scores were compared for two types of objective achievement tests--multiple choice tests and the recently developed Multi-Digit Test (MDT) procedure. MDT is an approximation of the fill-in-the-blank technique. Students select their answers from long lists of alphabetized terms, with each answer corresponding to a number from 001 to 999. The…
Descriptors: Achievement Tests, Cloze Procedure, Comparative Testing, Computer Assisted Testing
Peer reviewed: Skakun, Ernest N.; And Others – Educational and Psychological Measurement, 1979
Factor analysis was used to determine whether computerized patient management problems had the same factor structure as multiple choice examinations and rating scales. It was determined that the factor structure was similar to the examinations but not the rating scale. (JKS)
Descriptors: Comparative Testing, Computer Assisted Testing, Computer Programs, Factor Structure
