Showing 1 to 15 of 16 results
Peer reviewed
Xuelan Qiu; Jimmy de la Torre; You-Gan Wang; Jinran Wu – Educational Measurement: Issues and Practice, 2024
Multidimensional forced-choice (MFC) items have been found useful for reducing response biases in personality assessments. However, conventional scoring methods for MFC items result in ipsative data, hindering wider application of the MFC format. In the last decade, a number of item response theory (IRT) models have been developed,…
Descriptors: Item Response Theory, Personality Traits, Personality Measures, Personality Assessment
Peer reviewed
Ventouras, Errikos; Triantis, Dimos; Tsiakas, Panagiotis; Stergiopoulos, Charalampos – Computers & Education, 2011
The aim of the present research was to compare the use of multiple-choice questions (MCQs) as an examination method against the oral examination (OE) method. MCQs are widely used and their importance seems likely to grow, due to their inherent suitability for electronic assessment. However, MCQs are influenced by the tendency of examinees to guess…
Descriptors: Grades (Scholastic), Scoring, Multiple Choice Tests, Test Format
Peer reviewed
Park, Jooyong – British Journal of Educational Technology, 2010
The newly developed computerized Constructive Multiple-choice Testing system is introduced. The system combines short answer (SA) and multiple-choice (MC) formats by asking examinees to respond to the same question twice, first in the SA format, and then in the MC format. This manipulation was employed to collect information about the two…
Descriptors: Grade 5, Evaluation Methods, Multiple Choice Tests, Scores
Peer reviewed
Lissitz, Robert W.; Hou, Xiaodong; Slater, Sharon Cadman – Journal of Applied Testing Technology, 2012
This article investigates several questions regarding the impact of different item formats on measurement characteristics. Constructed response (CR) items and multiple choice (MC) items obviously differ in their formats and in the resources needed to score them. As such, they have been the subject of considerable discussion regarding the impact of…
Descriptors: Computer Assisted Testing, Scoring, Evaluation Problems, Psychometrics
Puhan, Gautam; Boughton, Keith; Kim, Sooyeon – Journal of Technology, Learning, and Assessment, 2007
The study evaluated the comparability of two versions of a certification test: a paper-and-pencil test (PPT) and a computer-based test (CBT). An effect size measure known as Cohen's d and differential item functioning (DIF) analyses were used as measures of comparability at the test and item levels, respectively. Results indicated that the effect…
Descriptors: Computer Assisted Testing, Effect Size, Test Bias, Mathematics Tests
Peer reviewed
Bennett, Randy Elliot; And Others – Applied Psychological Measurement, 1990
The relationship of an expert-system-scored constrained free-response item type to multiple-choice and free-response items was studied using data for 614 students on the College Board's Advanced Placement Computer Science (APCS) Examination. Implications for testing and the APCS test are discussed. (SLD)
Descriptors: College Students, Comparative Testing, Computer Assisted Testing, Computer Science
Peer reviewed
Bridgeman, Brent; Rock, Donald A. – Journal of Educational Measurement, 1993
Exploratory and confirmatory factor analyses were used to explore relationships among existing item types and three new computer-administered item types for the analytical scale of the Graduate Record Examination General Test. Results with 349 students indicate the constructs that the item types are measuring. (SLD)
Descriptors: College Entrance Examinations, College Students, Comparative Testing, Computer Assisted Testing
Peer reviewed
Laird, Barbara B. – Inquiry, 2003
Laird compares two computerized nursing tests and finds a relationship between the two sets of scores. (Contains 2 tables.)
Descriptors: Nursing Education, Nurses, Computer Assisted Testing, Comparative Testing
Anderson, Paul S.; Hyers, Albert D. – 1991
Three descriptive statistics (difficulty, discrimination, and reliability) of multiple-choice (MC) test items were compared to those of a new (1980s) format of machine-scored questions. The new method, answer-bank multi-digit testing (MDT), uses alphabetized lists of up to 1,000 alternatives and approximates the completion style of assessment…
Descriptors: College Students, Comparative Testing, Computer Assisted Testing, Correlation
Peer reviewed
Kent, Thomas H.; Albanese, Mark A. – Evaluation and the Health Professions, 1987
Two types of computer-administered unit quizzes in a systematic pathology course for second-year medical students were compared. Quizzes composed of questions selected on the basis of a student's ability had higher correlations with the final examination than did quizzes composed of questions randomly selected from topic areas. (Author/JAZ)
Descriptors: Adaptive Testing, Comparative Testing, Computer Assisted Testing, Difficulty Level
Peer reviewed
van den Bergh, Huub – Applied Psychological Measurement, 1990
In this study, 590 third graders from 12 Dutch schools took 32 tests measuring 16 semantic Structure-of-Intellect (SI) abilities, along with 1 of 4 reading comprehension tests involving either multiple-choice or open-ended items. Results indicate that the item types for reading comprehension are congeneric with respect to the SI abilities measured. (TJH)
Descriptors: Comparative Testing, Computer Assisted Testing, Construct Validity, Elementary Education
Peer reviewed
Bridgeman, Brent – Journal of Educational Measurement, 1992
Examinees in a regular administration of the quantitative portion of the Graduate Record Examination responded to particular items in a machine-scannable multiple-choice format. Volunteers (n=364) used a computer to answer open-ended counterparts of these items. Scores for both formats demonstrated similar correlational patterns. (SLD)
Descriptors: Answer Sheets, College Entrance Examinations, College Students, Comparative Testing
Hyers, Albert D.; Anderson, Paul S. – 1991
Using matched pairs of geography questions, a new testing method for machine-scored fill-in-the-blank, multiple-digit testing (MDT) questions was compared to the traditional multiple-choice (MC) style. Data were from 118 matched or parallel test items for 4 tests from 764 college students of geography. The new method produced superior results when…
Descriptors: College Students, Comparative Testing, Computer Assisted Testing, Difficulty Level
Anderson, Paul S.; Kanzler, Eileen M. – 1985
Test scores were compared for two types of objective achievement tests--multiple choice tests and the recently developed Multi-Digit Test (MDT) procedure. MDT is an approximation of the fill-in-the-blank technique. Students select their answers from long lists of alphabetized terms, with each answer corresponding to a number from 001 to 999. The…
Descriptors: Achievement Tests, Cloze Procedure, Comparative Testing, Computer Assisted Testing
Peer reviewed
Skakun, Ernest N.; And Others – Educational and Psychological Measurement, 1979
Factor analysis was used to determine whether computerized patient management problems had the same factor structure as multiple-choice examinations and rating scales. The factor structure was found to be similar to that of the examinations but not to that of the rating scales. (JKS)
Descriptors: Comparative Testing, Computer Assisted Testing, Computer Programs, Factor Structure