Showing 1 to 15 of 23 results
Peer reviewed
Rocklin, Thomas; O'Donnell, Angela M. – Journal of Educational Psychology, 1987
An experiment contrasted a variant of computerized adaptive testing, self-adapted testing, with two traditional tests. Participants completed a self-report of test anxiety and were randomly assigned to take one of the three tests of verbal ability. Subjects generally chose more difficult items as the test progressed. (Author/LMO)
Descriptors: Adaptive Testing, Comparative Testing, Computer Assisted Testing, Difficulty Level
Peer reviewed
Frary, Robert B. – Applied Measurement in Education, 1991
The use of the "none-of-the-above" option (NOTA) in 20 college-level multiple-choice tests was evaluated for classes with 100 or more students. Eight academic disciplines were represented, and 295 NOTA and 724 regular test items were used. It appears that the NOTA can be compatible with good classroom measurement. (TJH)
Descriptors: College Students, Comparative Testing, Difficulty Level, Discriminant Analysis
Lunz, Mary E.; Stahl, John A. – 1990
Three examinations administered to medical students were analyzed to determine differences among severities of judges' assessments and among grading periods. The examinations included essay, clinical, and oral forms of the tests. Twelve judges graded the three essays for 32 examinees during a 4-day grading session, which was divided into eight…
Descriptors: Clinical Diagnosis, Comparative Testing, Difficulty Level, Essay Tests
Peer reviewed
Crehan, Kevin D.; And Others – Educational and Psychological Measurement, 1993
Studies with 220 college students found that multiple-choice test items with three options are more difficult than those with four options, and that items including a none-of-these option are more difficult than those without it. Neither format manipulation affected item discrimination. Implications for test construction are discussed. (SLD)
Descriptors: College Students, Comparative Testing, Difficulty Level, Distractors (Tests)
PDF pending restoration
Anderson, Paul S.; Hyers, Albert D. – 1991
Three descriptive statistics (difficulty, discrimination, and reliability) of multiple-choice (MC) test items were compared to those of a new (1980s) format of machine-scored questions. The new method, answer-bank multi-digit testing (MDT), uses alphabetized lists of up to 1,000 alternatives and approximates the completion style of assessment…
Descriptors: College Students, Comparative Testing, Computer Assisted Testing, Correlation
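The three classical statistics named in this abstract (difficulty, discrimination, and reliability) are standard item-analysis quantities. A minimal illustrative sketch, not drawn from the study itself and using a hypothetical 0/1-scored response matrix, might compute them as follows:

```python
import numpy as np

def item_analysis(responses):
    """Classical item statistics for a 0/1-scored response matrix
    (rows = examinees, columns = items)."""
    responses = np.asarray(responses, dtype=float)
    n_people, n_items = responses.shape
    total = responses.sum(axis=1)

    # Difficulty: proportion of examinees answering each item correctly.
    difficulty = responses.mean(axis=0)

    # Discrimination: point-biserial correlation between each item and
    # the total score with that item removed (corrected item-total r).
    discrimination = np.array([
        np.corrcoef(responses[:, j], total - responses[:, j])[0, 1]
        for j in range(n_items)
    ])

    # Reliability: KR-20 (Cronbach's alpha for dichotomous items).
    item_var = difficulty * (1 - difficulty)
    kr20 = (n_items / (n_items - 1)) * (1 - item_var.sum() / total.var())

    return difficulty, discrimination, kr20
```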
Cizek, Gregory J. – 1991
A commonly accepted rule for developing equated examinations using the common-items non-equivalent groups (CINEG) design is that items common to the two examinations being equated should be identical. The CINEG design calls for two groups of examinees to respond to a set of common items that is included in two examinations. In practice, this rule…
Descriptors: Certification, Comparative Testing, Difficulty Level, Higher Education
Wise, Steven L.; And Others – 1991
According to item response theory (IRT), examinee ability estimation is independent of the particular set of test items administered from a calibrated pool. Although the most popular application of this feature of IRT is computerized adaptive (CA) testing, a recently proposed alternative is self-adapted (SA) testing, in which examinees choose the…
Descriptors: Ability Identification, Adaptive Testing, College Students, Comparative Testing
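The invariance property cited here is the textbook IRT result: once item difficulties are calibrated, an examinee's ability can be estimated by maximum likelihood from whatever subset of the pool was administered. A minimal sketch under the Rasch (one-parameter) model, with hypothetical inputs:

```python
import numpy as np
from scipy.optimize import minimize_scalar

def rasch_mle(responses, difficulties):
    """Maximum-likelihood ability estimate under the Rasch model:
    P(correct) = 1 / (1 + exp(-(theta - b)))."""
    responses = np.asarray(responses, dtype=float)
    b = np.asarray(difficulties, dtype=float)

    def neg_log_lik(theta):
        p = 1.0 / (1.0 + np.exp(-(theta - b)))
        return -np.sum(responses * np.log(p) + (1 - responses) * np.log(1 - p))

    return minimize_scalar(neg_log_lik, bounds=(-4, 4), method="bounded").x
```

Because the calibrated difficulties enter the likelihood directly, estimates from different item subsets target the same underlying ability, which is what licenses both computer-selected and examinee-selected item sequences.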
Roos, Linda L.; And Others – 1992
Computerized adaptive (CA) testing uses an algorithm to match examinee ability to item difficulty, while self-adapted (SA) testing allows the examinee to choose the difficulty of his or her items. Research comparing SA and CA testing has shown that examinees experience lower anxiety and improved performance with SA testing. All previous research…
Descriptors: Ability Identification, Adaptive Testing, Algebra, Algorithms
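The two designs compared in these studies differ only in who sets the next item's difficulty. A schematic sketch of the contrast (pool structure and function names are hypothetical, not from the paper):

```python
import numpy as np

def next_item_ca(theta_hat, pool_difficulties, administered):
    """Computerized adaptive rule: pick the unused item whose difficulty
    is closest to the current ability estimate (maximum information
    under the Rasch model)."""
    best, best_dist = None, np.inf
    for j, b in enumerate(pool_difficulties):
        if j not in administered and abs(b - theta_hat) < best_dist:
            best, best_dist = j, abs(b - theta_hat)
    return best

def next_item_sa(chosen_level, items_by_level, administered):
    """Self-adapted rule: the examinee chooses a difficulty level;
    the computer samples an unused item from that level."""
    candidates = [j for j in items_by_level[chosen_level] if j not in administered]
    return np.random.choice(candidates) if candidates else None
```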
Dowd, Steven B. – 1992
An alternative to multiple-choice (MC) testing is suggested as it pertains to the field of radiologic technology education. General principles for writing MC questions are given and contrasted with a new type of MC question, the alternate-choice (AC) question, in which the answer choices are embedded in the question in a short form that resembles…
Descriptors: Comparative Testing, Difficulty Level, Evaluation Methods, Higher Education
Chissom, Brad; Chukabarah, Prince C. O. – 1985
The comparative effects of various sequences of test items were examined for over 900 graduate students enrolled in an educational research course at the University of Alabama, Tuscaloosa. The experiment, which was conducted four times using four separate tests, presented three different arrangements of 50 multiple-choice items: (1)…
Descriptors: Analysis of Variance, Comparative Testing, Difficulty Level, Graduate Students
Peer reviewed
Freedle, Roy; Kostin, Irene – Journal of Educational Measurement, 1990
The importance of item difficulty (equated delta) was explored as a predictor of differential item functioning of Black versus White examinees for 4 verbal item types using 13 Graduate Record Examination forms and 11 Scholastic Aptitude Test forms. Several significant racial differences were found. (TJH)
Descriptors: Black Students, College Bound Students, College Entrance Examinations, Comparative Testing
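For context, the (equated) delta scale referenced here is the conventional ETS difficulty index, which maps an item's proportion correct p through the inverse normal CDF so that harder items receive larger deltas; the equating step then places deltas from different forms on a common scale. In its standard form:

```latex
\Delta = 13 + 4\,\Phi^{-1}(1 - p)
```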
Peer reviewed
Kent, Thomas H.; Albanese, Mark A. – Evaluation and the Health Professions, 1987
Two types of computer-administered unit quizzes in a systematic pathology course for second-year medical students were compared. Quizzes composed of questions selected on the basis of a student's ability had higher correlations with the final examination than did quizzes composed of questions randomly selected from topic areas. (Author/JAZ)
Descriptors: Adaptive Testing, Comparative Testing, Computer Assisted Testing, Difficulty Level
Vispoel, Walter P.; And Others – 1992
The effects of review options (the opportunity for examinees to review and change answers) on the magnitude, reliability, efficiency, and concurrent validity of scores obtained from three types of computerized vocabulary tests (fixed item, adaptive, and self-adapted) were studied. Subjects were 97 college students at a large midwestern university…
Descriptors: Adaptive Testing, College Students, Comparative Testing, Computer Assisted Testing
Pike, Gary – 1989
Responses to American College Test College Outcome Measures Program (ACT-COMP) items by 481 black and 9,237 white students at the University of Tennessee (Knoxville) were analyzed using F. Samejima's graded model to determine the level of differential item functioning (DIF). Students had been tested using Form 8 of the ACT-COMP objective test…
Descriptors: Black Students, College Entrance Examinations, College Students, Comparative Testing
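Samejima's graded model, applied here to polytomous ACT-COMP items, expresses the probability of responding in score category k as the difference of adjacent cumulative boundary curves (standard form, with per-item parameters):

```latex
P_{k}^{*}(\theta) = \frac{1}{1 + e^{-a(\theta - b_{k})}},
\qquad
P_{k}(\theta) = P_{k}^{*}(\theta) - P_{k+1}^{*}(\theta),
```

with boundary conventions P*_0 = 1 and P*_{m+1} = 0, discrimination a, and ordered category thresholds b_1 < … < b_m; differential item functioning shows up as group differences in these parameters.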
PDF pending restoration
Hyers, Albert D.; Anderson, Paul S. – 1991
Using matched pairs of geography questions, a new testing method for machine-scored fill-in-the-blank, multiple-digit testing (MDT) questions was compared to the traditional multiple-choice (MC) style. Data were from 118 matched or parallel test items for 4 tests from 764 college students of geography. The new method produced superior results when…
Descriptors: College Students, Comparative Testing, Computer Assisted Testing, Difficulty Level