Peer reviewed
Xuelan Qiu; Jimmy de la Torre; You-Gan Wang; Jinran Wu – Educational Measurement: Issues and Practice, 2024
Multidimensional forced-choice (MFC) items have been found useful for reducing response biases in personality assessments. However, conventional scoring methods for MFC items result in ipsative data, hindering wider application of the MFC format. In the last decade, a number of item response theory (IRT) models have been developed,…
Descriptors: Item Response Theory, Personality Traits, Personality Measures, Personality Assessment
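The "ipsative data" problem the abstract mentions can be made concrete: when each forced-choice block is scored by ranking its statements, every respondent's trait totals sum to the same constant, so only relative (not absolute) trait standing is recovered. A minimal illustrative sketch, not code from the article; the scoring rule and data are hypothetical:

# Sketch: why rank-based scoring of multidimensional forced-choice (MFC)
# blocks yields ipsative data. Each block presents one statement per trait;
# the respondent ranks them (1 = most like me).
def score_mfc(responses, n_traits=3):
    """responses: list of blocks; each block gives the rank assigned to the
    statement for traits 0..n_traits-1. Returns per-trait totals."""
    totals = [0] * n_traits
    for block in responses:
        for trait, rank in enumerate(block):
            totals[trait] += n_traits - rank  # rank 1 ("most like me") scores highest
    return totals

person_a = [[1, 2, 3], [2, 1, 3]]   # hypothetical rankings over two blocks
person_b = [[3, 2, 1], [3, 1, 2]]
print(score_mfc(person_a), sum(score_mfc(person_a)))  # [3, 3, 0] 6
print(score_mfc(person_b), sum(score_mfc(person_b)))  # [0, 3, 3] 6
# Every respondent's totals sum to the same constant (here 6): the data are ipsative.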
Peer reviewed
Kim, Sooyeon; Walker, Michael E.; McHale, Frederick – Journal of Educational Measurement, 2010
In this study we examined variations of the nonequivalent groups equating design for tests containing both multiple-choice (MC) and constructed-response (CR) items to determine which design was most effective in producing equivalent scores across the two tests to be equated. Using data from a large-scale exam, this study investigated the use of…
Descriptors: Measures (Individuals), Scoring, Equated Scores, Test Bias
Peer reviewed
Hammann, Marcus; Phan, Thi Thanh Hoi; Ehmer, Maike; Grimm, Tobias – Journal of Biological Education, 2008
This study is concerned with different forms of assessment of pupils' skills in experimentation. The findings of three studies are reported. Study 1 investigates whether it is possible to develop reliable multiple-choice tests for the skills of forming hypotheses, designing experiments and analysing experimental data. Study 2 compares scores from…
Descriptors: Multiple Choice Tests, Experiments, Science Process Skills, Skill Analysis
Bottrill, John – Journal of Psychology, 1969
Descriptors: Comparative Testing, Multiple Choice Tests, Rating Scales, Test Construction
Choppin, Bruce H. – International Review of Education, 1969
Descriptors: Comparative Testing, Essay Tests, International Education, Literature Appreciation
Peer reviewed
Frary, Robert B. – Applied Measurement in Education, 1991
The use of the "none-of-the-above" option (NOTA) in 20 college-level multiple-choice tests was evaluated for classes with 100 or more students. Eight academic disciplines were represented, and 295 NOTA and 724 regular test items were used. It appears that the NOTA can be compatible with good classroom measurement. (TJH)
Descriptors: College Students, Comparative Testing, Difficulty Level, Discriminant Analysis
Peer reviewed
Crehan, Kevin D.; And Others – Educational and Psychological Measurement, 1993
Studies with 220 college students found that multiple-choice test items with 3 options are more difficult than those with 4 options, and that items with the none-of-these option are more difficult than those without it. Neither format manipulation affected item discrimination. Implications for test construction are discussed. (SLD)
Descriptors: College Students, Comparative Testing, Difficulty Level, Distractors (Tests)
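For context, the difficulty and discrimination referred to here are the classical item indices: the proportion of examinees answering an item correctly and the item's correlation with the rest of the test. A hedged sketch of the standard classical-test-theory computation (not the study's own analysis; the data are made up):

import numpy as np

def item_stats(scores):
    """scores: 2-D array (persons x items) of 0/1 responses.
    Returns classical difficulty (proportion correct) and
    point-biserial discrimination (item vs. rest-of-test correlation)."""
    scores = np.asarray(scores, dtype=float)
    difficulty = scores.mean(axis=0)
    discrimination = []
    for j in range(scores.shape[1]):
        rest = scores.sum(axis=1) - scores[:, j]   # total score excluding item j
        discrimination.append(np.corrcoef(scores[:, j], rest)[0, 1])
    return difficulty, np.array(discrimination)

# Hypothetical data: 6 examinees, 4 items
X = [[1, 0, 1, 1],
     [1, 1, 1, 0],
     [0, 0, 1, 0],
     [1, 1, 1, 1],
     [0, 0, 0, 0],
     [1, 0, 1, 1]]
p, rpb = item_stats(X)
print(p)    # lower p = more difficult item
print(rpb)  # higher r = better discrimination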
Cizek, Gregory J. – 1991
A commonly accepted rule for developing equated examinations using the common-items non-equivalent groups (CINEG) design is that items common to the two examinations being equated should be identical. The CINEG design calls for two groups of examinees to respond to a set of common items that is included in two examinations. In practice, this rule…
Descriptors: Certification, Comparative Testing, Difficulty Level, Higher Education
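For readers unfamiliar with the design: in CINEG equating, each group takes a different form plus the same set of common items, and the common-item scores are used to link the two forms' score scales. One simple instance is chained linear (mean-sigma) equating, sketched below purely as an illustration under that assumption; it is not necessarily the procedure examined in this report.

import numpy as np

def chained_linear_equate(x, x1, v1, v2, y2):
    """Illustrative chained linear equating for a CINEG design.
    x1, v1: new-form and common-item scores for group 1 (took form X);
    v2, y2: common-item and old-form scores for group 2 (took form Y).
    Form X is linked to the common items in group 1, then the common
    items are linked to form Y in group 2, and the two links are chained."""
    x1, v1, v2, y2 = map(np.asarray, (x1, v1, v2, y2))
    # link X -> V using group 1 means and standard deviations
    v_equiv = v1.mean() + v1.std(ddof=1) / x1.std(ddof=1) * (x - x1.mean())
    # link V -> Y using group 2 means and standard deviations
    return y2.mean() + y2.std(ddof=1) / v2.std(ddof=1) * (v_equiv - v2.mean())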
Green, Kathy – 1978
Forty multiple-choice (MC) statements with three options each, taken from a midterm examination, were converted to 120 true-false (TF) statements identical in content. Test forms (MC and TF) were randomly administered to 50 undergraduates to investigate the validity and internal consistency reliability of the two forms. A Kuder-Richardson formula 20 reliability was…
Descriptors: Achievement Tests, Comparative Testing, Higher Education, Multiple Choice Tests
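For reference, the Kuder-Richardson formula 20 mentioned here is the standard internal-consistency coefficient for dichotomously scored items (the textbook formula, not a result from the report):

\mathrm{KR\text{-}20} = \frac{k}{k-1}\left(1 - \frac{\sum_{i=1}^{k} p_i q_i}{\sigma_X^2}\right)

where k is the number of items, p_i is the proportion answering item i correctly, q_i = 1 - p_i, and \sigma_X^2 is the variance of total scores.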
Wiggins, Grant – Executive Educator, 1994
Instead of relying on standardized test scores and interdistrict comparisons, school systems must develop a more powerful, timely, and local approach to accountability that is truly client-centered and focused on results. Accountability requires giving successful teachers the freedom and opportunity to take effective ideas beyond their own…
Descriptors: Accountability, Comparative Testing, Elementary Secondary Education, Feedback
Breland, Hunter M.; And Others – 1987
Six university English departments collaborated in this examination of the differences between multiple-choice and essay tests in evaluating writing skills. The study also investigated ways the two tools can complement one another, ways to improve cost effectiveness of essay testing, and ways to integrate assessment and the educational process.…
Descriptors: Comparative Testing, Efficiency, Essay Tests, Higher Education
Dowd, Steven B. – 1992
An alternative to multiple-choice (MC) testing is suggested as it pertains to the field of radiologic technology education. General principles for writing MC questions are given and contrasted with a new type of MC question, the alternate-choice (AC) question, in which the answer choices are embedded in the question in a short form that resembles…
Descriptors: Comparative Testing, Difficulty Level, Evaluation Methods, Higher Education
Peer reviewed
Breland, Hunter M.; Gaynor, Judith L. – Journal of Educational Measurement, 1979
Over 2,000 writing samples were collected from four undergraduate institutions and compared, where possible, with scores on a multiple-choice test. High correlations between ratings of the writing samples and multiple-choice test scores were obtained. Samples contributed substantially to the prediction of both college grades and writing…
Descriptors: Achievement Tests, Comparative Testing, Correlation, Essay Tests
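The claim that the writing samples "contributed substantially to the prediction" of grades is an incremental-validity argument: adding the essay rating to a regression that already contains the multiple-choice score should raise the explained variance. A minimal sketch with simulated data (not the study's data or analysis):

import numpy as np

def r_squared(X, y):
    """R^2 from an ordinary least-squares fit (intercept added)."""
    X = np.column_stack([np.ones(len(y)), X])
    beta, *_ = np.linalg.lstsq(X, y, rcond=None)
    resid = y - X @ beta
    return 1 - resid.var() / np.var(y)

# Hypothetical data: multiple-choice scores, essay ratings, and college grades
rng = np.random.default_rng(0)
mc = rng.normal(size=200)
essay = 0.6 * mc + rng.normal(scale=0.8, size=200)
gpa = 0.5 * mc + 0.3 * essay + rng.normal(scale=0.7, size=200)

r2_mc = r_squared(mc.reshape(-1, 1), gpa)
r2_both = r_squared(np.column_stack([mc, essay]), gpa)
print(r2_mc, r2_both)  # the increase reflects the writing sample's incremental contribution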
Peer reviewed
Haladyna, Thomas A. – Applied Measurement in Education, 1992
Several multiple-choice item formats are examined in the current climate of test reform. The reform movement is discussed as it affects use of the following formats: (1) complex multiple-choice; (2) alternate choice; (3) true-false; (4) multiple true-false; and (5) the context dependent item set. (SLD)
Descriptors: Cognitive Psychology, Comparative Testing, Context Effect, Educational Change
Trevisan, Michael S.; Sax, Gilbert – 1991
The purpose of this study was to compare the reliabilities of two-, three-, four-, and five-choice tests using an incremental option paradigm. Test forms were created incrementally, a method approximating actual test construction procedures. Participants were 154 12th-grade students from the Portland (Oregon) area. A 45-item test with two options…
Descriptors: Comparative Testing, Distractors (Tests), Estimation (Mathematics), Grade 12