Publication Date
  In 2025: 0
  Since 2024: 0
  Since 2021 (last 5 years): 0
  Since 2016 (last 10 years): 1
  Since 2006 (last 20 years): 4
Descriptor
  Comparative Analysis: 6
  Multiple Choice Tests: 6
  Correlation: 3
  Item Response Theory: 3
  Effect Size: 2
  Test Items: 2
  Ability: 1
  Architects: 1
  Architectural Drafting: 1
  Cluster Analysis: 1
  College Mathematics: 1
Source
  Applied Measurement in…: 6
Author
  Ferrara, Steve: 1
  Hirsch, Thomas M.: 1
  Kim, Seonghoon: 1
  Kingston, Neal M.: 1
  Kolen, Michael J.: 1
  Martinez, Michael E.: 1
  Miller, Timothy R.: 1
  Steedle, Jeffrey T.: 1
  Suh, Youngsuk: 1
  Talley, Anna E.: 1
Publication Type
  Journal Articles: 6
  Reports - Evaluative: 3
  Reports - Research: 3
  Information Analyses: 1
Education Level
  Elementary Secondary Education: 1
  Higher Education: 1
  Postsecondary Education: 1
Steedle, Jeffrey T.; Ferrara, Steve – Applied Measurement in Education, 2016
As an alternative to rubric scoring, comparative judgment generates essay scores by aggregating decisions about the relative quality of the essays. Comparative judgment eliminates certain scorer biases and potentially reduces training requirements, thereby allowing a large number of judges, including teachers, to participate in essay evaluation.…
Descriptors: Essays, Scoring, Comparative Analysis, Evaluators
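Aggregating pairwise quality judgments into essay scores is commonly done with a Bradley-Terry model; the abstract does not say which aggregation model the authors used, so the sketch below (fit with Zermelo's iterative algorithm, on made-up judgments) is illustrative only.

```python
from collections import defaultdict

def bradley_terry(comparisons, iters=200):
    """Estimate essay quality scores from pairwise judgments.

    comparisons: list of (winner, loser) pairs, one per judge decision.
    Returns a dict mapping each essay id to a strength estimate
    (higher = judged better), via Zermelo's iterative algorithm.
    """
    wins = defaultdict(float)
    pairs = defaultdict(int)  # how often each unordered pair was compared
    items = set()
    for w, l in comparisons:
        wins[w] += 1
        pairs[frozenset((w, l))] += 1
        items |= {w, l}
    p = {i: 1.0 for i in items}
    for _ in range(iters):
        new = {}
        for i in items:
            denom = sum(
                n / (p[i] + p[j])
                for pair, n in pairs.items()
                if i in pair
                for j in pair - {i}
            )
            new[i] = wins[i] / denom if denom else p[i]
        total = sum(new.values())
        p = {i: v * len(items) / total for i, v in new.items()}  # normalize
    return p

# Illustrative judgments: A usually beats B, B usually beats C, etc.
scores = bradley_terry([
    ("A", "B"), ("A", "B"), ("B", "A"),
    ("B", "C"), ("B", "C"), ("C", "B"),
    ("A", "C"), ("A", "C"), ("C", "A"),
])
```

Because scores come only from relative decisions, no judge ever needs to apply an absolute rubric, which is what makes large, lightly trained judge pools feasible.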
Suh, Youngsuk; Talley, Anna E. – Applied Measurement in Education, 2015
This study compared and illustrated four differential distractor functioning (DDF) detection methods for analyzing multiple-choice items. The log-linear approach, two item response theory model-based approaches with likelihood ratio tests, and the odds ratio approach were compared to examine the congruence among the four DDF detection methods.…
Descriptors: Test Bias, Multiple Choice Tests, Test Items, Methods
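Of the four methods, the odds ratio approach is the simplest to illustrate: among examinees who missed an item, compare the odds of selecting a particular distractor across groups. The sketch below, with made-up counts, is a simplified version of that idea, not necessarily the exact specification compared in the study.

```python
def distractor_odds_ratio(ref_chose, ref_other, focal_chose, focal_other):
    """Odds ratio for selecting a given distractor, reference vs. focal
    group, computed among examinees who answered the item incorrectly.

    A value near 1 suggests no differential distractor functioning (DDF);
    values far from 1 flag the distractor for review. Adds 0.5 to each
    cell (Haldane-Anscombe correction) to guard against zero counts.
    """
    a, b = ref_chose + 0.5, ref_other + 0.5
    c, d = focal_chose + 0.5, focal_other + 0.5
    return (a / b) / (c / d)

# Illustrative counts: 40 of 100 reference-group examinees who missed
# the item chose distractor C, versus 70 of 100 focal-group examinees.
or_c = distractor_odds_ratio(40, 60, 70, 30)
```

In practice the counts would be stratified by ability (e.g., total score) before pooling, Mantel-Haenszel style, so that group differences in proficiency are not mistaken for DDF.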
Kingston, Neal M. – Applied Measurement in Education, 2009
There have been many studies of the comparability of computer-administered and paper-administered tests. Not surprisingly, given the variety of measurement and statistical sampling issues that can affect any one study, the results of such studies have not always been consistent. Moreover, the quality of computer-based test administration systems…
Descriptors: Multiple Choice Tests, Computer Assisted Testing, Printed Materials, Effect Size
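Comparability studies like those reviewed here typically summarize the administration-mode difference as a standardized mean difference (Cohen's d with a pooled standard deviation). A minimal sketch with illustrative scores, not data from the review:

```python
import math
from statistics import mean, stdev

def cohens_d(x, y):
    """Standardized mean difference (Cohen's d) with pooled SD,
    e.g. computer-administered vs. paper-administered score samples."""
    nx, ny = len(x), len(y)
    pooled = math.sqrt(
        ((nx - 1) * stdev(x) ** 2 + (ny - 1) * stdev(y) ** 2)
        / (nx + ny - 2)
    )
    return (mean(x) - mean(y)) / pooled

# Illustrative score samples for the two administration modes.
d = cohens_d([52, 55, 58, 61], [50, 53, 56, 59])
```

Expressing each study's mode difference on this common scale is what lets effect sizes from studies with different tests and score metrics be compared at all.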
Kim, Seonghoon; Kolen, Michael J. – Applied Measurement in Education, 2006
Four item response theory linking methods (two moment methods and two characteristic curve methods) were compared to concurrent (CO) calibration, with a focus on the degree of robustness to format effects (FEs) when applying the methods to multidimensional data that reflected the FEs associated with mixed-format tests. Based on the quantification of…
Descriptors: Item Response Theory, Robustness (Statistics), Test Format, Comparative Analysis
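The moment methods in IRT linking are conventionally mean/mean and mean/sigma; the abstract does not name the specific pair used, so here is a minimal mean-sigma sketch, assuming 2PL anchor-item difficulty estimates on two forms (the numbers are illustrative):

```python
from statistics import mean, stdev

def mean_sigma_link(b_new, b_old):
    """Mean-sigma linking constants placing the new form's scale onto
    the old form's scale: theta_old = A * theta_new + B.

    b_new, b_old: difficulty estimates for the common (anchor) items
    as calibrated on the new and old forms, respectively.
    """
    A = stdev(b_old) / stdev(b_new)
    B = mean(b_old) - A * mean(b_new)
    return A, B

def rescale_item(a, b, A, B):
    """Rescale a 2PL item's discrimination a and difficulty b."""
    return a / A, A * b + B

# Illustrative anchor-item difficulties on the two forms.
A, B = mean_sigma_link(b_new=[-1.0, 0.0, 1.0], b_old=[-0.8, 0.2, 1.2])
```

Concurrent calibration avoids this transformation step by estimating both forms' parameters in a single run, which is precisely why its robustness to format effects is the interesting comparison.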

Martinez, Michael E. – Applied Measurement in Education, 1993
Figural response (FR) items in architecture were compared with multiple-choice (MC) counterparts for their ability to predict architectural problem-solving proficiency of 33 practicing architects, 34 architecture interns, and 53 architecture students. Although both FR and MC predicted verbal design problem solving, only FR scores predicted…
Descriptors: Architects, Architectural Drafting, College Students, Comparative Analysis

Miller, Timothy R.; Hirsch, Thomas M. – Applied Measurement in Education, 1992
A procedure for interpreting multiple-discrimination indices from a multidimensional item-response theory analysis is described and demonstrated with responses of 1,635 high school students to a multiple-choice test. The procedure consists of converting discrimination parameter estimates to direction cosines and analyzing the angular distances…
Descriptors: Ability, Cluster Analysis, Comparative Analysis, Estimation (Mathematics)
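The core conversion described above, from a multidimensional discrimination vector to direction cosines, and then to the angle between two items' directions of best measurement, can be sketched as follows (the two-dimensional vectors are illustrative, not values from the study):

```python
import math

def direction_cosines(a):
    """Direction cosines of an item's discrimination vector a: the
    cosines of the angles the item's direction of best measurement
    makes with each latent axis."""
    norm = math.sqrt(sum(x * x for x in a))
    return [x / norm for x in a]

def angular_distance(a1, a2):
    """Angle in degrees between two items' measurement directions,
    from the dot product of their direction cosines."""
    c1, c2 = direction_cosines(a1), direction_cosines(a2)
    dot = sum(x * y for x, y in zip(c1, c2))
    return math.degrees(math.acos(max(-1.0, min(1.0, dot))))

# Two illustrative 2-D discrimination vectors: the first item measures
# only the first dimension; the second loads equally on both.
angle = angular_distance([1.2, 0.0], [0.9, 0.9])
```

A matrix of such angular distances between all item pairs is what the described procedure then feeds into cluster analysis to group items that measure similar composites of ability.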