Showing all 6 results
Peer reviewed
Direct link
Steedle, Jeffrey T.; Ferrara, Steve – Applied Measurement in Education, 2016
As an alternative to rubric scoring, comparative judgment generates essay scores by aggregating decisions about the relative quality of the essays. Comparative judgment eliminates certain scorer biases and potentially reduces training requirements, thereby allowing a large number of judges, including teachers, to participate in essay evaluation.…
Descriptors: Essays, Scoring, Comparative Analysis, Evaluators
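For readers unfamiliar with how comparative judgment turns pairwise decisions into scores, here is a minimal sketch assuming a Bradley-Terry model fitted by simple iterative scaling; the abstract does not say which aggregation model the authors used, and the win counts below are hypothetical.

```python
import numpy as np

def bradley_terry(wins: np.ndarray, n_iter: int = 200) -> np.ndarray:
    """Estimate essay quality from a pairwise win-count matrix.

    wins[i, j] = number of judgments preferring essay i over essay j.
    Returns strengths normalized to sum to 1 (higher = judged better).
    """
    n = wins.shape[0]
    p = np.ones(n)
    total = wins + wins.T  # comparisons made within each pair
    for _ in range(n_iter):
        for i in range(n):
            # MM update: total wins divided by expected comparison weight
            denom = sum(total[i, j] / (p[i] + p[j]) for j in range(n) if j != i)
            if denom > 0:
                p[i] = wins[i].sum() / denom
        p /= p.sum()
    return p

# Hypothetical judgments over 4 essays
wins = np.array([
    [0, 3, 5, 4],
    [2, 0, 4, 3],
    [1, 2, 0, 2],
    [1, 3, 4, 0],
])
print(bradley_terry(wins))  # relative quality estimates
```

Because each update uses only counts of "which essay won," no judge ever assigns an absolute rubric score, which is the bias-reduction point the abstract makes.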
Peer reviewed
Direct link
Suh, Youngsuk; Talley, Anna E. – Applied Measurement in Education, 2015
This study compared and illustrated four differential distractor functioning (DDF) detection methods for analyzing multiple-choice items. The log-linear approach, two item response theory (IRT) model-based approaches with likelihood ratio tests, and the odds ratio approach were compared to examine the congruence among the four DDF detection methods.…
Descriptors: Test Bias, Multiple Choice Tests, Test Items, Methods
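As a rough illustration of the odds ratio approach named above, the sketch below computes an unconditional odds ratio for a single distractor; operational DDF analyses typically condition on ability (for example, Mantel-Haenszel pooling across score strata), and all counts here are hypothetical.

```python
import numpy as np

def distractor_odds_ratio(ref_counts, focal_counts):
    """Unconditional odds-ratio sketch for one distractor.

    ref_counts / focal_counts: (chose_distractor, chose_key) counts
    for the reference and focal groups. Returns the odds ratio and an
    approximate 95% CI on the log scale (0.5 added to each cell as a
    continuity correction).
    """
    a, b = ref_counts    # reference group: distractor, key
    c, d = focal_counts  # focal group: distractor, key
    a, b, c, d = (x + 0.5 for x in (a, b, c, d))
    or_hat = (a * d) / (b * c)
    se_log = np.sqrt(1/a + 1/b + 1/c + 1/d)
    lo, hi = np.exp(np.log(or_hat) + np.array([-1.96, 1.96]) * se_log)
    return or_hat, (lo, hi)

# Hypothetical counts for distractor B on one item
print(distractor_odds_ratio(ref_counts=(40, 160), focal_counts=(70, 130)))
```

An odds ratio far from 1 flags a distractor that draws one group disproportionately, which is the signal all four methods in the study attempt to detect.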
Peer reviewed
Direct link
Kingston, Neal M. – Applied Measurement in Education, 2009
There have been many studies of the comparability of computer-administered and paper-administered tests. Not surprisingly (given the variety of measurement and statistical sampling issues that can affect any one study), the results of such studies have not always been consistent. Moreover, the quality of computer-based test administration systems…
Descriptors: Multiple Choice Tests, Computer Assisted Testing, Printed Materials, Effect Size
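The Effect Size descriptor suggests mode differences summarized as standardized mean differences; a minimal Cohen's d sketch follows, with simulated scores standing in for real paper- and computer-mode data. This illustrates the metric only, not the study's actual synthesis method.

```python
import numpy as np

def cohens_d(paper_scores, computer_scores):
    """Standardized mean difference between administration modes."""
    x = np.asarray(paper_scores, float)
    y = np.asarray(computer_scores, float)
    nx, ny = len(x), len(y)
    # pooled standard deviation across the two groups
    sp = np.sqrt(((nx - 1) * x.var(ddof=1) + (ny - 1) * y.var(ddof=1))
                 / (nx + ny - 2))
    return (x.mean() - y.mean()) / sp

rng = np.random.default_rng(0)
paper = rng.normal(50, 10, 200)     # hypothetical paper-mode scores
computer = rng.normal(49, 10, 200)  # hypothetical computer-mode scores
print(round(cohens_d(paper, computer), 3))
```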
Peer reviewed
Direct link
Kim, Seonghoon; Kolen, Michael J. – Applied Measurement in Education, 2006
Four item response theory linking methods (2 moment methods and 2 characteristic curve methods) were compared to concurrent (CO) calibration, with a focus on the degree of robustness to format effects (FEs) when applying the methods to multidimensional data that reflected the FEs associated with mixed-format tests. Based on the quantification of…
Descriptors: Item Response Theory, Robustness (Statistics), Test Format, Comparative Analysis
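As background on the moment methods compared in this entry, here is a minimal mean/sigma linking sketch under a unidimensional 2PL-style parameterization; the common-item difficulty values are invented, and the study itself concerns multidimensional data, which this sketch does not address.

```python
import numpy as np

def mean_sigma_link(a_new, b_new, b_base_common, b_new_common):
    """Mean/sigma moment linking: rescale new-form item parameters
    onto the base scale using common-item difficulty estimates."""
    # slope and intercept of the linear scale transformation
    A = np.std(b_base_common, ddof=1) / np.std(b_new_common, ddof=1)
    B = np.mean(b_base_common) - A * np.mean(b_new_common)
    # difficulties shift by (A, B); discriminations divide by A
    return np.asarray(a_new) / A, A * np.asarray(b_new) + B

# Hypothetical common-item difficulties estimated separately per form
b_base = [-0.8, 0.1, 0.9, 1.4]
b_new = [-1.0, -0.1, 0.7, 1.2]
a_items, b_items = [1.1, 0.8], [0.3, -0.5]
print(mean_sigma_link(a_items, b_items, b_base, b_new))
```

Characteristic curve methods instead choose A and B to minimize differences between test characteristic curves, which is typically more robust than matching only the first two moments.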
Peer reviewed
Martinez, Michael E. – Applied Measurement in Education, 1993
Figural response (FR) items in architecture were compared with multiple-choice (MC) counterparts for their ability to predict architectural problem-solving proficiency of 33 practicing architects, 34 architecture interns, and 53 architecture students. Although both FR and MC predicted verbal design problem solving, only FR scores predicted…
Descriptors: Architects, Architectural Drafting, College Students, Comparative Analysis
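To make the predictive comparison concrete, the sketch below correlates hypothetical FR and MC scores with a problem-solving criterion; the simulated pattern (FR predictive, MC only weakly so) merely mirrors the abstract's description and is in no way the study's data.

```python
import numpy as np

rng = np.random.default_rng(1)
n = 120
criterion = rng.normal(size=n)  # design problem-solving score (simulated)
fr = 0.6 * criterion + rng.normal(scale=0.8, size=n)  # figural-response score
mc = 0.2 * criterion + rng.normal(scale=0.9, size=n)  # multiple-choice score

r_fr = np.corrcoef(fr, criterion)[0, 1]
r_mc = np.corrcoef(mc, criterion)[0, 1]
print(f"FR validity r = {r_fr:.2f}, MC validity r = {r_mc:.2f}")
```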
Peer reviewed
Miller, Timothy R.; Hirsch, Thomas M. – Applied Measurement in Education, 1992
A procedure for interpreting multiple-discrimination indices from a multidimensional item-response theory analysis is described and demonstrated with responses of 1,635 high school students to a multiple-choice test. The procedure consists of converting discrimination parameter estimates to direction cosines and analyzing the angular distances…
Descriptors: Ability, Cluster Analysis, Comparative Analysis, Estimation (Mathematics)
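The direction-cosine conversion described above is straightforward to sketch: normalize each item's multidimensional discrimination vector to unit length, then take arccosines of pairwise dot products to get angular distances. The discrimination values below are hypothetical.

```python
import numpy as np

def direction_cosines(A):
    """Rows of A are items' multidimensional discrimination vectors.
    Returns each row rescaled to unit length (its direction cosines)."""
    A = np.asarray(A, float)
    return A / np.linalg.norm(A, axis=1, keepdims=True)

def angular_distances(A):
    """Pairwise angles (degrees) between item directions."""
    C = direction_cosines(A)
    cos = np.clip(C @ C.T, -1.0, 1.0)  # guard against rounding past 1
    return np.degrees(np.arccos(cos))

# Hypothetical 2-dimensional discrimination estimates for 3 items
A = [[1.2, 0.2], [0.9, 0.8], [0.1, 1.1]]
print(angular_distances(A).round(1))
```

Items separated by small angles measure similar ability composites, so the angular-distance matrix feeds naturally into the cluster analysis the descriptors mention.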