Showing all 4 results
Peer reviewed
Lissitz, Robert W.; Hou, Xiaodong; Slater, Sharon Cadman – Journal of Applied Testing Technology, 2012
This article investigates several questions regarding the impact of different item formats on measurement characteristics. Constructed-response (CR) items and multiple-choice (MC) items obviously differ in their formats and in the resources needed to score them. As such, they have been the subject of considerable discussion regarding the impact of…
Descriptors: Computer Assisted Testing, Scoring, Evaluation Problems, Psychometrics
Peer reviewed
Rodeck, Elaine M.; Chin, Tzu-Yun; Davis, Susan L.; Plake, Barbara S. – Journal of Applied Testing Technology, 2008
This study examined the relationships between the evaluations obtained from standard setting panelists and changes in ratings between different rounds of a standard setting study that involved setting standards on different language versions of an exam. We investigated panelists' evaluations to determine if their perceptions of the standard…
Descriptors: Mathematics Tests, Standard Setting (Scoring), French, Evaluation Research
Peer reviewed
Ferrara, Steve; Perie, Marianne; Johnson, Eugene – Journal of Applied Testing Technology, 2008
Psychometricians continue to introduce new approaches to setting cut scores for educational assessments in an attempt to improve on current methods. In this paper we describe the Item-Descriptor (ID) Matching method, a method based on IRT item mapping. In ID Matching, test content area experts match items (i.e., their judgments about the knowledge…
Descriptors: Test Results, Test Content, Testing Programs, Educational Testing
Peer reviewed
Kobrin, Jennifer L.; Deng, Hui; Shaw, Emily J. – Journal of Applied Testing Technology, 2007
This study was designed to address two frequent criticisms of the SAT essay--that essay length is the best predictor of scores, and that there is an advantage in using more "sophisticated" examples as opposed to personal experience. The study was based on 2,820 essays from the first three administrations of the new SAT. Each essay was…
Descriptors: Testing Programs, Computer Assisted Testing, Construct Validity, Writing Skills