Showing all 4 results
DeCarlo, Lawrence T. – Educational Testing Service, 2010
A basic consideration in large-scale assessments that use constructed-response (CR) items, such as essays, is how to allocate the essays to the raters who score them. Designs used in practice are incomplete, in that each essay is scored by only a subset of the raters, and also unbalanced, in that the number of essays scored by each rater…
Descriptors: Test Items, Responses, Essay Tests, Scoring
Kim, Sooyeon; Walker, Michael E. – Educational Testing Service, 2011
This study examines the use of subpopulation invariance indices to evaluate the appropriateness of using a multiple-choice (MC) item anchor in mixed-format tests, which include both MC and constructed-response (CR) items. Linking functions were derived in the nonequivalent groups with anchor test (NEAT) design using an MC-only anchor set for 4…
Descriptors: Test Format, Multiple Choice Tests, Test Items, Gender Differences
Sawaki, Yasuyo; Nissan, Susan – Educational Testing Service, 2009
The study investigated the criterion-related validity of the "Test of English as a Foreign Language"™ Internet-based test (TOEFL® iBT) Listening section by examining its relationship to a criterion measure designed to reflect language-use tasks that university students encounter in everyday academic life: listening to academic…
Descriptors: Test Validity, Language Tests, English (Second Language), Computer Assisted Testing
Kostin, Irene – Educational Testing Service, 2004
The purpose of this study is to explore the relationship between a set of item characteristics and the difficulty of TOEFL® dialogue items. Identifying characteristics that are related to item difficulty has the potential to improve the efficiency of the item-writing process. The study employed 365 TOEFL dialogue items, which were coded on 49…
Descriptors: Statistical Analysis, Difficulty Level, Language Tests, English (Second Language)