Showing all 7 results
Peer reviewed
Davis, Larry – Language Testing, 2016
Two factors thought to contribute to consistency in rater scoring judgments were investigated: rater training and experience in scoring. Also considered were the relative effects of scoring rubrics and exemplars on rater performance. Experienced teachers of English (N = 20) scored recorded responses from the TOEFL iBT speaking test prior…
Descriptors: Evaluators, Oral Language, Scores, Language Tests
Peer reviewed
Lee, Shinhye; Winke, Paula – Language Testing, 2018
We investigated how young language learners process their responses on and perceive a computer-mediated, timed speaking test. Twenty 8-, 9-, and 10-year-old non-native English-speaking children (NNSs) and eight same-aged, native English-speaking children (NSs) completed seven computerized sample TOEFL® Primary™ speaking test tasks. We investigated…
Descriptors: Elementary School Students, Second Language Learning, Responses, Computer Assisted Testing
Peer reviewed
Mann, Wolfgang; Roy, Penny; Morgan, Gary – Language Testing, 2016
This study describes the adaptation process of a vocabulary knowledge test for British Sign Language (BSL) into American Sign Language (ASL) and presents results from the first round of pilot testing with 20 deaf native ASL signers. The web-based test assesses the strength of deaf children's vocabulary knowledge by means of different mappings of…
Descriptors: Deafness, Language Skills, Vocabulary Development, American Sign Language
Peer reviewed
Chapelle, Carol A.; Cotos, Elena; Lee, Jooyoung – Language Testing, 2015
Two examples demonstrate an argument-based approach to validation of diagnostic assessment using automated writing evaluation (AWE). "Criterion"® was developed by Educational Testing Service to analyze students' papers grammatically, providing sentence-level error feedback. An interpretive argument was developed for its use as part of…
Descriptors: Diagnostic Tests, Writing Evaluation, Automation, Test Validity
Peer reviewed
Bax, Stephen – Language Testing, 2013
The research described in this article investigates test takers' cognitive processing while completing onscreen IELTS (International English Language Testing System) reading test items. The research aims, among other things, to contribute to our ability to evaluate the cognitive validity of reading test items (Glaser, 1991; Field, in press). The…
Descriptors: Reading Tests, Eye Movements, Cognitive Processes, Language Tests
Peer reviewed
Song, Min-Young – Language Testing, 2008
This paper concerns the divisibility of comprehension subskills measured in L2 listening and reading tests. Motivated by the administration of the new Web-based English as a Second Language Placement Exam (WB-ESLPE) at UCLA, this study addresses the following research questions: first, to what extent do the WB-ESLPE listening and reading items…
Descriptors: Structural Equation Models, Second Language Learning, Reading Tests, Inferences
Peer reviewed
Choi, Inn-Chull; Kim, Kyoung Sung; Boo, Jaeyool – Language Testing, 2003
Utilizing the Test of English Proficiency developed by Seoul National University (TEPS), this study examined the comparability of the paper-based and computer-based versions of the test in terms of content and construct validation, employing content analyses based on corpus linguistic techniques in addition to such statistical analyses as…
Descriptors: Analysis of Variance, Comparative Analysis, Computational Linguistics, Computer Assisted Testing