Showing 46 to 60 of 66 results
Peer reviewed
Xi, Xiaoming; Higgins, Derrick; Zechner, Klaus; Williamson, David – Language Testing, 2012
This paper compares two alternative scoring methods--multiple regression and classification trees--for an automated speech scoring system used in a practice environment. The two methods were evaluated on two criteria: construct representation and empirical performance in predicting human scores. The empirical performance of the two scoring models…
Descriptors: Scoring, Classification, Weighted Scores, Comparative Analysis
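A minimal sketch of the comparison described in this abstract — multiple regression versus a classification/regression tree predicting human holistic scores — assuming invented features and data, not the authors' actual scoring system:

```python
import numpy as np
from scipy.stats import pearsonr
from sklearn.linear_model import LinearRegression
from sklearn.model_selection import train_test_split
from sklearn.tree import DecisionTreeRegressor

rng = np.random.default_rng(0)

# Hypothetical delivery features (e.g., speaking rate, pause length, ASR
# confidence) and human holistic scores on a 1-4 scale; invented for illustration.
X = rng.normal(size=(200, 3))
human = np.clip(np.round(2.5 + X @ [0.5, -0.3, 0.4] + rng.normal(0, 0.5, 200)), 1, 4)

# Evaluate each scoring model's agreement with human scores on held-out data.
X_tr, X_te, y_tr, y_te = train_test_split(X, human, random_state=0)
for model in (LinearRegression(), DecisionTreeRegressor(max_depth=3, random_state=0)):
    pred = model.fit(X_tr, y_tr).predict(X_te)
    r, _ = pearsonr(pred, y_te)
    print(f"{type(model).__name__}: r = {r:.3f}")
```

The trade-off the paper weighs is visible even in this toy setup: the regression weights map directly onto the construct (one interpretable coefficient per feature), while the tree partitions examinees into score groups.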
Peer reviewed
Kokhan, Kateryna – Language Testing, 2012
The English Placement Test (EPT) at the University of Illinois at Urbana-Champaign (UIUC) is designed to provide an accurate placement (or exemption) of international students into the ESL writing and pronunciation classes. Over the last five years, UIUC has experienced an increase in the number of international students taking the EPT. Because of…
Descriptors: Student Placement, English (Second Language), Language Tests, Scores
Peer reviewed
Bax, Stephen – Language Testing, 2013
The research described in this article investigates test takers' cognitive processing while completing onscreen IELTS (International English Language Testing System) reading test items. The research aims, among other things, to contribute to our ability to evaluate the cognitive validity of reading test items (Glaser, 1991; Field, in press). The…
Descriptors: Reading Tests, Eye Movements, Cognitive Processes, Language Tests
Peer reviewed
Cho, Yeonsuk; Bridgeman, Brent – Language Testing, 2012
This study examined the relationship between scores on the TOEFL Internet-Based Test (TOEFL iBT[R]) and academic performance in higher education, defined here in terms of grade point average (GPA). The academic records for 2594 undergraduate and graduate students were collected from 10 universities in the United States. The data consisted of…
Descriptors: Evidence, Academic Records, Graduate Students, Universities
Peer reviewed
Bridgeman, Brent; Powers, Donald; Stone, Elizabeth; Mollaun, Pamela – Language Testing, 2012
Scores assigned by trained raters and by an automated scoring system (SpeechRater[TM]) on the speaking section of the TOEFL iBT[TM] were validated against a communicative competence criterion. Specifically, a sample of 555 undergraduate students listened to speech samples from 184 examinees who took the Test of English as a Foreign Language…
Descriptors: Undergraduate Students, Speech Communication, Rating Scales, Scoring
Peer reviewed
Weigle, Sara Cushing – Language Testing, 2010
Automated scoring has the potential to dramatically reduce the time and costs associated with the assessment of complex skills such as writing, but its use must be validated against a variety of criteria for it to be accepted by test users and stakeholders. This study approaches validity by comparing human and automated scores on responses to…
Descriptors: Correlation, Validity, Writing Ability, English (Second Language)
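Validation studies of this kind typically report agreement between human and automated scores using correlation plus exact/adjacent agreement and quadratically weighted kappa. A short sketch of those statistics on invented score vectors (not the data from this study):

```python
import numpy as np
from scipy.stats import pearsonr
from sklearn.metrics import cohen_kappa_score

human = np.array([3, 4, 2, 5, 4, 3, 2, 4, 5, 3])    # human holistic ratings (1-5)
machine = np.array([3, 4, 3, 5, 4, 3, 2, 3, 5, 4])  # automated scores on the same scale

r, _ = pearsonr(human, machine)
qwk = cohen_kappa_score(human, machine, weights="quadratic")
exact = np.mean(human == machine)
adjacent = np.mean(np.abs(human - machine) <= 1)
print(f"r={r:.2f}  QWK={qwk:.2f}  exact={exact:.0%}  adjacent={adjacent:.0%}")
```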
Peer reviewed
Chapelle, Carol A.; Chung, Yoo-Ree; Hegelheimer, Volker; Pendar, Nick; Xu, Jing – Language Testing, 2010
This study piloted test items that will be used in a computer-delivered and scored test of productive grammatical ability in English as a second language (ESL). Findings from research on learners' development of morphosyntactic, syntactic, and functional knowledge were synthesized to create a framework of grammatical features. We outline the…
Descriptors: Test Items, Grammar, Developmental Stages, Computer Assisted Testing
Peer reviewed
Enright, Mary K.; Quinlan, Thomas – Language Testing, 2010
E-rater[R] is an automated essay scoring system that uses natural language processing techniques to extract features from essays and to statistically model human holistic ratings. Educational Testing Service has investigated the use of e-rater, in conjunction with human ratings, to score one of the two writing tasks on the TOEFL-iBT[R] writing…
Descriptors: Second Language Learning, Scoring, Essays, Language Processing
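One common way to use an automated engine "in conjunction with" human ratings is to average the two scores and route large disagreements to a second human read. The sketch below illustrates that general design with a hypothetical adjudication threshold; it is not ETS's operational TOEFL procedure:

```python
# Minimal sketch of a human-plus-machine scoring design; the threshold of 1.0
# is an assumption for illustration, not an operational value.
def combined_score(human: float, machine: float, threshold: float = 1.0):
    """Average the two scores; flag the response if they diverge too much."""
    if abs(human - machine) > threshold:
        return None, "send to second human rater"
    return (human + machine) / 2, "accepted"

print(combined_score(4.0, 3.5))  # (3.75, 'accepted')
print(combined_score(4.0, 2.0))  # (None, 'send to second human rater')
```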
Peer reviewed
Chapelle, Carol A.; Chung, Yoo-Ree – Language Testing, 2010
Advances in natural language processing (NLP) and automatic speech recognition and processing technologies offer new opportunities for language testing. Despite their potential uses on a range of language test item types, relatively little work has been done in this area, and it is therefore not well understood by test developers, researchers or…
Descriptors: Test Items, Computational Linguistics, Testing, Language Tests
Peer reviewed
Alderson, J. Charles – Language Testing, 2009
In this article, the author reviews the TOEFL iBT, the latest version of the TOEFL, whose history stretches back to 1961. The TOEFL iBT was introduced in the USA, Canada, France, Germany and Italy in late 2005. Currently the TOEFL test is offered in two testing formats: (1) Internet-based testing (iBT); and (2) paper-based testing (PBT)…
Descriptors: Oral Language, Writing Tests, Listening Comprehension Tests, Test Reviews
Peer reviewed
Sawaki, Yasuyo; Stricker, Lawrence J.; Oranje, Andreas H. – Language Testing, 2009
This construct validation study investigated the factor structure of the Test of English as a Foreign Language[TM] Internet-based test (TOEFL[R] iBT). An item-level confirmatory factor analysis was conducted for a test form completed by participants in a field study. A higher-order factor model was identified, with a higher-order general factor…
Descriptors: Speech Communication, Construct Validity, Factor Structure, Factor Analysis
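A standard formulation of the higher-order model this abstract describes: each item score loads on a first-order skill factor, and the skill factors in turn load on a general proficiency factor (the specific loadings are what the field-study data estimate):

```latex
x_{ij} = \lambda_{ij} F_j + \varepsilon_{ij}, \qquad
F_j = \gamma_j G + \zeta_j, \qquad
j \in \{\text{Reading, Listening, Speaking, Writing}\}
```

Here the \(\lambda_{ij}\) are first-order item loadings, the \(\gamma_j\) are second-order loadings on the general factor \(G\), and \(\varepsilon_{ij}\), \(\zeta_j\) are residuals.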
Peer reviewed
Theunissen, T. J. J. M. – Language Testing, 1987
Describes ways that language test design can be computerized, and illustrates some test construction methods derived from the field of operations research. (CB)
Descriptors: Computer Assisted Testing, Language Tests, Operations Research, Second Language Learning
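The operations-research view of test construction is often expressed as a 0-1 program: select items to maximize test information subject to constraints such as a fixed length. A sketch of that general formulation with invented item data (this is not Theunissen's exact model):

```python
import numpy as np
from scipy.optimize import Bounds, LinearConstraint, milp

info = np.array([0.8, 0.5, 1.2, 0.9, 0.4, 1.1])  # item information at the cut score
n, test_length = len(info), 3

res = milp(
    c=-info,                                      # milp minimizes, so negate to maximize
    constraints=LinearConstraint(np.ones(n), test_length, test_length),
    integrality=np.ones(n),                       # all decision variables are integer
    bounds=Bounds(0, 1),                          # ...and bounded to {0, 1}
)
print("selected items:", np.flatnonzero(res.x > 0.5))  # the 3 most informative items
```

Real test-assembly models add rows to the constraint matrix for content coverage, item-type quotas, word counts, and so on, but the structure stays the same.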
Peer reviewed
Song, Min-Young – Language Testing, 2008
This paper concerns the divisibility of comprehension subskills measured in L2 listening and reading tests. Motivated by the administration of the new Web-based English as a Second Language Placement Exam (WB-ESLPE) at UCLA, this study addresses the following research questions: first, to what extent do the WB-ESLPE listening and reading items…
Descriptors: Structural Equation Models, Second Language Learning, Reading Tests, Inferences
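Divisibility questions of this kind are typically settled by fitting a single-factor model against a correlated two-factor (listening vs. reading) model and comparing them with a chi-square difference test; the sketch below shows the form of that comparison, not Song's reported statistics:

```latex
\Delta\chi^2 = \chi^2_{\text{1-factor}} - \chi^2_{\text{2-factor}}, \qquad
\Delta df = df_{\text{1-factor}} - df_{\text{2-factor}}
```

A significant \(\Delta\chi^2\) on \(\Delta df\) degrees of freedom favors the two-factor model, i.e., divisible subskills.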
Peer reviewed
Fulcher, Glenn – Language Testing, 2003
Describes a three-phase process model for interface design, drawing on practices developed in the software industry and adapting them for computer-based language tests. Describes good practice in initial design, emphasizes the importance of usability testing, and argues that only through following a principled approach to interface design can the…
Descriptors: Computer Assisted Testing, Computer Software, Language Tests, Models
Peer reviewed
Ockey, Gary J. – Language Testing, 2007
Over the past decade, listening comprehension tests have been converting to computer-based tests that include visual input. However, little research is available to suggest how test takers engage with different types of visuals on such tests. The present study compared a series of still images to video in academic computer-based tests to determine…
Descriptors: Listening Comprehension, Listening Comprehension Tests, Computer Assisted Testing, Native Speakers