Showing 1 to 15 of 28 results
Salmani Nodoushan, Mohammad Ali – Online Submission, 2021
It has been argued in the literature on (language) testing that any act of testing/assessment can impact: (1) educators' curriculum design; (2) teachers' teaching practices; and (3) students' learning behaviors. This quality of any given testing situation or act of assessment has been called washback, or backwash if you will. Washback falls into…
Descriptors: Testing Problems, Language Tests, Second Language Learning, Second Language Instruction
Peer reviewed
Direct link
Isbell, Daniel R.; Kremmel, Benjamin – Language Testing, 2020
Administration of high-stakes language proficiency tests has been disrupted in many parts of the world as a result of the 2019 novel coronavirus pandemic. Institutions that rely on test scores have been forced to adapt, and in many cases this means using scores from a different test, or a new online version of an existing test, that can be taken…
Descriptors: Language Tests, High Stakes Tests, Language Proficiency, Second Language Learning
Peer reviewed
Direct link
García Laborda, Jesus; Fernández Álvarez, Miguel – Language Learning & Technology, 2021
This paper compares and analyzes a selection of popular multilevel tests used for quick accreditation of English as a foreign language worldwide. The paper begins by stating the current need for accreditation of English language competence for both academic and professional purposes. It then looks at the tests' defining features and differences. After,…
Descriptors: Comparative Analysis, Second Language Learning, Second Language Instruction, Language Tests
Peer reviewed
PDF on ERIC: Download full text
Toker, Deniz – TESL-EJ, 2019
The central purpose of this paper is to examine validity problems arising from the multiple-choice items and technical passages in the Test of English as a Foreign Language Internet-based Test (TOEFL iBT) reading section, primarily concentrating on construct-irrelevant variance (Messick, 1989). My personal TOEFL iBT experience, along with my…
Descriptors: English (Second Language), Language Tests, Second Language Learning, Computer Assisted Testing
Peer reviewed
Direct link
Lu, Xiaofei – Language Testing, 2017
Research investigating corpora of English learners' language raises new questions about how syntactic complexity is defined theoretically and operationally for second language (L2) writing assessment. I show that syntactic complexity is important in construct definitions and L2 writing rating scales as well as in L2 writing research. I describe…
Descriptors: Syntax, Computational Linguistics, Second Language Learning, Writing Research
Peer reviewed
Direct link
Römer, Ute – Language Testing, 2017
This paper aims to connect recent corpus research on phraseology with current language testing practice. It discusses how corpora and corpus-analytic techniques can illuminate central aspects of speech and help in conceptualizing the notion of lexicogrammar in second language speaking assessment. The description of speech and some of its core…
Descriptors: Language Tests, Grammar, English (Second Language), Second Language Learning
Garcia Laborda, Jesus; Gonzalez Such, Jose; Alvarez Alvarez, Alfredo – Online Submission, 2015
Testing is an issue of increasing importance. While many teachers believe language learning should be communicative, in practice they expect their students to provide evidence of their knowledge. Thus, there is a clear mismatch between the approaches to language teaching and language testing. As a consequence, there is an evident need to change the testing…
Descriptors: Teacher Attitudes, Educational Testing, Foreign Countries, Grade 12
Peer reviewed
Direct link
Hill, Kathryn; McNamara, Tim – Measurement: Interdisciplinary Research and Perspectives, 2015
Those who work in second- and foreign-language testing often find Koretz's concern for validity inferences under high-stakes (VIHS) conditions both welcome and familiar. While the focus of the article is more narrowly on the potential for two instructional responses to test-based accountability, "reallocation" and "coaching,"…
Descriptors: Language Tests, Test Validity, High Stakes Tests, Inferences
Peer reviewed
Direct link
McNamara, Tim; Knoch, Ute – Language Testing, 2012
This paper examines the uptake of Rasch measurement in language testing through a consideration of research published in language testing research journals in the period 1984 to 2009. Following the publication of the first papers on this topic, exploring the potential of the simple Rasch model for the analysis of dichotomous language test data, a…
Descriptors: Language Tests, Testing, English (Second Language), Item Response Theory
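As background to the model named in this abstract (added here for reference, not part of the original record), the simple dichotomous Rasch model gives the probability that person p answers item i correctly as a function of the person's ability θ_p and the item's difficulty b_i:

```latex
P(X_{pi} = 1 \mid \theta_p, b_i) = \frac{e^{\theta_p - b_i}}{1 + e^{\theta_p - b_i}}
```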
Peer reviewed
Direct link
Weigle, Sara Cushing – Assessing Writing, 2013
This article presents considerations for using automated scoring systems to evaluate second language writing. A distinction is made between English language learners in English-medium educational systems and those studying English in their own countries for a variety of purposes, and between learning-to-write and writing-to-learn in a second…
Descriptors: Scoring, Second Language Learning, Second Languages, English Language Learners
Peer reviewed
PDF on ERIC: Download full text
Attali, Yigal; Bridgeman, Brent; Trapani, Catherine – Journal of Technology, Learning, and Assessment, 2010
A generic approach in automated essay scoring produces scores that have the same meaning across all prompts, existing or new, of a writing assessment. This is accomplished by using a single set of linguistic indicators (or features), a consistent way of combining and weighting these features into essay scores, and a focus on features that are not…
Descriptors: Writing Evaluation, Writing Tests, Scoring, Test Scoring Machines
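To make the generic approach described above concrete, here is a minimal sketch added for illustration (not drawn from the study): one fixed set of linguistic features and weights is applied to essays from any prompt, so scores carry the same meaning across prompts. The feature names and weights below are hypothetical, not e-rater's.

```python
# Hypothetical generic (prompt-independent) scoring: the same features and
# weights are used for every prompt, existing or new.
FEATURE_WEIGHTS = {
    "errors_per_100_words": -0.8,      # grammar/usage/mechanics error rate
    "mean_sentence_length": 0.3,
    "vocabulary_sophistication": 0.5,
    "organization": 0.6,
}

def generic_score(features):
    """Weighted sum of feature values, with weights fixed across prompts."""
    return sum(FEATURE_WEIGHTS[name] * value for name, value in features.items())

print(generic_score({
    "errors_per_100_words": 2.0,
    "mean_sentence_length": 18.5,
    "vocabulary_sophistication": 4.2,
    "organization": 3.8,
}))
```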
Peer reviewed
Direct link
Xi, Xiaoming; Higgins, Derrick; Zechner, Klaus; Williamson, David – Language Testing, 2012
This paper compares two alternative scoring methods--multiple regression and classification trees--for an automated speech scoring system used in a practice environment. The two methods were evaluated on two criteria: construct representation and empirical performance in predicting human scores. The empirical performance of the two scoring models…
Descriptors: Scoring, Classification, Weighted Scores, Comparative Analysis
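The comparison described above can be pictured with a small synthetic example. The sketch below is added for illustration only; it uses scikit-learn and made-up features and scores, and is not the study's scoring system. It fits a multiple regression model and a classification tree to predict human speech scores and reports a simple agreement statistic for each.

```python
# Synthetic comparison of multiple regression vs. a classification tree for
# predicting human scores from a few speech features. All data are invented.
import numpy as np
from sklearn.linear_model import LinearRegression
from sklearn.tree import DecisionTreeClassifier

rng = np.random.default_rng(0)
X = rng.normal(size=(200, 3))  # e.g., fluency, pronunciation, prosody measures
true_weights = np.array([0.8, 0.6, 0.4])
human = np.clip(np.round(2 + X @ true_weights + rng.normal(scale=0.5, size=200)), 1, 4)

reg = LinearRegression().fit(X, human)                     # continuous prediction
tree = DecisionTreeClassifier(max_depth=3).fit(X, human)   # discrete score classes

print("regression correlation with human scores:", np.corrcoef(reg.predict(X), human)[0, 1])
print("tree exact agreement with human scores:", (tree.predict(X) == human).mean())
```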
Peer reviewed
Direct link
Yin, Alexander C.; Volkwein, J. Fredericks – New Directions for Institutional Research, 2010
After surveying 1,827 students in their final year at eighty randomly selected two-year and four-year public and private institutions, American Institutes for Research (2006) reported that approximately 30 percent of students in two-year institutions and nearly 20 percent of students in four-year institutions have only basic quantitative…
Descriptors: Standardized Tests, Basic Skills, College Admission, Educational Testing
Quinlan, Thomas; Higgins, Derrick; Wolff, Susanne – Educational Testing Service, 2009
This report evaluates the construct coverage of the e-rater[R] scoring engine. The matter of construct coverage depends on whether one defines writing skill in terms of process or product. Originally, the e-rater engine consisted of a large set of components with a proven ability to predict human holistic scores. By organizing these capabilities…
Descriptors: Guides, Writing Skills, Factor Analysis, Writing Tests
Peer reviewed
Direct link
Lee, Yong-Won; Gentile, Claudia; Kantor, Robert – Applied Linguistics, 2010
The main purpose of the study was to investigate the distinctness and reliability of analytic (or multi-trait) rating dimensions and their relationships to holistic scores and "e-rater"[R] essay feature variables in the context of the TOEFL[R] computer-based test (TOEFL CBT) writing assessment. Data analyzed in the study were holistic…
Descriptors: Writing Evaluation, Writing Tests, Scoring, Essays