Publication Date
In 2025: 0
Since 2024: 1
Since 2021 (last 5 years): 5
Since 2016 (last 10 years): 8
Since 2006 (last 20 years): 15
Descriptor
Automation: 16
Scoring: 14
English (Second Language): 13
Language Tests: 13
Second Language Learning: 10
Essays: 7
Computer Assisted Testing: 5
Essay Tests: 5
Foreign Countries: 5
Test Scoring Machines: 5
Speech Tests: 4
…
Author
Attali, Yigal: 3
Blanchard, Daniel: 1
Burstein, Jill: 1
Buzick, Heather: 1
Casabianca, Jodi M.: 1
Chen Li: 1
Cheng, Liying: 1
Cheng, Yan: 1
Crossley, Scott A.: 1
Daniels, Paul: 1
Davis, Larry: 1
…
Publication Type
Journal Articles: 13
Reports - Research: 13
Reports - Evaluative: 2
Information Analyses: 1
Tests/Questionnaires: 1
Education Level
Higher Education: 4
Postsecondary Education: 4
Secondary Education: 3
High Schools: 2
Junior High Schools: 2
Middle Schools: 2
Elementary Education: 1
Elementary Secondary Education: 1
Grade 10: 1
Grade 11: 1
Grade 12: 1
…
Location
China: 2
California (Los Angeles): 1
Canada: 1
Georgia: 1
Germany: 1
India: 1
Indiana: 1
Iowa: 1
Michigan: 1
Minnesota: 1
New York: 1
…
Assessments and Surveys
Test of English as a Foreign Language: 16
Graduate Record Examinations: 2
Graduate Management Admission Test: 1
Ikkyu Choi; Jiangang Hao; Chen Li; Michael Fauss; Jakub Novák – ETS Research Report Series, 2024
A frequently encountered security issue in writing tests is nonauthentic text submission: test takers submit texts that are not their own but rather are copies of texts prepared by someone else. In this report, we propose AutoESD, a human-in-the-loop, automated system to detect nonauthentic texts in large-scale writing tests, and report its…
Descriptors: Writing Tests, Automation, Cheating, Plagiarism
Qian, Leyi; Zhao, Yali; Cheng, Yan – Journal of Educational Computing Research, 2020
Automated writing scoring can provide not only holistic scores but also instant, corrective feedback on L2 learners' writing quality. It has been increasing in use throughout China and internationally. Given these advantages, the past several years have witnessed the emergence and growth of writing evaluation products in China. To the best of our…
Descriptors: Foreign Countries, Automation, Scoring, Writing (Composition)
Davis, Larry; Papageorgiou, Spiros – Assessment in Education: Principles, Policy & Practice, 2021
Human raters and machine scoring systems potentially have complementary strengths in evaluating language ability; specifically, it has been suggested that automated systems might be used to make consistent measurements of specific linguistic phenomena, whilst humans evaluate more global aspects of performance. We report on an empirical study that…
Descriptors: Scoring, English for Academic Purposes, Oral English, Speech Tests
Gong, Kaixuan – Asian-Pacific Journal of Second and Foreign Language Education, 2023
The extensive use of automated speech scoring in large-scale speaking assessment can be revolutionary not only to test design and rating, but also to the learning and instruction of speaking, based on how students and teachers perceive and react to this technology. However, its washback remains underexplored. This mixed-method study aimed to…
Descriptors: Second Language Learning, Language Tests, English (Second Language), Automation
Jones, Daniel Marc; Cheng, Liying; Tweedie, M. Gregory – Canadian Journal of Learning and Technology, 2022
This article reviews recent literature (2011-present) on the automated scoring (AS) of writing and speaking. Its purpose is to first survey the current research on automated scoring of language, then highlight how automated scoring impacts the present and future of assessment, teaching, and learning. The article begins by outlining the general…
Descriptors: Automation, Computer Assisted Testing, Scoring, Writing (Composition)
Daniels, Paul – TESL-EJ, 2022
This paper compares the speaking scores generated by two online systems that are designed to automatically grade student speech and provide personalized speaking feedback in an EFL context. The first system, "Speech Assessment for Moodle" ("SAM"), is an open-source solution developed by the author that makes use of Google's…
Descriptors: Speech Communication, Auditory Perception, Computer Uses in Education, Computer Assisted Testing
Rupp, André A.; Casabianca, Jodi M.; Krüger, Maleika; Keller, Stefan; Köller, Olaf – ETS Research Report Series, 2019
In this research report, we describe the design and empirical findings for a large-scale study of essay writing ability with approximately 2,500 high school students in Germany and Switzerland on the basis of 2 tasks with 2 associated prompts, each from a standardized writing assessment whose scoring involved both human and automated components.…
Descriptors: Automation, Foreign Countries, English (Second Language), Language Tests
Loukina, Anastassia; Buzick, Heather – ETS Research Report Series, 2017
This study is an evaluation of the performance of automated speech scoring for speakers with documented or suspected speech impairments. Given that the use of automated scoring of open-ended spoken responses is relatively nascent and there is little research to date that includes test takers with disabilities, this small exploratory study focuses…
Descriptors: Automation, Scoring, Language Tests, Speech Tests
Guo, Liang; Crossley, Scott A.; McNamara, Danielle S. – Grantee Submission, 2013
This study explores whether linguistic features can predict second language writing proficiency in the Test of English as a Foreign Language (TOEFL iBT) integrated and independent writing tasks and, if so, whether there are differences and similarities in the two sets of predictive linguistic features. Linguistic features related to lexical…
Descriptors: English (Second Language), Linguistics, Second Language Learning, Writing Skills
Evanini, Keelan; Heilman, Michael; Wang, Xinhao; Blanchard, Daniel – ETS Research Report Series, 2015
This report describes the initial automated scoring results that were obtained using the constructed responses from the Writing and Speaking sections of the pilot forms of the TOEFL Junior® Comprehensive test administered in late 2011. For all of the items except one (the edit item in the Writing section), existing automated scoring…
Descriptors: Computer Assisted Testing, Automation, Language Tests, Second Language Learning
Attali, Yigal – Educational Testing Service, 2011
The e-rater® automated essay scoring system is used operationally in the scoring of TOEFL iBT® independent essays. Previous research has found support for a 3-factor structure of the e-rater features. This 3-factor structure has an attractive hierarchical linguistic interpretation with a word choice factor, a grammatical convention within a…
Descriptors: Essay Tests, Language Tests, Test Scoring Machines, Automation
Attali, Yigal – Educational Testing Service, 2011
This paper proposes an alternative content measure for essay scoring, based on the "difference" in the relative frequency of a word in high-scored versus low-scored essays. The "differential word use" (DWU) measure is the average of these differences across all words in the essay. A positive value indicates the essay is using…
Descriptors: Scoring, Essay Tests, Word Frequency, Content Analysis
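The DWU measure in the entry above lends itself to a short illustration. Below is a minimal sketch of that computation, assuming simple whitespace tokenization and relative word frequencies precomputed from high- and low-scored training essays; the function names (rel_freq, dwu_score) and the toy data are hypothetical, not from Attali's paper.

```python
from collections import Counter

def rel_freq(essays):
    """Relative frequency of each word across a set of essays
    (count of the word / total tokens in the set)."""
    counts = Counter(w for essay in essays for w in essay.lower().split())
    total = sum(counts.values())
    return {w: c / total for w, c in counts.items()}

def dwu_score(essay, high_freq, low_freq):
    """Differential word use: for each token in the essay, take the
    difference between its relative frequency in high-scored vs.
    low-scored training essays, then average over all tokens.
    A positive value suggests vocabulary more typical of high-scored essays."""
    tokens = essay.lower().split()
    if not tokens:
        return 0.0
    diffs = [high_freq.get(w, 0.0) - low_freq.get(w, 0.0) for w in tokens]
    return sum(diffs) / len(diffs)

# Toy usage with made-up training essays:
high = ["the argument is cogent and well supported", "a cogent thesis"]
low = ["it is good", "good stuff"]
hf, lf = rel_freq(high), rel_freq(low)
print(dwu_score("the thesis is cogent", hf, lf))  # positive -> closer to high-scored usage
```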
Weigle, Sara Cushing – ETS Research Report Series, 2011
Automated scoring has the potential to dramatically reduce the time and costs associated with the assessment of complex skills such as writing, but its use must be validated against a variety of criteria for it to be accepted by test users and stakeholders. This study addresses two validity-related issues regarding the use of e-rater® with the…
Descriptors: Scoring, English (Second Language), Second Language Instruction, Automation
Quinlan, Thomas; Higgins, Derrick; Wolff, Susanne – Educational Testing Service, 2009
This report evaluates the construct coverage of the e-rater® scoring engine. The matter of construct coverage depends on whether one defines writing skill in terms of process or product. Originally, the e-rater engine consisted of a large set of components with a proven ability to predict human holistic scores. By organizing these capabilities…
Descriptors: Guides, Writing Skills, Factor Analysis, Writing Tests
Ghosh, Siddhartha; Fatima, Sameen S. – Journal of Educational Technology, 2007
Automated essay grading or scoring systems are no longer a myth but a reality. Today, human-written (not handwritten) essays are scored not only by examiners and teachers but also by machines. The TOEFL exam is one of the best examples of this application. The students' essays are evaluated both by humans and by web-based automated…
Descriptors: Foreign Countries, Essays, Grading, Automation