Publication Date
In 2025: 0
Since 2024: 0
Since 2021 (last 5 years): 0
Since 2016 (last 10 years): 0
Since 2006 (last 20 years): 3
Descriptor
Automation: 3
Essays: 3
Scoring: 3
Writing Skills: 3
Correlation: 2
English (Second Language): 2
Language Tests: 2
Scoring Rubrics: 2
Second Language Learning: 2
College Students: 1
Computer Assisted Testing: 1
Author
Crossley, Scott A.: 1
Guo, Liang: 1
Higgins, Derrick: 1
McNamara, Danielle S.: 1
Quinlan, Thomas: 1
Weigle, Sara Cushing: 1
Wolff, Susanne: 1
Publication Type
Journal Articles: 2
Reports - Research: 2
Reports - Evaluative: 1
Tests/Questionnaires: 1
Education Level
Higher Education: 2
Postsecondary Education: 2
Elementary Secondary Education: 1
Assessments and Surveys
Test of English as a Foreign Language: 3
Graduate Record Examinations: 1
Guo, Liang; Crossley, Scott A.; McNamara, Danielle S. – Grantee Submission, 2013
This study explores whether linguistic features can predict second language writing proficiency in the Test of English as a Foreign Language (TOEFL iBT) integrated and independent writing tasks and, if so, whether there are differences and similarities in the two sets of predictive linguistic features. Linguistic features related to lexical…
Descriptors: English (Second Language), Linguistics, Second Language Learning, Writing Skills
Weigle, Sara Cushing – ETS Research Report Series, 2011
Automated scoring has the potential to dramatically reduce the time and costs associated with the assessment of complex skills such as writing, but its use must be validated against a variety of criteria for it to be accepted by test users and stakeholders. This study addresses two validity-related issues regarding the use of e-rater® with the…
Descriptors: Scoring, English (Second Language), Second Language Instruction, Automation
Quinlan, Thomas; Higgins, Derrick; Wolff, Susanne – Educational Testing Service, 2009
This report evaluates the construct coverage of the e-rater® scoring engine. The matter of construct coverage depends on whether one defines writing skill in terms of process or product. Originally, the e-rater engine consisted of a large set of components with a proven ability to predict human holistic scores. By organizing these capabilities…
Descriptors: Guides, Writing Skills, Factor Analysis, Writing Tests