Publication Date

| Date range | Count |
| --- | --- |
| In 2026 | 0 |
| Since 2025 | 0 |
| Since 2022 (last 5 years) | 0 |
| Since 2017 (last 10 years) | 0 |
| Since 2007 (last 20 years) | 5 |
Descriptor

| Descriptor | Count |
| --- | --- |
| Computer Assisted Testing | 6 |
| English (Second Language) | 6 |
| Language Tests | 6 |
| Prompting | 6 |
| Second Language Learning | 6 |
| Scoring | 5 |
| Evaluators | 4 |
| Writing Tests | 4 |
| Computer Software | 3 |
| Correlation | 3 |
| Cues | 3 |
Source

| Source | Count |
| --- | --- |
| ETS Research Report Series | 6 |
Author

| Author | Count |
| --- | --- |
| Lee, Yong-Won | 2 |
| Attali, Yigal | 1 |
| Bejar, Isaac I. | 1 |
| Blanchard, Daniel | 1 |
| Breland, Hunter | 1 |
| Bridgeman, Brent | 1 |
| Cahill, Aoife | 1 |
| Chodorow, Martin | 1 |
| Davey, Tim | 1 |
| Gentile, Claudia | 1 |
| Hemat, Ramin | 1 |
Publication Type

| Publication Type | Count |
| --- | --- |
| Journal Articles | 6 |
| Reports - Research | 6 |
| Tests/Questionnaires | 1 |
Assessments and Surveys

| Assessment | Count |
| --- | --- |
| Test of English as a Foreign… | 6 |
Blanchard, Daniel; Tetreault, Joel; Higgins, Derrick; Cahill, Aoife; Chodorow, Martin – ETS Research Report Series, 2013
This report presents the development of a new corpus of non-native English writing and describes the corpus in detail. The corpus will be useful for native language identification, as well as for grammatical error detection and correction and automatic essay scoring.
Descriptors: Language Tests, Second Language Learning, English (Second Language), Writing Tests
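As a rough illustration of how a corpus of non-native English essays might be used for native language identification, the sketch below trains a simple bag-of-words classifier over (essay text, native-language label) pairs. The file path and column names are hypothetical placeholders, not the corpus format described in the report.

```python
# Minimal native language identification baseline over a corpus of
# non-native English essays. Assumes a hypothetical CSV with columns
# "essay" (text) and "native_language" (label); adjust to the real corpus layout.
import pandas as pd
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.linear_model import LogisticRegression
from sklearn.metrics import classification_report
from sklearn.model_selection import train_test_split
from sklearn.pipeline import make_pipeline

df = pd.read_csv("essays.csv")  # hypothetical path
X_train, X_test, y_train, y_test = train_test_split(
    df["essay"], df["native_language"],
    test_size=0.2, random_state=0, stratify=df["native_language"])

# Word n-gram TF-IDF features with a linear classifier are a common NLI baseline.
model = make_pipeline(
    TfidfVectorizer(ngram_range=(1, 2), min_df=2),
    LogisticRegression(max_iter=1000),
)
model.fit(X_train, y_train)
print(classification_report(y_test, model.predict(X_test)))
```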
Ramineni, Chaitanya; Trapani, Catherine S.; Williamson, David M.; Davey, Tim; Bridgeman, Brent – ETS Research Report Series, 2012
Scoring models for the "e-rater"® system were built and evaluated for the "TOEFL"® exam's independent and integrated writing prompts. Prompt-specific and generic scoring models were built, and evaluation statistics, such as weighted kappas, Pearson correlations, standardized differences in mean scores, and correlations with…
Descriptors: Scoring, Prompting, Evaluators, Computer Software
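The abstract names standard agreement statistics for comparing automated and human scores. Below is a minimal sketch of how weighted kappa, Pearson correlation, and standardized difference in mean scores could be computed for paired human/e-rater ratings; the score arrays are illustrative, not data from the report, and quadratic weighting is shown as one common choice for the kappa.

```python
# Agreement statistics commonly reported when evaluating an automated
# scoring model against human ratings (illustrative scores, not report data).
import numpy as np
from scipy.stats import pearsonr
from sklearn.metrics import cohen_kappa_score

human = np.array([3, 4, 4, 5, 2, 3, 4, 5, 3, 4])
erater = np.array([3, 4, 5, 5, 2, 3, 3, 4, 3, 4])

# Quadratically weighted kappa penalizes large disagreements more heavily.
qwk = cohen_kappa_score(human, erater, weights="quadratic")

# Pearson correlation between the two score series.
r, _ = pearsonr(human, erater)

# Standardized difference in mean scores, using a pooled standard deviation.
pooled_sd = np.sqrt((human.std(ddof=1) ** 2 + erater.std(ddof=1) ** 2) / 2)
std_diff = (erater.mean() - human.mean()) / pooled_sd

print(f"weighted kappa={qwk:.3f}, pearson r={r:.3f}, std. mean diff={std_diff:.3f}")
```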
Lee, Yong-Won; Gentile, Claudia; Kantor, Robert – ETS Research Report Series, 2008
The main purpose of the study was to investigate the distinctness and reliability of analytic (or multitrait) rating dimensions and their relationships to holistic scores and "e-rater"® essay feature variables in the context of the TOEFL® computer-based test (CBT) writing assessment. Data analyzed in the study were analytic and holistic…
Descriptors: English (Second Language), Language Tests, Second Language Learning, Scoring
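As a hedged sketch of the kind of analysis the abstract describes (how distinct and internally consistent analytic dimensions are, and how they relate to a holistic score), the snippet below computes an inter-dimension correlation matrix, each dimension's correlation with the holistic score, and Cronbach's alpha. The dimension names and ratings are fabricated for illustration, not the study's rubric or data.

```python
# Illustrative analysis of analytic (multitrait) ratings vs. a holistic score.
# The dimension names and ratings below are fabricated placeholders.
import numpy as np
import pandas as pd

rng = np.random.default_rng(0)
n = 200
base = rng.normal(0, 1, n)  # shared quality signal across dimensions
ratings = pd.DataFrame({
    "development": base + rng.normal(0, 0.5, n),
    "organization": base + rng.normal(0, 0.5, n),
    "language_use": base + rng.normal(0, 0.5, n),
})
holistic = pd.Series(base + rng.normal(0, 0.3, n))

# How distinct are the analytic dimensions from one another?
print(ratings.corr().round(2))

# How strongly does each dimension track the holistic score?
print(ratings.corrwith(holistic).round(2))

# Cronbach's alpha as a simple internal-consistency estimate.
k = ratings.shape[1]
item_var = ratings.var(axis=0, ddof=1).sum()
total_var = ratings.sum(axis=1).var(ddof=1)
alpha = (k / (k - 1)) * (1 - item_var / total_var)
print(f"Cronbach's alpha = {alpha:.2f}")
```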
Attali, Yigal – ETS Research Report Series, 2007
This study examined the construct validity of the "e-rater"® automated essay scoring engine as an alternative to human scoring in the context of TOEFL® essay writing. Analyses were based on a sample of students who repeated the TOEFL within a short time period. Two "e-rater" scores were investigated in this study, the first…
Descriptors: Construct Validity, Computer Assisted Testing, Scoring, English (Second Language)
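One way to use repeat test takers in a validity argument, sketched below on fabricated data, is to compare how strongly scores correlate across the two administrations for the automated engine versus human raters, and across methods within an administration. This is only an illustration of the general idea, not the report's actual analysis or variables.

```python
# Test-retest style comparison for repeat examinees (fabricated data).
import numpy as np
from scipy.stats import pearsonr

rng = np.random.default_rng(1)
n = 150
ability = rng.normal(0, 1, n)           # latent writing ability of each repeater
human_1 = ability + rng.normal(0, 0.7, n)
human_2 = ability + rng.normal(0, 0.7, n)
erater_1 = ability + rng.normal(0, 0.5, n)
erater_2 = ability + rng.normal(0, 0.5, n)

# Cross-administration correlations: a more reliable score should
# correlate more strongly with itself across the two test dates.
print("human 1 vs human 2:     r = %.2f" % pearsonr(human_1, human_2)[0])
print("e-rater 1 vs e-rater 2: r = %.2f" % pearsonr(erater_1, erater_2)[0])

# Cross-method correlation on the same administration.
print("human 1 vs e-rater 1:   r = %.2f" % pearsonr(human_1, erater_1)[0])
```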
Zechner, Klaus; Bejar, Isaac I.; Hemat, Ramin – ETS Research Report Series, 2007
The increasing availability and performance of computer-based testing have prompted more research on the automatic assessment of language and speaking proficiency. In this investigation, we evaluated the feasibility of using an off-the-shelf speech-recognition system for scoring speaking prompts from the LanguEdge field test of 2002. We first…
Descriptors: Role, Computer Assisted Testing, Language Proficiency, Oral Language
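As a rough illustration of how off-the-shelf speech-recognition output could feed a speaking-proficiency score, the sketch below derives simple fluency features (speaking rate, long pauses) from word-level ASR timestamps. The data structure is an assumed, hypothetical format for illustration, not the output of the system evaluated in the report.

```python
# Toy fluency features from word-level ASR output (hypothetical format).
# Each entry: (word, start_time_sec, end_time_sec).
asr_words = [
    ("the", 0.10, 0.25), ("library", 0.30, 0.85), ("is", 1.60, 1.70),
    ("open", 1.75, 2.10), ("every", 2.15, 2.50), ("day", 2.55, 2.90),
]

total_time = asr_words[-1][2] - asr_words[0][1]
speaking_rate = len(asr_words) / total_time  # words per second of response

# Silent pauses between consecutive words, using a 0.3 s threshold.
pauses = [
    nxt_start - prev_end
    for (_, _, prev_end), (_, nxt_start, _) in zip(asr_words, asr_words[1:])
    if nxt_start - prev_end > 0.3
]

print(f"speaking rate: {speaking_rate:.2f} words/s")
if pauses:
    print(f"long pauses: {len(pauses)}, mean length: {sum(pauses) / len(pauses):.2f} s")
else:
    print("no long pauses")
```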
Lee, Yong-Won; Breland, Hunter; Muraki, Eiji – ETS Research Report Series, 2004
This study has investigated the comparability of computer-based testing (CBT) writing prompts in the Test of English as a Foreign Language™ (TOEFL®) for examinees of different native language backgrounds. A total of 81 writing prompts introduced from July 1998 through August 2000 were examined using a three-step logistic regression procedure for…
Descriptors: English (Second Language), Language Tests, Second Language Learning, Computer Assisted Testing
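The abstract mentions a three-step logistic regression procedure; in differential item functioning (DIF) analysis this typically means comparing nested models that add a group term and a group-by-ability interaction to a matching-score baseline. The sketch below is a generic version of that procedure on fabricated data and should not be read as the report's exact specification.

```python
# Generic three-step logistic-regression DIF check on fabricated data.
# Step 1: item response ~ matching (total) score
# Step 2: + group membership            (uniform DIF)
# Step 3: + group x matching score      (non-uniform DIF)
import numpy as np
import pandas as pd
import statsmodels.formula.api as smf

rng = np.random.default_rng(42)
n = 1000
df = pd.DataFrame({
    "total": rng.normal(0, 1, n),        # matching criterion (e.g., total score)
    "group": rng.integers(0, 2, n),      # e.g., two native-language groups
})
logit = 1.2 * df["total"] + 0.4 * df["group"]      # built-in uniform DIF
df["item"] = rng.binomial(1, 1 / (1 + np.exp(-logit)))

m1 = smf.logit("item ~ total", df).fit(disp=False)
m2 = smf.logit("item ~ total + group", df).fit(disp=False)
m3 = smf.logit("item ~ total + group + total:group", df).fit(disp=False)

# Likelihood-ratio comparisons between the nested models flag DIF.
print("uniform DIF LR chi2:    ", 2 * (m2.llf - m1.llf))
print("non-uniform DIF LR chi2:", 2 * (m3.llf - m2.llf))
```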

