Lee, Elizabeth – English Teaching, 2023
In many high-stakes testing situations, test-takers are not allowed to draw on external writing resources while writing, a practice observed more frequently in classroom settings. This may pose problems with the representativeness of test tasks and score interpretations. This study investigates the domain definition of one particular test known as…
Descriptors: Writing Instruction, English (Second Language), Second Language Learning, Second Language Instruction

Tywoniw, Rurik; Crossley, Scott – Language Education & Assessment, 2019
Cohesion features were calculated for a corpus of 960 essays by 480 test-takers from the Test of English as a Foreign Language (TOEFL) in order to examine differences in the use of cohesion devices between integrated (source-based) writing and independent writing samples. Cohesion indices were measured using an automated textual analysis tool, the…
Descriptors: English (Second Language), Language Tests, Second Language Learning, Connected Discourse

Ramineni, Chaitanya; Trapani, Catherine S.; Williamson, David M.; Davey, Tim; Bridgeman, Brent – ETS Research Report Series, 2012
Scoring models for the e-rater® system were built and evaluated for the TOEFL® exam's independent and integrated writing prompts. Prompt-specific and generic scoring models were built, and evaluation statistics, such as weighted kappas, Pearson correlations, standardized differences in mean scores, and correlations with…
Descriptors: Scoring, Prompting, Evaluators, Computer Software

Lee, Yong-Won; Gentile, Claudia; Kantor, Robert – Applied Linguistics, 2010
The main purpose of the study was to investigate the distinctness and reliability of analytic (or multi-trait) rating dimensions and their relationships to holistic scores and e-rater® essay feature variables in the context of the TOEFL® computer-based test (TOEFL CBT) writing assessment. Data analyzed in the study were holistic…
Descriptors: Writing Evaluation, Writing Tests, Scoring, Essays

Lee, Yong-Won; Gentile, Claudia; Kantor, Robert – ETS Research Report Series, 2008
The main purpose of the study was to investigate the distinctness and reliability of analytic (or multitrait) rating dimensions and their relationships to holistic scores and e-rater® essay feature variables in the context of the TOEFL® computer-based test (CBT) writing assessment. Data analyzed in the study were analytic and holistic…
Descriptors: English (Second Language), Language Tests, Second Language Learning, Scoring