Publication Date
In 2025: 0
Since 2024: 0
Since 2021 (last 5 years): 2
Since 2016 (last 10 years): 9
Since 2006 (last 20 years): 25
Descriptor
Computer Assisted Testing: 26
Correlation: 26
Essays: 26
Scoring: 17
Writing Evaluation: 15
English (Second Language): 13
Writing Tests: 12
Evaluators: 11
Second Language Learning: 11
Foreign Countries: 10
Scores: 10
Author
Attali, Yigal: 3
Breyer, F. Jay: 2
Coniam, David: 2
Davies, Phil: 2
Gentile, Claudia: 2
Kantor, Robert: 2
Kunnan, Antony John: 2
Lee, Yong-Won: 2
Ramineni, Chaitanya: 2
Sinharay, Sandip: 2
Williamson, David M.: 2
Assessments and Surveys
Test of English as a Foreign Language (TOEFL): 6
Graduate Record Examinations: 2
SAT (College Admission Test): 2
ACT Assessment: 1
Program for International Student Assessment: 1
Wang, Wei; Dorans, Neil J. – ETS Research Report Series, 2021
Agreement statistics and measures of prediction accuracy are often used to assess the quality of two measures of a construct. Agreement statistics are appropriate for measures that are supposed to be interchangeable, whereas prediction accuracy statistics are appropriate for situations where one variable is the target and the other variables are…
Descriptors: Classification, Scaling, Prediction, Accuracy
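The distinction this abstract draws can be made concrete with a toy example. The sketch below (Python, with invented scores; nothing here comes from the report itself) contrasts an agreement statistic, which treats the two measures as interchangeable, with a prediction-accuracy statistic, which treats one measure as the target:

```python
# Toy contrast of agreement vs. prediction accuracy (invented scores).
import numpy as np

human = np.array([3, 4, 4, 2, 5, 3, 4])    # hypothetical rater scores
machine = np.array([3, 4, 3, 2, 5, 4, 4])  # hypothetical machine scores

# Agreement statistic: the two measures are treated as interchangeable.
exact_agreement = np.mean(human == machine)

# Prediction-accuracy statistic: `human` is the target being predicted.
rmse = np.sqrt(np.mean((machine - human) ** 2))

print(f"exact agreement = {exact_agreement:.2f}, RMSE = {rmse:.2f}")
```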
Jiyeo Yun – English Teaching, 2023
Studies of automatic scoring systems in writing assessment have evaluated the relationship between human and machine scores to establish the reliability of automated essay scoring systems. This study investigated the magnitudes of indices of inter-rater agreement and discrepancy, especially between human and machine scoring, in writing assessment.…
Descriptors: Meta Analysis, Interrater Reliability, Essays, Scoring
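As a hedged illustration of the kind of index such meta-analyses aggregate, the sketch below computes quadratic weighted kappa, a standard human-machine agreement statistic, on invented scores:

```python
# Quadratic weighted kappa between human and machine essay scores
# (invented data; scikit-learn's cohen_kappa_score supports weighting).
from sklearn.metrics import cohen_kappa_score

human = [2, 3, 3, 4, 5, 1, 4, 3]    # hypothetical human scores
machine = [2, 3, 4, 4, 4, 2, 4, 3]  # hypothetical machine scores

qwk = cohen_kappa_score(human, machine, weights="quadratic")
print(f"quadratic weighted kappa = {qwk:.3f}")
```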
LaVoie, Noelle; Parker, James; Legree, Peter J.; Ardison, Sharon; Kilcullen, Robert N. – Educational and Psychological Measurement, 2020
Automated scoring based on Latent Semantic Analysis (LSA) has been successfully used to score essays and constrained short answer responses. Scoring tests that capture open-ended, short answer responses poses some challenges for machine learning approaches. We used LSA techniques to score short answer responses to the Consequences Test, a measure…
Descriptors: Semantics, Evaluators, Essays, Scoring
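The abstract names Latent Semantic Analysis; the sketch below shows the general LSA recipe (TF-IDF, truncated SVD, cosine similarity) on invented data. It is a minimal illustration of the technique, not the authors' pipeline:

```python
# Minimal LSA scoring recipe: TF-IDF, truncated SVD, cosine similarity.
# Reference responses and the new response are invented examples.
from sklearn.decomposition import TruncatedSVD
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.metrics.pairwise import cosine_similarity

references = [  # hypothetical high-scoring responses to one prompt
    "people could no longer travel quickly between distant cities",
    "trade and commerce between cities would slow down sharply",
    "emergency services would take far longer to reach patients",
]
response = "goods would move slowly and long trips would become difficult"

tfidf = TfidfVectorizer()
matrix = tfidf.fit_transform(references + [response])

lsa = TruncatedSVD(n_components=2, random_state=0)  # latent semantic space
vectors = lsa.fit_transform(matrix)

# Score the new response by its mean similarity to the references.
score = cosine_similarity(vectors[-1:], vectors[:-1]).mean()
print(f"LSA similarity score = {score:.3f}")
```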
Cohen, Yoav; Levi, Effi; Ben-Simon, Anat – Applied Measurement in Education, 2018
In the current study, two pools of 250 essays, all written as a response to the same prompt, were rated by two groups of raters (14 or 15 raters per group), thereby providing an approximation to the essay's true score. An automated essay scoring (AES) system was trained on the datasets and then scored the essays using a cross-validation scheme. By…
Descriptors: Test Validity, Automation, Scoring, Computer Assisted Testing
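A minimal sketch of the general design described here, under assumed data: approximate a "true score" from many raters, train a scoring model, and check how it generalizes via cross-validation. Features, weights, and noise are all simulated, not the study's:

```python
# Simulated version of the design: train a scoring model and evaluate
# out-of-fold predictions against a simulated true score.
import numpy as np
from sklearn.linear_model import Ridge
from sklearn.model_selection import cross_val_predict

rng = np.random.default_rng(0)
X = rng.normal(size=(250, 5))  # hypothetical essay features, 250 essays
true_score = X @ np.array([0.8, 0.5, 0.3, 0.2, 0.1]) + rng.normal(0, 0.3, 250)

# Out-of-fold predictions mimic scoring essays the model never saw.
pred = cross_val_predict(Ridge(), X, true_score, cv=5)
print(f"r(machine, true score) = {np.corrcoef(pred, true_score)[0, 1]:.3f}")
```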
Plakans, Lia; Gebril, Atta; Bilki, Zeynep – Language Testing, 2019
The present study investigates integrated writing assessment performances with regard to the linguistic features of complexity, accuracy, and fluency (CAF). Given the increasing presence of integrated tasks in large-scale and classroom assessments, validity evidence is needed for the claim that their scores reflect targeted language abilities.…
Descriptors: Accuracy, Language Tests, Scores, Writing Evaluation
Burstein, Jill; McCaffrey, Dan; Beigman Klebanov, Beata; Ling, Guangming – Grantee Submission, 2017
No significant body of research examines writing achievement and the specific skills and knowledge in the writing domain for postsecondary (college) students in the U.S., even though many at-risk students lack the prerequisite writing skills required to persist in their education. This paper addresses this gap through a novel…
Descriptors: Computer Software, Writing Evaluation, Writing Achievement, College Students
Liu, Sha; Kunnan, Antony John – CALICO Journal, 2016
This study investigated the application of "WriteToLearn" on Chinese undergraduate English majors' essays in terms of its scoring ability and the accuracy of its error feedback. Participants were 163 second-year English majors from a university located in Sichuan province who wrote 326 essays from two writing prompts. Each paper was…
Descriptors: Foreign Countries, Undergraduate Students, English (Second Language), Second Language Learning
Forkner, Carl B. – ProQuest LLC, 2013
The compressed time from matriculation to graduation that is typical of executive graduate degree programs means that traditional methods of identifying deficiencies in student writing during the course of study may not provide remediation in time to enable student success. This study examined a writing-intensive, 10-month executive graduate degree…
Descriptors: Writing Evaluation, Graduate Students, Computer Assisted Testing, Predictive Validity
Breyer, F. Jay; Attali, Yigal; Williamson, David M.; Ridolfi-McCulla, Laura; Ramineni, Chaitanya; Duchnowski, Matthew; Harris, April – ETS Research Report Series, 2014
In this research, we investigated the feasibility of implementing the "e-rater"® scoring engine as a check score in place of all-human scoring for the "Graduate Record Examinations"® ("GRE"®) revised General Test (rGRE) Analytical Writing measure. This report provides the scientific basis for the use of e-rater as a…
Descriptors: Computer Software, Computer Assisted Testing, Scoring, College Entrance Examinations
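The "check score" role mentioned in the abstract can be sketched as a simple routing rule: when the machine disagrees with the single human rater by more than a threshold, the essay goes to a second human. The threshold and scores below are assumptions for illustration, not ETS's operational values:

```python
# Hypothetical check-score routing rule (threshold is an assumption).
def needs_second_rater(human: float, erater: float, threshold: float = 1.0) -> bool:
    """Flag an essay whose human and e-rater scores diverge too much."""
    return abs(human - erater) > threshold

print(needs_second_rater(4.0, 3.5))  # False: check score confirms the human
print(needs_second_rater(4.0, 2.0))  # True: route to a second human rater
```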
Attali, Yigal; Sinharay, Sandip – ETS Research Report Series, 2015
The "e-rater"® automated essay scoring system is used operationally in the scoring of "TOEFL iBT"® independent and integrated tasks. In this study we explored the psychometric added value of reporting four trait scores for each of these two tasks, beyond the total e-rater score.The four trait scores are word choice, grammatical…
Descriptors: Writing Tests, Scores, Language Tests, English (Second Language)
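One common way to probe the "added value" of trait scores is incremental validity: compare the variance in an external criterion explained by the total score alone with the variance explained once trait scores are added. The sketch below uses simulated data and illustrates that general method, not the study's actual analysis:

```python
# Incremental-validity sketch on simulated data: R-squared for an
# external criterion from the total score alone vs. total plus traits.
import numpy as np
from sklearn.linear_model import LinearRegression

rng = np.random.default_rng(1)
n = 200
total = rng.normal(size=(n, 1))    # total e-rater score (simulated)
traits = rng.normal(size=(n, 4))   # four trait scores (simulated)
criterion = total[:, 0] + 0.3 * traits[:, 0] + rng.normal(0, 0.5, n)

r2_total = LinearRegression().fit(total, criterion).score(total, criterion)
full = np.column_stack([total, traits])
r2_full = LinearRegression().fit(full, criterion).score(full, criterion)
print(f"R^2 total only = {r2_total:.3f}, with trait scores = {r2_full:.3f}")
```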
Liu, Ming; Li, Yi; Xu, Weiwei; Liu, Li – IEEE Transactions on Learning Technologies, 2017
Writing an essay is a very important skill for students to master, but a difficult task for many. This is particularly true for English as a Second Language (ESL) students in China. It would be very useful if students could receive timely and effective feedback on their writing. Automatic essay feedback generation is a challenging task,…
Descriptors: Foreign Countries, College Students, Second Language Learning, English (Second Language)
Hoang, Giang Thi Linh; Kunnan, Antony John – Language Assessment Quarterly, 2016
Computer technology made its way into writing instruction and assessment with spelling and grammar checkers decades ago, but more recently it has done so with automated essay evaluation (AEE) and diagnostic feedback. And although many programs and tools have been developed in the last decade, not enough research has been conducted to support or…
Descriptors: Case Studies, Essays, Writing Evaluation, English (Second Language)
Attali, Yigal; Sinharay, Sandip – ETS Research Report Series, 2015
The "e-rater"® automated essay scoring system is used operationally in the scoring of the argument and issue tasks that form the Analytical Writing measure of the "GRE"® General Test. For each of these tasks, this study explored the value added of reporting 4 trait scores for each of these 2 tasks over the total e-rater score.…
Descriptors: Scores, Computer Assisted Testing, Computer Software, Grammar
Ramineni, Chaitanya; Trapani, Catherine S.; Williamson, David M.; Davey, Tim; Bridgeman, Brent – ETS Research Report Series, 2012
Scoring models for the "e-rater"® system were built and evaluated for the "TOEFL"® exam's independent and integrated writing prompts. Prompt-specific and generic scoring models were built, and evaluation statistics, such as weighted kappas, Pearson correlations, standardized differences in mean scores, and correlations with…
Descriptors: Scoring, Prompting, Evaluators, Computer Software
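Two of the evaluation statistics named here, Pearson correlation and the standardized difference in mean scores, are sketched below on invented data (weighted kappa is illustrated in an earlier sketch):

```python
# Pearson correlation and standardized mean difference between human
# and machine scores (invented data).
import numpy as np
from scipy.stats import pearsonr

human = np.array([3.0, 4.0, 2.0, 5.0, 3.0, 4.0, 4.0, 2.0])
machine = np.array([3.2, 4.1, 2.8, 4.4, 3.1, 4.0, 4.6, 2.2])

r, _ = pearsonr(human, machine)
pooled_sd = np.sqrt((human.var(ddof=1) + machine.var(ddof=1)) / 2)
smd = (machine.mean() - human.mean()) / pooled_sd  # standardized difference

print(f"Pearson r = {r:.3f}, standardized mean difference = {smd:.3f}")
```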
Haberman, Shelby J. – Educational Testing Service, 2011
Alternative approaches are discussed for use of e-rater® to score the TOEFL iBT® Writing test. These approaches involve alternative criteria. In the first approach, the predicted variable is the expected rater score of the examinee's two essays. In the second approach, the predicted variable is the expected rater score of two essay responses by the…
Descriptors: Writing Tests, Scoring, Essays, Language Tests