Publication Date
In 2025: 0
Since 2024: 0
Since 2021 (last 5 years): 1
Since 2016 (last 10 years): 13
Since 2006 (last 20 years): 29
Descriptor
Computer Assisted Testing: 29
Correlation: 29
Writing Evaluation: 29
Essays: 15
Second Language Learning: 14
Writing Tests: 14
English (Second Language): 13
Scores: 13
Scoring: 13
Comparative Analysis: 9
Evaluators: 9
Author
Attali, Yigal: 2
Barkaoui, Khaled: 2
Breyer, F. Jay: 2
Bridgeman, Brent: 2
Kunnan, Antony John: 2
Lee, Yong-Won: 2
Ramineni, Chaitanya: 2
Williamson, David M.: 2
Allen, Nancy: 1
Ardison, Sharon: 1
Beigman Klebanov, Beata: 1
Publication Type
Journal Articles: 25
Reports - Research: 21
Reports - Evaluative: 4
Dissertations/Theses -…: 2
Tests/Questionnaires: 2
Collected Works - Proceedings: 1
Information Analyses: 1
Speeches/Meeting Papers: 1
Education Level
Higher Education: 13
Postsecondary Education: 10
Elementary Education: 5
Secondary Education: 4
Elementary Secondary Education: 3
Grade 8: 3
Junior High Schools: 3
Middle Schools: 3
Grade 6: 2
Grade 11: 1
Grade 4: 1
Assessments and Surveys
Test of English as a Foreign…: 7
Graduate Record Examinations: 2
SAT (College Admission Test): 2
ACT Assessment: 1
Program for International…: 1
Jiyeo Yun – English Teaching, 2023
Studies of automatic scoring systems in writing assessment have evaluated the relationship between human and machine scores to establish the reliability of automated essay scoring systems. This study investigated the magnitudes of inter-rater agreement and discrepancy indices, particularly between human and machine scoring, in writing assessment.…
Descriptors: Meta Analysis, Interrater Reliability, Essays, Scoring
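As a rough illustration of the agreement and discrepancy indices such studies typically report, the sketch below computes exact and adjacent agreement, Pearson correlation, quadratic weighted kappa, and the mean score difference for a pair of rating vectors. The scores are invented, and this particular set of indices is a general assumption rather than the specific set analyzed in the meta-analysis above.

```python
# Illustrative agreement/discrepancy indices between human and machine scores.
# The rating vectors are made-up examples on a 1-5 scale.
import numpy as np
from sklearn.metrics import cohen_kappa_score

human = np.array([3, 4, 2, 5, 3, 4, 4, 2, 3, 5])     # hypothetical human ratings
machine = np.array([3, 4, 3, 5, 3, 3, 4, 2, 4, 5])   # hypothetical engine scores

exact = np.mean(human == machine)                     # identical scores
adjacent = np.mean(np.abs(human - machine) <= 1)      # within one score point
r = np.corrcoef(human, machine)[0, 1]                 # Pearson correlation
qwk = cohen_kappa_score(human, machine, weights="quadratic")  # quadratic weighted kappa
mean_diff = np.mean(machine - human)                  # signed discrepancy (machine leniency)

print(f"exact={exact:.2f} adjacent={adjacent:.2f} r={r:.2f} "
      f"QWK={qwk:.2f} mean_diff={mean_diff:+.2f}")
```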
LaVoie, Noelle; Parker, James; Legree, Peter J.; Ardison, Sharon; Kilcullen, Robert N. – Educational and Psychological Measurement, 2020
Automated scoring based on Latent Semantic Analysis (LSA) has been successfully used to score essays and constrained short answer responses. Scoring tests that capture open-ended, short answer responses poses some challenges for machine learning approaches. We used LSA techniques to score short answer responses to the Consequences Test, a measure…
Descriptors: Semantics, Evaluators, Essays, Scoring
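For readers unfamiliar with LSA-based scoring, a minimal sketch of the general idea follows: project responses into a reduced semantic space with truncated SVD and score a new response by its similarity to reference responses with known human scores. The prompt, responses, reference scores, and similarity-weighted scoring rule are all illustrative assumptions, not the procedure used for the Consequences Test.

```python
# Illustrative LSA-style scoring sketch (not the authors' implementation).
import numpy as np
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.decomposition import TruncatedSVD
from sklearn.metrics.pairwise import cosine_similarity

# Hypothetical pre-scored reference responses to an open-ended prompt.
references = [
    "traffic would stop and goods could not be delivered",
    "deliveries would halt and commerce would slow down",
    "people would walk more and enjoy quieter streets",
    "nothing much would change in daily life",
]
ref_scores = np.array([4, 4, 3, 1])

new_response = "shipping and deliveries would grind to a halt"

# Term-document matrix reduced to a small latent semantic space.
tfidf = TfidfVectorizer().fit_transform(references + [new_response])
latent = TruncatedSVD(n_components=2, random_state=0).fit_transform(tfidf)

# Score the new response as a similarity-weighted average of reference scores.
sims = cosine_similarity(latent[-1:], latent[:-1]).ravel()
weights = np.clip(sims, 0, None)
predicted = np.average(ref_scores, weights=weights) if weights.sum() > 0 else ref_scores.mean()
print(f"similarities={np.round(sims, 2)}, predicted score={predicted:.1f}")
```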
Passonneau, Rebecca J.; Poddar, Ananya; Gite, Gaurav; Krivokapic, Alisa; Yang, Qian; Perin, Dolores – International Journal of Artificial Intelligence in Education, 2018
Development of reliable rubrics for educational intervention studies that address reading and writing skills is labor-intensive, and could benefit from an automated approach. We compare a main ideas rubric used in a successful writing intervention study to a highly reliable wise-crowd content assessment method developed to evaluate…
Descriptors: Computer Assisted Testing, Writing Evaluation, Content Analysis, Scoring Rubrics
Wind, Stefanie A.; Wolfe, Edward W.; Engelhard, George, Jr.; Foltz, Peter; Rosenstein, Mark – International Journal of Testing, 2018
Automated essay scoring engines (AESEs) are becoming increasingly popular as an efficient method for performance assessments in writing, including many language assessments that are used worldwide. Before they can be used operationally, AESEs must be "trained" using machine-learning techniques that incorporate human ratings. However, the…
Descriptors: Computer Assisted Testing, Essay Tests, Writing Evaluation, Scoring
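The "training" step referred to in the abstract above can be pictured as fitting a supervised model that maps essay features to human ratings and then applying it to unseen essays. The sketch below uses TF-IDF features and ridge regression as stand-ins; the data, features, and model are illustrative assumptions, not the engines evaluated in the article.

```python
# Minimal sketch of training a scoring model on human ratings (illustrative only).
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.linear_model import Ridge
from sklearn.pipeline import make_pipeline

train_essays = [
    "The argument is well developed with clear supporting evidence.",
    "Good ideas but the organization is sometimes hard to follow.",
    "Short answer with little development or support.",
    "A thorough, well organized essay with varied sentence structure.",
]
human_ratings = [4, 3, 1, 5]   # hypothetical human scores on a 1-6 scale

# Fit a simple feature-extraction + regression pipeline on the rated essays.
engine = make_pipeline(TfidfVectorizer(), Ridge(alpha=1.0))
engine.fit(train_essays, human_ratings)

# Apply the trained model to a new, unrated essay.
new_essay = "Clear evidence supports a well developed argument."
print(round(float(engine.predict([new_essay])[0]), 1))
```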
Breyer, F. Jay; Rupp, André A.; Bridgeman, Brent – ETS Research Report Series, 2017
In this research report, we present an empirical argument for the use of a contributory scoring approach for the 2-essay writing assessment of the analytical writing section of the "GRE"® test in which human and machine scores are combined for score creation at the task and section levels. The approach was designed to replace a currently…
Descriptors: College Entrance Examinations, Scoring, Essay Tests, Writing Evaluation
Plakans, Lia; Gebril, Atta; Bilki, Zeynep – Language Testing, 2019
The present study investigates integrated writing assessment performances with regard to the linguistic features of complexity, accuracy, and fluency (CAF). Given the increasing presence of integrated tasks in large-scale and classroom assessments, validity evidence is needed for the claim that their scores reflect targeted language abilities.…
Descriptors: Accuracy, Language Tests, Scores, Writing Evaluation
Lu, Xiaofei – Language Testing, 2017
Research investigating corpora of English learners' language raises new questions about how syntactic complexity is defined theoretically and operationally for second language (L2) writing assessment. I show that syntactic complexity is important in construct definitions and L2 writing rating scales as well as in L2 writing research. I describe…
Descriptors: Syntax, Computational Linguistics, Second Language Learning, Writing Research
Chen, Jing; Zhang, Mo; Bejar, Isaac I. – ETS Research Report Series, 2017
Automated essay scoring (AES) generally computes essay scores as a function of macrofeatures derived from a set of microfeatures extracted from the text using natural language processing (NLP). In the "e-rater"® automated scoring engine, developed at "Educational Testing Service" (ETS) for the automated scoring of essays, each…
Descriptors: Computer Assisted Testing, Scoring, Automation, Essay Tests
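The macrofeature/microfeature pattern described in the abstract can be sketched as a two-step pipeline: extract surface microfeatures from the text, aggregate them into a few macrofeatures, and combine those linearly into a score. The feature names, aggregation rules, and weights below are invented for illustration and are not e-rater's actual features or coefficients.

```python
# Illustrative macro/micro feature pipeline; all features and weights are invented.
import re

def microfeatures(essay: str) -> dict:
    """Extract simple surface microfeatures from raw text."""
    words = re.findall(r"[A-Za-z']+", essay)
    sentences = [s for s in re.split(r"[.!?]+", essay) if s.strip()]
    return {
        "n_words": len(words),
        "n_sentences": len(sentences),
        "n_long_words": sum(len(w) >= 7 for w in words),
        "n_unique_words": len({w.lower() for w in words}),
    }

def macrofeatures(micro: dict) -> dict:
    """Aggregate microfeatures into higher-level macrofeatures."""
    n_words = max(micro["n_words"], 1)
    return {
        "development": micro["n_words"] / 300.0,                   # length-based proxy
        "fluency": n_words / max(micro["n_sentences"], 1) / 20.0,  # words per sentence
        "vocabulary": micro["n_unique_words"] / n_words,           # type-token ratio
        "sophistication": micro["n_long_words"] / n_words,         # share of long words
    }

WEIGHTS = {"development": 2.0, "fluency": 1.0, "vocabulary": 1.5, "sophistication": 1.5}

def score(essay: str) -> float:
    """Combine macrofeatures linearly and clamp to a 0-6 style scale."""
    macro = macrofeatures(microfeatures(essay))
    raw = sum(WEIGHTS[k] * v for k, v in macro.items())
    return round(min(max(raw, 0.0), 6.0), 1)

print(score("Automated scoring has changed writing assessment. " * 20))
```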
Burstein, Jill; McCaffrey, Dan; Beigman Klebanov, Beata; Ling, Guangming – Grantee Submission, 2017
No significant body of research examines writing achievement and the specific skills and knowledge in the writing domain for postsecondary (college) students in the U.S., even though many at-risk students lack the prerequisite writing skills required to persist in their education. This paper addresses this gap through a novel…
Descriptors: Computer Software, Writing Evaluation, Writing Achievement, College Students
Liu, Sha; Kunnan, Antony John – CALICO Journal, 2016
This study investigated the application of "WriteToLearn" to Chinese undergraduate English majors' essays in terms of its scoring ability and the accuracy of its error feedback. Participants were 163 second-year English majors from a university located in Sichuan province who wrote 326 essays from two writing prompts. Each paper was…
Descriptors: Foreign Countries, Undergraduate Students, English (Second Language), Second Language Learning
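One common way to quantify the "accuracy of error feedback" mentioned in the abstract is to compare the errors an engine flags against human-annotated errors and report precision, recall, and F1. The annotations in the sketch below are fabricated examples, and the (token index, error type) representation is an assumption, not the evaluation design of this study.

```python
# Hedged sketch: precision/recall/F1 of machine error flags against human annotations.
human_errors = {(2, "agreement"), (7, "article"), (11, "spelling")}    # human-annotated errors
machine_errors = {(2, "agreement"), (5, "article"), (11, "spelling")}  # engine-flagged errors

true_positives = len(human_errors & machine_errors)
precision = true_positives / len(machine_errors) if machine_errors else 0.0
recall = true_positives / len(human_errors) if human_errors else 0.0
f1 = 2 * precision * recall / (precision + recall) if precision + recall else 0.0
print(f"precision={precision:.2f} recall={recall:.2f} F1={f1:.2f}")
```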
Forkner, Carl B. – ProQuest LLC, 2013
The compressed time from matriculation to graduation that is prevalent in executive graduate degree programs means that traditional methods of identifying deficiencies in student writing during the course of study may not provide remediation timely enough to enable student success. This study examined a writing-intensive, 10-month executive graduate degree…
Descriptors: Writing Evaluation, Graduate Students, Computer Assisted Testing, Predictive Validity
Breyer, F. Jay; Attali, Yigal; Williamson, David M.; Ridolfi-McCulla, Laura; Ramineni, Chaitanya; Duchnowski, Matthew; Harris, April – ETS Research Report Series, 2014
In this research, we investigated the feasibility of implementing the "e-rater"® scoring engine as a check score in place of all-human scoring for the "Graduate Record Examinations"® ("GRE"®) revised General Test (rGRE) Analytical Writing measure. This report provides the scientific basis for the use of e-rater as a…
Descriptors: Computer Software, Computer Assisted Testing, Scoring, College Entrance Examinations
Zou, Xiao-Ling; Chen, Yan-Min – Technology, Pedagogy and Education, 2016
The effects of computer and paper test media on the writing scores and cognitive writing processes of EFL test-takers with different levels of computer familiarity have been comprehensively explored from the learners' perspective as well as on the basis of related theory and practice. The results indicate significant differences in test scores among the…
Descriptors: English (Second Language), Second Language Learning, Second Language Instruction, Test Format
Attali, Yigal; Sinharay, Sandip – ETS Research Report Series, 2015
The "e-rater"® automated essay scoring system is used operationally in the scoring of "TOEFL iBT"® independent and integrated tasks. In this study we explored the psychometric added value of reporting four trait scores for each of these two tasks, beyond the total e-rater score.The four trait scores are word choice, grammatical…
Descriptors: Writing Tests, Scores, Language Tests, English (Second Language)
Liu, Ming; Li, Yi; Xu, Weiwei; Liu, Li – IEEE Transactions on Learning Technologies, 2017
Writing an essay is an important skill for students to master, but a difficult one to develop. This is particularly true for English as a Second Language (ESL) students in China. It would be very useful if students could receive timely and effective feedback about their writing. Automatic essay feedback generation is a challenging task,…
Descriptors: Foreign Countries, College Students, Second Language Learning, English (Second Language)