Showing 1 to 15 of 42 results
Peer reviewed
Huawei, Shi; Aryadoust, Vahid – Education and Information Technologies, 2023
Automated writing evaluation (AWE) systems are developed based on interdisciplinary research and technological advances such as natural language processing, computer sciences, and latent semantic analysis. Despite a steady increase in research publications in this area, the results of AWE investigations are often mixed, and their validity may be…
Descriptors: Writing Evaluation, Writing Tests, Computer Assisted Testing, Automation
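The abstract above names latent semantic analysis (LSA) among the techniques underlying AWE systems. As a rough illustration only (the toy corpus, rank, and similarity-based comparison below are assumptions for this sketch, not the study's method), LSA projects a term-document matrix into a low-rank semantic space and compares texts there:

```python
# Minimal sketch of latent semantic analysis (LSA), one technique the
# abstract names as a foundation of AWE systems. Corpus and rank are
# invented for illustration.
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.decomposition import TruncatedSVD
from sklearn.metrics.pairwise import cosine_similarity

reference_essays = [
    "The experiment shows that plants need sunlight to grow.",
    "Sunlight drives photosynthesis, so shaded plants grow slowly.",
]
student_essay = ["Plants grow toward light because they need it for photosynthesis."]

# Build a term-document matrix, then project it into a low-rank semantic space.
vectorizer = TfidfVectorizer(stop_words="english")
tfidf = vectorizer.fit_transform(reference_essays + student_essay)
lsa = TruncatedSVD(n_components=2, random_state=0)  # tiny corpus, tiny rank
semantic = lsa.fit_transform(tfidf)

# Similarity of the student essay to each reference in the semantic space.
sims = cosine_similarity(semantic[-1:], semantic[:-1])
print(sims)  # higher values = closer semantic content
```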
Peer reviewed
Almusharraf, Norah; Alotaibi, Hind – Technology, Knowledge and Learning, 2023
Evaluating written texts is believed to be a time-consuming process that can lack consistency and objectivity. Automated essay scoring (AES) can provide solutions to some of the limitations of human scoring. This research aimed to evaluate the performance of one AES system, Grammarly, in comparison to human raters. Both approaches' performances…
Descriptors: Writing Evaluation, Writing Tests, Essay Tests, Essays
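Studies like this one typically quantify machine-human agreement with quadratic weighted kappa (QWK). The sketch below shows how that statistic is computed; the score vectors are invented, and the paper's actual analyses may use different statistics:

```python
# Quadratic weighted kappa (QWK), the agreement statistic most AES studies
# report when comparing machine and human scores. The score vectors below
# are hypothetical, not data from the study.
from sklearn.metrics import cohen_kappa_score

human_scores   = [3, 4, 2, 5, 4, 3, 1, 4]   # hypothetical human ratings (1-5)
machine_scores = [3, 4, 3, 4, 4, 2, 1, 5]   # hypothetical AES ratings (1-5)

qwk = cohen_kappa_score(human_scores, machine_scores, weights="quadratic")
print(f"QWK = {qwk:.3f}")  # 1.0 = perfect agreement, 0.0 = chance-level
```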
Yi Gui – ProQuest LLC, 2024
This study explores using transfer learning in machine learning for natural language processing (NLP) to create generic automated essay scoring (AES) models, providing instant online scoring for statewide writing assessments in K-12 education. The goal is to develop an instant online scorer that is generalizable to any prompt, addressing the…
Descriptors: Writing Tests, Natural Language Processing, Writing Evaluation, Scoring
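The generic transfer-learning recipe behind such prompt-independent AES models is to fine-tune a pretrained transformer with a single regression output per essay. The sketch below illustrates that recipe only; the checkpoint, toy data, and hyperparameters are assumptions, not the dissertation's setup:

```python
# Sketch of transfer learning for AES: fine-tune a pretrained transformer
# to predict essay scores as a regression target. Checkpoint, toy data,
# and hyperparameters are illustrative assumptions.
import torch
from transformers import AutoTokenizer, AutoModelForSequenceClassification

checkpoint = "distilbert-base-uncased"
tokenizer = AutoTokenizer.from_pretrained(checkpoint)
model = AutoModelForSequenceClassification.from_pretrained(
    checkpoint, num_labels=1, problem_type="regression"  # one score per essay
)

essays = ["The causes of the war were economic...", "My summer vacation was fun..."]
scores = torch.tensor([[4.0], [2.0]])  # hypothetical holistic scores

batch = tokenizer(essays, padding=True, truncation=True, return_tensors="pt")
optimizer = torch.optim.AdamW(model.parameters(), lr=2e-5)

model.train()
for _ in range(3):  # a few gradient steps on the toy batch
    out = model(**batch, labels=scores)  # MSE loss under regression setting
    out.loss.backward()
    optimizer.step()
    optimizer.zero_grad()

model.eval()
with torch.no_grad():
    pred = model(**batch).logits  # predicted scores, regardless of prompt
print(pred)
```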
Peer reviewed
Potter, Andrew; Wilson, Joshua – Educational Technology Research and Development, 2021
Automated Writing Evaluation (AWE) provides automatic writing feedback and scoring to support student writing and revising. The purpose of the present study was to analyze a statewide implementation of an AWE system (n = 114,582) in grades 4-11. The goals of the study were to evaluate: (1) to what extent AWE features were used; (2) if equity and…
Descriptors: Computer Assisted Testing, Writing Evaluation, Feedback (Response), Scoring
Sterett H. Mercer; Joanna E. Cannon – Grantee Submission, 2022
We evaluated the validity of an automated approach to learning progress assessment (aLPA) for English written expression. Participants (n = 105) were students in Grades 2-12 who had parent-identified learning difficulties and received academic tutoring through a community-based organization. Participants completed narrative writing samples in the…
Descriptors: Elementary School Students, Secondary School Students, Learning Problems, Learning Disabilities
Peer reviewed
Correnti, Richard; Matsumura, Lindsay Clare; Wang, Elaine; Litman, Diane; Rahimi, Zahra; Kisa, Zahid – Reading Research Quarterly, 2020
Despite the importance of analytic text-based writing, relatively little is known about how to teach this important skill. A persistent barrier to conducting research that would provide insight on best practices for teaching this form of writing is a lack of outcome measures that assess students' analytic text-based writing development and that…
Descriptors: Writing Evaluation, Writing Tests, Computer Assisted Testing, Scoring
Peer reviewed
Sari, Elif; Han, Turgay – Reading Matrix: An International Online Journal, 2021
Providing effective feedback and ensuring reliable assessment practices are two central issues in ESL/EFL writing instruction contexts. Giving individual feedback is very difficult in crowded classes, as it requires a great amount of time and effort from instructors. Moreover, instructors are likely to employ inconsistent assessment procedures,…
Descriptors: Automation, Writing Evaluation, Artificial Intelligence, Natural Language Processing
Peer reviewed
Yao, Lili; Haberman, Shelby J.; Zhang, Mo – ETS Research Report Series, 2019
Many assessments of writing proficiency that aid in making high-stakes decisions consist of several essay tasks evaluated by a combination of human holistic scores and computer-generated scores for essay features such as the rate of grammatical errors per word. Under typical conditions, a summary writing score is provided by a linear combination…
Descriptors: Prediction, True Scores, Computer Assisted Testing, Scoring
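The linear combination the abstract refers to can be illustrated with a toy composite: a weighted sum of a human holistic score and a machine feature-based score. The weights and example values below are hypothetical, not the report's:

```python
# Toy illustration of a summary writing score formed as a linear
# combination of human and machine scores, as the abstract describes.
# Weights are hypothetical, not taken from the report.
def composite_score(human: float, machine: float,
                    w_human: float = 0.5, w_machine: float = 0.5) -> float:
    """Weighted combination of one human and one machine essay score."""
    return w_human * human + w_machine * machine

# e.g., a human holistic score of 4 and a machine score of 3.6
print(composite_score(4.0, 3.6))  # -> 3.8
```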
Peer reviewed
Behizadeh, Nadia; Lynch, Tom Liam – Berkeley Review of Education, 2017
For the last century, the quality of large-scale assessment in the United States has been undermined by narrow educational theory and hindered by limitations in technology. As a result, poor assessment practices have encouraged low-level instructional practices that disparately affect students from the most disadvantaged communities and schools.…
Descriptors: Equal Education, Measurement, Educational Theories, Evaluation Methods
Peer reviewed
Buzick, Heather; Oliveri, Maria Elena; Attali, Yigal; Flor, Michael – Applied Measurement in Education, 2016
Automated essay scoring is a developing technology that can provide efficient scoring of large numbers of written responses. Its use in higher education admissions testing provides an opportunity to collect validity and fairness evidence to support current uses and inform its emergence in other areas such as K-12 large-scale assessment. In this…
Descriptors: Essays, Learning Disabilities, Attention Deficit Hyperactivity Disorder, Scoring
Hixson, Nate; Rhudy, Vaughn – West Virginia Department of Education, 2012
To provide an opportunity for teachers to better understand the automated scoring process used by the state of West Virginia on our annual West Virginia Educational Standards Test 2 (WESTEST 2) Online Writing Assessment, the West Virginia Department of Education (WVDE) Office of Assessment and Accountability and the Office of Research conduct an…
Descriptors: Writing Tests, Computer Assisted Testing, Automation, Scoring
Peer reviewed
Breyer, F. Jay; Attali, Yigal; Williamson, David M.; Ridolfi-McCulla, Laura; Ramineni, Chaitanya; Duchnowski, Matthew; Harris, April – ETS Research Report Series, 2014
In this research, we investigated the feasibility of implementing the "e-rater"® scoring engine as a check score in place of all-human scoring for the "Graduate Record Examinations"® ("GRE"®) revised General Test (rGRE) Analytical Writing measure. This report provides the scientific basis for the use of e-rater as a…
Descriptors: Computer Software, Computer Assisted Testing, Scoring, College Entrance Examinations
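A check-score design of the kind investigated here uses the machine score to validate a single human rating rather than replace it. The sketch below shows generic adjudication logic; the threshold and averaging rule are illustrative assumptions, not the operational GRE procedure:

```python
# Generic check-score logic: e-rater-style machine scoring acts as a
# quality check on one human rating instead of a second human. The
# 1.0-point threshold and averaging rule are illustrative assumptions.
def score_with_check(human: float, machine: float,
                     threshold: float = 1.0) -> tuple[float, bool]:
    """Return (reported_score, needs_second_human)."""
    if abs(human - machine) <= threshold:
        return (human + machine) / 2, False  # scores agree: no extra rating
    return human, True  # discrepant: route the essay to a second human rater

print(score_with_check(4.0, 3.5))  # (3.75, False) - within threshold
print(score_with_check(4.0, 1.5))  # (4.0, True)  - flagged for adjudication
```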
Hadi-Tabassum, Samina – Phi Delta Kappan, 2014
Schools are scrambling to prepare students for the writing assessments aligned to the Common Core State Standards. In some states, writing has not been assessed for over a decade. Yet, with the use of computerized grading of the student's writing, many teachers are wondering how to best prepare students for the writing assessments that will…
Descriptors: Computer Assisted Testing, Writing Tests, Standardized Tests, Core Curriculum
Peer reviewed
Attali, Yigal; Sinharay, Sandip – ETS Research Report Series, 2015
The "e-rater"® automated essay scoring system is used operationally in the scoring of "TOEFL iBT"® independent and integrated tasks. In this study we explored the psychometric added value of reporting four trait scores for each of these two tasks, beyond the total e-rater score.The four trait scores are word choice, grammatical…
Descriptors: Writing Tests, Scores, Language Tests, English (Second Language)
Peer reviewed
Evanini, Keelan; Heilman, Michael; Wang, Xinhao; Blanchard, Daniel – ETS Research Report Series, 2015
This report describes the initial automated scoring results that were obtained using the constructed responses from the Writing and Speaking sections of the pilot forms of the "TOEFL Junior"® Comprehensive test administered in late 2011. For all of the items except one (the edit item in the Writing section), existing automated scoring…
Descriptors: Computer Assisted Testing, Automation, Language Tests, Second Language Learning