Publication Date | Count
In 2025 | 5
Since 2024 | 14
Since 2021 (last 5 years) | 70
Since 2016 (last 10 years) | 138
Since 2006 (last 20 years) | 228
Descriptor | Count
Scoring | 450
Writing Evaluation | 450
Essays | 127
Writing Skills | 118
Student Evaluation | 87
Essay Tests | 86
Evaluation Methods | 82
Writing (Composition) | 80
Interrater Reliability | 79
Foreign Countries | 78
Computer Assisted Testing | 75
Author | Count
McNamara, Danielle S. | 9
Allen, Laura K. | 7
Crossley, Scott A. | 7
Gearhart, Maryl | 7
White, Edward M. | 7
Attali, Yigal | 6
Wolfe, Edward W. | 6
Litman, Diane | 5
Mercer, Sterett H. | 5
Wilson, Joshua | 5
Baker, Eva L. | 4
Location | Count
Canada | 12
California | 11
Turkey | 9
China | 8
Florida | 5
Iran | 5
United States | 5
Georgia | 4
New Mexico | 4
New Zealand | 4
Vermont | 4
Laws, Policies, & Programs | Count
Kentucky Education Reform Act… | 2
Scott A. Crossley; Minkyung Kim; Quian Wan; Laura K. Allen; Rurik Tywoniw; Danielle S. McNamara – Grantee Submission, 2025
This study examines the potential to use non-expert, crowd-sourced raters to score essays by comparing expert raters' and crowd-sourced raters' assessments of writing quality. Expert raters and crowd-sourced raters scored 400 essays using a standardised holistic rubric and comparative judgement (pairwise ratings) scoring techniques, respectively.…
Descriptors: Writing Evaluation, Essays, Novices, Knowledge Level
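Comparative judgement is typically scaled by fitting a Bradley-Terry-type model to the pairwise wins. The abstract does not name the scaling method, so the sketch below is a generic minimal Bradley-Terry fit (standard MM updates) over invented pairwise outcomes, not the authors' pipeline.

```python
import numpy as np

def bradley_terry(n_essays, pairs, n_iter=500, tol=1e-9):
    """Fit Bradley-Terry strengths from pairwise comparisons via MM updates.

    pairs: iterable of (winner, loser) essay indices.
    Returns log-strengths, usable as a quality scale for the essays.
    """
    wins = np.zeros((n_essays, n_essays))
    for w, l in pairs:
        wins[w, l] += 1.0
    p = np.ones(n_essays)
    for _ in range(n_iter):
        new = p.copy()
        for i in range(n_essays):
            # Standard MM update: total wins over comparison-weighted pair counts.
            denom = sum((wins[i, j] + wins[j, i]) / (p[i] + p[j])
                        for j in range(n_essays) if j != i)
            if denom > 0:
                new[i] = wins[i].sum() / denom
        new /= new.sum()  # strengths are identifiable only up to scale
        if np.abs(new - p).max() < tol:
            p = new
            break
        p = new
    return np.log(p)

# Invented toy judgements; each essay wins at least once so the MLE is finite.
print(bradley_terry(3, [(0, 1), (0, 1), (1, 2), (2, 0)]))
```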
Shermis, Mark D. – Journal of Educational Measurement, 2022
One of the challenges of discussing validity arguments for machine scoring of essays centers on the absence of a commonly held definition and theory of good writing. At best, the algorithms attempt to measure select attributes of writing and calibrate them against human ratings with the goal of accurate prediction of scores for new essays.…
Descriptors: Scoring, Essays, Validity, Writing Evaluation
Dadi Ramesh; Suresh Kumar Sanampudi – European Journal of Education, 2024
Automatic essay scoring (AES) is an essential educational application of natural language processing. Automating the process alleviates the grading burden while increasing the reliability and consistency of assessment. With advances in text embedding libraries and neural network models, AES systems have achieved good accuracy.…
Descriptors: Scoring, Essays, Writing Evaluation, Memory
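The surveyed systems use neural embeddings; as a far simpler stand-in for the same pipeline (text features regressed onto human scores), here is a TF-IDF plus ridge-regression sketch over placeholder data, not any system from the review.

```python
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.linear_model import Ridge
from sklearn.metrics import mean_absolute_error
from sklearn.model_selection import train_test_split

# Placeholder corpus; real AES work trains on large scored essay sets.
essays = ["first toy essay text", "second toy essay text",
          "third toy essay text", "fourth toy essay text"] * 25
scores = [3, 4, 2, 5] * 25  # invented holistic scores on a 1-6 scale

X_tr, X_te, y_tr, y_te = train_test_split(essays, scores, random_state=0)

vec = TfidfVectorizer(ngram_range=(1, 2))
model = Ridge(alpha=1.0).fit(vec.fit_transform(X_tr), y_tr)

pred = model.predict(vec.transform(X_te))
print("MAE:", mean_absolute_error(y_te, pred))
```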
Meaghan McKenna; Hope Gerde; Nicolette Grasley-Boy – Reading and Writing: An Interdisciplinary Journal, 2025
This article describes the development and administration of the "Kindergarten-Second Grade (K-2) Writing Data-Based Decision Making (DBDM) Survey." The "K-2 Writing DBDM Survey" was developed to learn more about current DBDM practices specific to early writing. A total of 376 educational professionals (175 general education…
Descriptors: Writing Evaluation, Writing Instruction, Preschool Teachers, Kindergarten
Meghan Velez; Zackery Reed; Darryl Chamberlain; Cihan Aydiner – Thresholds in Education, 2025
In fewer than two years, generative artificial intelligence (GenAI) has transformed the educational experience for both students and faculty. Writing feedback and evaluation tools like MyEssayFeedback, EssayGrader, and Markr have been released with the promise that faculty will be able to focus more on teaching than simply grading. However, the…
Descriptors: Writing Across the Curriculum, Artificial Intelligence, Feedback (Response), Scores
Huawei, Shi; Aryadoust, Vahid – Education and Information Technologies, 2023
Automated writing evaluation (AWE) systems are developed based on interdisciplinary research and technological advances such as natural language processing, computer sciences, and latent semantic analysis. Despite a steady increase in research publications in this area, the results of AWE investigations are often mixed, and their validity may be…
Descriptors: Writing Evaluation, Writing Tests, Computer Assisted Testing, Automation
Wang, Jue; Engelhard, George; Combs, Trenton – Journal of Experimental Education, 2023
Unfolding models are frequently used to develop scales for measuring attitudes. Recently, unfolding models have been applied to examine rater severity and accuracy within the context of rater-mediated assessments. One of the problems in applying unfolding models to rater-mediated assessments is that the substantive interpretations of the latent…
Descriptors: Writing Evaluation, Scoring, Accuracy, Computational Linguistics
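For orientation, unfolding models are single-peaked: the response probability is highest where the person and item locations coincide, unlike cumulative IRT models. One common formulation, the hyperbolic cosine model of Andrich and Luo, is shown below; the article may use a different parameterization.

```latex
% Hyperbolic cosine unfolding model (illustrative parameterization):
% \theta_n = person location, \delta_i = item location,
% \rho_i = unit parameter controlling the width of the single-peaked curve.
P(X_{ni} = 1 \mid \theta_n) =
  \frac{\exp(\rho_i)}{\exp(\rho_i) + 2\cosh(\theta_n - \delta_i)}
```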
Xiong, Jiawei; Li, Feiming – Educational Measurement: Issues and Practice, 2023
Multidimensional scoring evaluates each constructed-response answer on more than one rating dimension or trait (such as lexicon, organization, and supporting ideas) rather than assigning a single holistic score, helping students distinguish between various dimensions of writing quality. In this work, we present a bilevel learning model for combining two…
Descriptors: Scoring, Models, Task Analysis, Learning Processes
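The bilevel model itself is not specified in this snippet; as a baseline illustration of multidimensional (trait) scoring, the sketch below fits one regressor per trait with scikit-learn's MultiOutputRegressor over placeholder features.

```python
import numpy as np
from sklearn.linear_model import Ridge
from sklearn.multioutput import MultiOutputRegressor

rng = np.random.default_rng(0)
X = rng.normal(size=(200, 50))         # placeholder essay features
Y = rng.integers(1, 5, size=(200, 3))  # invented traits: lexicon, organization, support

# One ridge model per trait; a bilevel model would share structure across traits.
scorer = MultiOutputRegressor(Ridge(alpha=1.0)).fit(X, Y)
print(scorer.predict(X[:2]))  # three trait predictions per essay
```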
Dhini, Bachriah Fatwa; Girsang, Abba Suganda; Sufandi, Unggul Utan; Kurniawati, Heny – Asian Association of Open Universities Journal, 2023
Purpose: The authors constructed an automatic essay scoring (AES) model for a discussion forum and compared its results with scores given by human evaluators. This research proposes essay scoring based on two parameters, semantic and keyword similarity, using a SentenceTransformers pre-trained model that can construct the…
Descriptors: Computer Assisted Testing, Scoring, Writing Evaluation, Essays
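The semantic-similarity half of such a system can be sketched with the sentence-transformers package; the checkpoint name and the score scaling below are assumptions, since the snippet does not give the paper's exact configuration.

```python
from sentence_transformers import SentenceTransformer, util

# all-MiniLM-L6-v2 is an assumed checkpoint, not necessarily the paper's model.
model = SentenceTransformer("all-MiniLM-L6-v2")

reference = "An instructor-written ideal answer for the discussion prompt."
answer = "A student's forum response to be scored."

emb = model.encode([reference, answer], convert_to_tensor=True)
semantic_sim = util.cos_sim(emb[0], emb[1]).item()  # cosine similarity in [-1, 1]

# Assumed scaling: clip negatives and map to a 0-100 rubric range.
print(round(max(semantic_sim, 0.0) * 100))
```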
Bolton, Tiffany; Stevenson, Brittney; Janes, William – Journal of Occupational Therapy, Schools & Early Intervention, 2023
Researchers conducted a cross-sectional secondary analysis of data from an ongoing non-randomized controlled trial to establish the reliability and internal consistency of the Just Write! (JW), a novel handwriting assessment for preschoolers written by the authors. Seventy-eight children from an area preschool participated in the…
Descriptors: Handwriting, Writing Skills, Writing Evaluation, Preschool Children
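Internal consistency is conventionally reported as Cronbach's alpha; the abstract does not name the exact statistic, so this numpy sketch shows the standard alpha computation over an invented examinees-by-items score matrix.

```python
import numpy as np

def cronbach_alpha(item_scores):
    """item_scores: (n_examinees, n_items) matrix of item-level scores."""
    item_scores = np.asarray(item_scores, dtype=float)
    k = item_scores.shape[1]
    item_vars = item_scores.var(axis=0, ddof=1)          # per-item variances
    total_var = item_scores.sum(axis=1).var(ddof=1)       # variance of total scores
    return (k / (k - 1)) * (1 - item_vars.sum() / total_var)

rng = np.random.default_rng(1)
scores = rng.integers(0, 4, size=(78, 10))  # 78 children x 10 invented items
print(cronbach_alpha(scores))
```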
Wang, Heqiao; Troia, Gary A. – Written Communication, 2023
The primary purpose of this study is to investigate the degree to which register knowledge, register-specific motivation, and diverse linguistic features predict human judgments of writing quality in three registers: narrative, informative, and opinion. The secondary purpose is to compare the evaluation metrics of register-partitioned…
Descriptors: Writing Evaluation, Essays, Elementary School Students, Grade 4
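"Register-partitioned versus combined" can be made concrete by fitting one model per register and one pooled model, then comparing cross-validated fit; the features, outcome, and metric below are placeholders, not the study's.

```python
import numpy as np
from sklearn.linear_model import Ridge
from sklearn.model_selection import cross_val_score

rng = np.random.default_rng(2)
X = rng.normal(size=(300, 20))          # placeholder linguistic features
y = 2 * X[:, 0] + rng.normal(size=300)  # placeholder quality ratings
register = rng.choice(["narrative", "informative", "opinion"], size=300)

# Combined model trained across all registers.
print("pooled R^2:", cross_val_score(Ridge(), X, y, cv=5, scoring="r2").mean())

# Register-partitioned models, one per register.
for reg in np.unique(register):
    mask = register == reg
    r2 = cross_val_score(Ridge(), X[mask], y[mask], cv=5, scoring="r2").mean()
    print(reg, "R^2:", r2)
```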
Implications of Bias in Automated Writing Quality Scores for Fair and Equitable Assessment Decisions
Matta, Michael; Mercer, Sterett H.; Keller-Margulis, Milena A. – School Psychology, 2023
Recent advances in automated writing evaluation have enabled educators to use automated writing quality scores to improve assessment feasibility. However, there has been limited investigation of bias for automated writing quality scores with students from diverse racial or ethnic backgrounds. The use of biased scores could contribute to…
Descriptors: Bias, Automation, Writing Evaluation, Scoring
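A standard way to probe such bias (not necessarily the authors' analysis) is a differential-prediction regression: regress a criterion such as human ratings on the automated score plus a group indicator, and check whether the group terms carry weight. A sketch with statsmodels and invented data:

```python
import numpy as np
import pandas as pd
import statsmodels.formula.api as smf

rng = np.random.default_rng(3)
n = 400
df = pd.DataFrame({
    "auto_score": rng.normal(70, 10, n),  # automated writing quality score
    "group": rng.choice(["A", "B"], n),   # demographic group label
})
df["human_score"] = 0.8 * df["auto_score"] + rng.normal(0, 5, n)

# If C(group) adds predictive information beyond auto_score, the automated
# score functions differently across groups, i.e., potential bias.
fit = smf.ols("human_score ~ auto_score + C(group)", data=df).fit()
print(fit.summary().tables[1])
```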
Paul Deane; Duanli Yan; Katherine Castellano; Yigal Attali; Michelle Lamar; Mo Zhang; Ian Blood; James V. Bruno; Chen Li; Wenju Cui; Chunyi Ruan; Colleen Appel; Kofi James; Rodolfo Long; Farah Qureshi – ETS Research Report Series, 2024
This paper presents a multidimensional model of variation in writing quality, register, and genre in student essays, trained and tested via confirmatory factor analysis of 1.37 million essay submissions to ETS' digital writing service, Criterion®. The model was also validated with several other corpora, which indicated that it provides a…
Descriptors: Writing (Composition), Essays, Models, Elementary School Students
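Confirmatory factor analysis at this scale is specialized work, but the basic model-specification and fitting step can be sketched with the semopy package; the two-factor structure and data below are invented, not the report's model.

```python
import numpy as np
import pandas as pd
from semopy import Model

# Synthetic data with an invented two-factor structure (quality, register).
rng = np.random.default_rng(4)
n = 500
f1, f2 = rng.normal(size=n), rng.normal(size=n)
noise = lambda: rng.normal(scale=0.5, size=n)
df = pd.DataFrame({
    "grammar": f1 + noise(), "vocabulary": f1 + noise(),
    "organization": f1 + noise(),
    "formality": f2 + noise(), "cohesion": f2 + noise(),
})

desc = """
quality =~ grammar + vocabulary + organization
register =~ formality + cohesion
"""
model = Model(desc)
model.fit(df)
print(model.inspect())  # parameter estimates: loadings, variances
```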
Almusharraf, Norah; Alotaibi, Hind – Technology, Knowledge and Learning, 2023
Evaluating written texts is believed to be a time-consuming process that can lack consistency and objectivity. Automated essay scoring (AES) can provide solutions to some of the limitations of human scoring. This research aimed to evaluate the performance of one AES system, Grammarly, in comparison to human raters. Both approaches' performances…
Descriptors: Writing Evaluation, Writing Tests, Essay Tests, Essays
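Agreement between an AES system and human raters is conventionally summarized with quadratic weighted kappa (QWK); whether this study reported QWK is not shown in the snippet. A minimal computation with scikit-learn over invented ratings:

```python
from sklearn.metrics import cohen_kappa_score

# Invented integer ratings on a 1-5 rubric for the same ten essays.
human = [3, 4, 2, 5, 3, 4, 1, 2, 4, 3]
aes = [3, 4, 3, 5, 2, 4, 2, 2, 4, 4]

qwk = cohen_kappa_score(human, aes, weights="quadratic")
print(f"QWK: {qwk:.3f}")
```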