Filter summary for this search (18 results):

Publication Date: In 2025 (1); Since 2024 (2); Since 2021, last 5 years (8); Since 2016, last 10 years (10); Since 2006, last 20 years (18)

Descriptor: Computer Assisted Testing (18); Writing Evaluation (18); Essays (9); Middle School Students (9); Scoring (8); Scores (7); Writing Achievement (7); Automation (6); Elementary School Students (6); Grade 8 (6); Writing Skills (6)

Author: Wilson, Joshua (4); Deane, Paul (2); Alexander, R. Curby (1); Andrada, Gilbert N. (1); Arseneault, Patrick (1); Azzeddine Boudouaia (1); Bennett, Randy E. (1); Bridglall, Beatrice L. (1); Burstein, Jill (1); Carla Wood (1); Chen, Dandan (1)

Publication Type: Reports - Research (13); Journal Articles (11); Dissertations/Theses -… (3); Collected Works - Proceedings (1); Numerical/Quantitative Data (1); Reports - Evaluative (1); Speeches/Meeting Papers (1)

Education Level: Middle Schools (18); Elementary Education (13); Junior High Schools (13); Secondary Education (13); Intermediate Grades (7); Grade 8 (6); Grade 5 (5); Grade 6 (4); Grade 7 (4); High Schools (4); Elementary Secondary Education (3)

Assessments and Surveys: National Assessment of… (1); Program for International… (1)
Shujun Liu; Azzeddine Boudouaia; Xinya Chen; Yan Li – Asia-Pacific Education Researcher, 2025
The application of Automated Writing Evaluation (AWE) has recently gained researchers' attention worldwide. However, the impact of AWE feedback on student writing, particularly in languages other than English, remains controversial. This study aimed to compare the impacts of Chinese AWE feedback and teacher feedback on Chinese writing revision,…
Descriptors: Foreign Countries, Middle School Students, Grade 7, Writing Evaluation
Carla Wood; Miguel Garcia-Salas; Christopher Schatschneider – Grantee Submission, 2023
Purpose: The aim of this study was to advance the analysis of written language transcripts by validating an automated scoring procedure using an automated open-access tool for calculating morphological complexity (MC) from written transcripts. Method: The MC of words in 146 written responses of students in fifth grade was assessed using two…
Descriptors: Automation, Computer Assisted Testing, Scoring, Computation
Latifi, Syed; Gierl, Mark – Language Testing, 2021
An automated essay scoring (AES) program is a software system that uses techniques from corpus and computational linguistics and machine learning to grade essays. In this study, we aimed to describe and evaluate particular language features of Coh-Metrix for a novel AES program that would score junior and senior high school students' essays from…
Descriptors: Writing Evaluation, Computer Assisted Testing, Scoring, Essays
Yi Gui – ProQuest LLC, 2024
This study explores using transfer learning in machine learning for natural language processing (NLP) to create generic automated essay scoring (AES) models, providing instant online scoring for statewide writing assessments in K-12 education. The goal is to develop an instant online scorer that is generalizable to any prompt, addressing the…
Descriptors: Writing Tests, Natural Language Processing, Writing Evaluation, Scoring
Chen, Dandan; Hebert, Michael; Wilson, Joshua – American Educational Research Journal, 2022
We used multivariate generalizability theory to examine the reliability of hand-scoring and automated essay scoring (AES) and to identify how these scoring methods could be used in conjunction to optimize writing assessment. Students (n = 113) included subsamples of struggling writers and non-struggling writers in Grades 3-5 drawn from a larger…
Descriptors: Reliability, Scoring, Essays, Automation
Myers, Matthew C.; Wilson, Joshua – International Journal of Artificial Intelligence in Education, 2023
This study evaluated the construct validity of six scoring traits of an automated writing evaluation (AWE) system called "MI Write." Persuasive essays (N = 100) written by students in grades 7 and 8 were randomized at the sentence-level using a script written with Python's NLTK module. Each persuasive essay was randomized 30 times (n =…
Descriptors: Construct Validity, Automation, Writing Evaluation, Algorithms
Gentry, Deborah J. – ProQuest LLC, 2022
The purpose of this ethnographic study was to examine the relationship between focused keyboarding instruction and the required standardized assessments mandated by Federal and State authorities. Initial mandates required that the first assessment to be administered online would be the Writing Assessment by the year 2014. The assessment given in…
Descriptors: Ethnography, Keyboarding (Data Entry), Instruction, Standardized Tests
Correnti, Richard; Matsumura, Lindsay Clare; Wang, Elaine; Litman, Diane; Rahimi, Zahra; Kisa, Zahid – Reading Research Quarterly, 2020
Despite the importance of analytic text-based writing, relatively little is known about how to teach this important skill. A persistent barrier to conducting research that would provide insight on best practices for teaching this form of writing is a lack of outcome measures that assess students' analytic text-based writing development and that…
Descriptors: Writing Evaluation, Writing Tests, Computer Assisted Testing, Scoring
Wilson, Joshua; Wen, Huijing – Elementary School Journal, 2022
This study investigated fourth and fifth graders' metacognitive knowledge about writing and its relationship to writing performance to help identify areas that might be leveraged when designing effective writing instruction. Students' metacognitive knowledge was probed using a 30-minute informative writing prompt requiring students to teach their…
Descriptors: Elementary School Students, Metacognition, Writing Attitudes, Writing (Composition)
Zhang, Mo; Bennett, Randy E.; Deane, Paul; van Rijn, Peter W. – Educational Measurement: Issues and Practice, 2019
This study compared gender groups on the processes used in writing essays in an online assessment. Middle-school students from four grades responded to essays in two persuasive subgenres, argumentation and policy recommendation. Writing processes were inferred from four indicators extracted from students' keystroke logs. In comparison to males, on…
Descriptors: Gender Differences, Essays, Computer Assisted Testing, Persuasive Discourse
Laurie, Robert; Bridglall, Beatrice L.; Arseneault, Patrick – SAGE Open, 2015
The effect of using a computer or paper and pencil on student writing scores on a provincial standardized writing assessment was studied. A sample of 302 francophone students wrote a short essay using a computer equipped with Microsoft Word with all of its correction functions enabled. One week later, the same students wrote a second short essay…
Descriptors: Writing Evaluation, Writing Tests, Computer Assisted Testing, Writing Achievement
Madnani, Nitin; Burstein, Jill; Sabatini, John; O'Reilly, Tenaha – Grantee Submission, 2013
We introduce a cognitive framework for measuring reading comprehension that includes the use of novel summary-writing tasks. We derive NLP features from the holistic rubric used to score the summaries written by students for such tasks and use them to design a preliminary, automated scoring system. Our results show that the automated approach…
Descriptors: Computer Assisted Testing, Scoring, Writing Evaluation, Reading Comprehension
Harris, Connie – ProQuest LLC, 2013
Despite the efforts of a number of national organizations focused on improving writing literacy, there has been little improvement in student writing skills over the past decade to keep pace with the growing demands of the workplace. Based on constructivist learning theory and the belief that students become better writers through continuous…
Descriptors: Middle School Students, Grade 8, Writing Instruction, Writing Evaluation
Deane, Paul – ETS Research Report Series, 2014
This paper explores automated methods for measuring features of student writing and determining their relationship to writing quality and other features of literacy, such as reading test scores. In particular, it uses the "e-rater"™ automatic essay scoring system to measure "product" features (measurable traits of the final…
Descriptors: Writing Processes, Writing Evaluation, Student Evaluation, Writing Skills
Wilson, Joshua; Olinghouse, Natalie G.; Andrada, Gilbert N. – Learning Disabilities: A Contemporary Journal, 2014
The current study examines data from students in grades 4-8 who participated in a statewide computer-based benchmark writing assessment that featured automated essay scoring and automated feedback. We examined whether the use of automated feedback was associated with gains in writing quality across revisions to an essay, and with transfer effects…
Descriptors: Writing Skills, Writing Ability, Feedback (Response), Computer Mediated Communication