Scott A. Crossley; Minkyung Kim; Quian Wan; Laura K. Allen; Rurik Tywoniw; Danielle S. McNamara – Grantee Submission, 2025
This study examines the potential to use non-expert, crowd-sourced raters to score essays by comparing expert raters' and crowd-sourced raters' assessments of writing quality. Expert raters and crowd-sourced raters scored 400 essays using a standardised holistic rubric and comparative judgement (pairwise ratings) scoring techniques, respectively.…
Descriptors: Writing Evaluation, Essays, Novices, Knowledge Level
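
The "comparative judgement (pairwise ratings)" technique named above can be made concrete with a small sketch: crowd-sourced raters only decide which of two essays reads better, and a Bradley-Terry model turns those win/loss records into scale scores that can then be compared against expert rubric scores. The implementation and essay IDs below are illustrative assumptions, not the authors' code.

    # Minimal Bradley-Terry scoring sketch for comparative-judgement data.
    # Hypothetical input: (winner_id, loser_id) pairwise decisions.
    from collections import defaultdict

    def bradley_terry(pairs, n_iters=200):
        wins = defaultdict(int)    # total wins per essay
        games = defaultdict(int)   # comparisons per unordered pair
        items = set()
        for w, l in pairs:
            wins[w] += 1
            games[frozenset((w, l))] += 1
            items.update((w, l))
        strength = {i: 1.0 for i in items}
        for _ in range(n_iters):   # MM updates (Hunter, 2004)
            new = {}
            for i in items:
                denom = sum(games[frozenset((i, j))] / (strength[i] + strength[j])
                            for j in items
                            if j != i and frozenset((i, j)) in games)
                new[i] = wins[i] / denom if denom else strength[i]
            total = sum(new.values())
            strength = {i: v * len(items) / total for i, v in new.items()}
        return strength            # higher = judged better more often

    judgements = [("e1", "e2"), ("e1", "e3"), ("e2", "e3"), ("e1", "e2")]
    print(bradley_terry(judgements))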

Yang Zhong; Mohamed Elaraby; Diane Litman; Ahmed Ashraf Butt; Muhsin Menekse – Grantee Submission, 2024
This paper introduces REFLECTSUMM, a novel summarization dataset specifically designed for summarizing students' reflective writing. The goal of REFLECTSUMM is to facilitate developing and evaluating novel summarization techniques tailored to real-world scenarios with little training data, with potential implications in the opinion summarization…
Descriptors: Documentation, Writing (Composition), Reflection, Metadata

Benjamin Motz; Harmony Jankowski; Jennifer Lopatin; Waverly Tseng; Tamara Tate – Grantee Submission, 2024
Platform-enabled research services can control, manage, and measure learner experiences within that platform. In this paper, we consider the need for research services that examine learner experiences "outside" the platform. For example, we describe an effort to conduct an experiment on peer assessment in a college writing course, where…
Descriptors: Educational Technology, Learning Management Systems, Electronic Learning, Peer Evaluation

Cynthia Puranik; Molly Duncan; Ying Guo – Grantee Submission, 2024
In the present study we examined the contributions of transcription and foundational oral language skills to written composition outcomes in a sample of kindergartners. Two hundred and eighty-two kindergarten students from 49 classrooms participated in this study. Children's writing-related skills were examined using various tasks. Latent…
Descriptors: Oral Language, Language Skills, Writing Skills, Beginning Writing

Zhang, Haoran; Litman, Diane – Grantee Submission, 2021
Human essay grading is a laborious task that can consume much time and effort. Automated Essay Scoring (AES) has thus been proposed as a fast and effective solution to the problem of grading student writing at scale. However, because AES typically uses supervised machine learning, a human-graded essay corpus is still required to train the AES…
Descriptors: Essays, Grading, Writing Evaluation, Computational Linguistics
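
Since the point above turns on AES's reliance on supervised machine learning, a minimal sketch of that setup may help: a regression model is fit to a human-graded essay corpus and then predicts scores for unseen essays. The pipeline below (scikit-learn, toy data) illustrates the generic approach, not this paper's system.

    # Minimal supervised AES sketch: learn from human-scored essays,
    # then score new essays automatically. Training data are toy examples.
    from sklearn.feature_extraction.text import TfidfVectorizer
    from sklearn.linear_model import Ridge
    from sklearn.pipeline import make_pipeline

    train_essays = ["The evidence strongly supports the claim ...",
                    "i think its good because ..."]
    train_scores = [5, 2]   # human holistic scores (the costly supervision)

    aes = make_pipeline(TfidfVectorizer(ngram_range=(1, 2)), Ridge(alpha=1.0))
    aes.fit(train_essays, train_scores)
    print(aes.predict(["A new essay to be scored automatically."]))
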
Kiana Hines; Carla Wood; Keisey Fumero – Grantee Submission, 2023
School-aged English Learners (ELs) are faced with the challenging task of acquiring a foreign language while simultaneously reading academically demanding literature. Therefore, the current research aimed to examine the relation between the rate of grammatical tense marking errors made by ELs and their performance on measures of reading…
Descriptors: English Language Learners, Grammar, Morphemes, Error Patterns

Oddis, Kyle; Burstein, Jill; McCaffrey, Daniel F.; Holtzman, Steven L. – Grantee Submission, 2022
Background: Researchers interested in quantitative measures of student "success" in writing cannot control completely for contextual factors which are local and site-based (i.e., in context of a specific instructor's writing classroom at a specific institution). (In)ability to control for curriculum in studies of student writing…
Descriptors: Writing Instruction, Writing Achievement, Curriculum Evaluation, College Instruction

Danielle S. McNamara; Panayiota Kendeou – Grantee Submission, 2022
We propose a framework designed to guide the development of automated writing practice and formative evaluation and feedback for young children (K-5th grade) -- the early Automated Writing Evaluation (early-AWE) Framework. Early-AWE is grounded in the fundamental assumption that AWE is needed for young developing readers, but must incorporate…
Descriptors: Writing Evaluation, Automation, Formative Evaluation, Feedback (Response)

Implications of Bias in Automated Writing Quality Scores for Fair and Equitable Assessment Decisions
Michael Matta; Sterett H. Mercer; Milena A. Keller-Margulis – Grantee Submission, 2023
Recent advances in automated writing evaluation have enabled educators to use automated writing quality scores to improve assessment feasibility. However, there has been limited investigation of bias for automated writing quality scores with students from diverse racial or ethnic backgrounds. The use of biased scores could contribute to…
Descriptors: Bias, Automation, Writing Evaluation, Scoring
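
One standard way to probe the bias this abstract raises is a differential-prediction analysis: regress the human criterion score on the automated score, group membership, and their interaction; significant group terms mean the automated score predicts the criterion differently across groups. The data and column names below are hypothetical, and this is not the authors' analysis.

    # Differential-prediction sketch with statsmodels (hypothetical data).
    import pandas as pd
    import statsmodels.formula.api as smf

    df = pd.DataFrame({
        "human": [4, 3, 5, 2, 4, 3],              # human criterion scores
        "auto":  [3.8, 3.1, 4.6, 2.4, 3.5, 3.2],  # automated quality scores
        "group": ["A", "A", "A", "B", "B", "B"],  # demographic group label
    })

    # Significant C(group) or auto:C(group) terms indicate the automated
    # score under- or over-predicts the human score for some groups.
    model = smf.ols("human ~ auto * C(group)", data=df).fit()
    print(model.summary())
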
McCaffrey, Daniel; Holtzman, Steven; Burstein, Jill; Beigman Klebanov, Beata – Grantee Submission, 2021
Low retention rates in college are a policy concern for US postsecondary institutions, and writing is a critical competency for college (Graham, 2019). This paper describes an exploratory writing analytics study at six 4-year universities aimed at gaining insights about the relationship between college retention and writing. Findings suggest that…
Descriptors: College Students, School Holding Power, Writing Ability, Writing Evaluation

Reese Butterfuss; Rod D. Roscoe; Laura K. Allen; Kathryn S. McCarthy; Danielle S. McNamara – Grantee Submission, 2022
The present study examined the extent to which adaptive feedback and just-in-time writing strategy instruction improved the quality of high school students' persuasive essays in the context of the Writing Pal (W-Pal). W-Pal is a technology-based writing tool that integrates automated writing evaluation into an intelligent tutoring system. Students…
Descriptors: High School Students, Writing Evaluation, Writing Instruction, Feedback (Response)

Andrew Potter; Mitchell Shortt; Maria Goldshtein; Rod D. Roscoe – Grantee Submission, 2025
Broadly defined, academic language (AL) is a set of lexical-grammatical norms and registers commonly used in educational and academic discourse. Mastery of academic language in writing is an important aspect of writing instruction and assessment. The purpose of this study was to use Natural Language Processing (NLP) tools to examine the extent to…
Descriptors: Academic Language, Natural Language Processing, Grammar, Vocabulary Skills
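
As a rough illustration of NLP-based academic-language measurement, the sketch below computes two simple lexical-grammatical features: coverage of an academic word list and a suffix-based nominalization ratio. The word list and suffix heuristic are stand-ins chosen for the example, not the tools used in this study.

    # Toy academic-language feature extraction (illustrative lists only).
    import re

    ACADEMIC_WORDS = {"analyze", "concept", "data", "derive", "evident", "method"}
    NOMINAL_SUFFIXES = ("tion", "ment", "ness", "ity")  # crude nominalization cue

    def academic_language_features(text):
        tokens = re.findall(r"[a-z]+", text.lower())
        n = len(tokens) or 1
        return {
            "academic_ratio": sum(t in ACADEMIC_WORDS for t in tokens) / n,
            "nominalization_ratio": sum(t.endswith(NOMINAL_SUFFIXES) for t in tokens) / n,
        }

    print(academic_language_features(
        "The method yields evident improvement in measurement."))
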
Keller-Margulis, Milena A.; Mercer, Sterett H.; Matta, Michael – Grantee Submission, 2021
Existing approaches to measuring writing performance are insufficient in terms of both technical adequacy and feasibility for use as a screening measure. This study examined the validity and diagnostic accuracy of several approaches to automated text evaluation as well as written expression curriculum-based measurement (WE-CBM) to determine…
Descriptors: Writing Evaluation, Validity, Automation, Curriculum Based Assessment
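
The diagnostic-accuracy question here reduces to simple arithmetic: if students scoring below a cutoff are flagged for support, sensitivity is the share of truly at-risk students who get flagged, and specificity is the share of not-at-risk students who do not. The toy scores, risk labels, and cutoff below are hypothetical.

    # Screening diagnostic-accuracy sketch on toy data.
    from sklearn.metrics import roc_auc_score

    at_risk = [1, 1, 0, 0, 1, 0]                 # 1 = truly needs support
    score   = [1.2, 2.0, 4.5, 3.9, 2.8, 4.1]     # automated writing score
    cutoff  = 3.0                                # flag scores below this

    flagged = [s < cutoff for s in score]
    tp = sum(f and r for f, r in zip(flagged, at_risk))
    fn = sum(not f and r for f, r in zip(flagged, at_risk))
    tn = sum(not f and not r for f, r in zip(flagged, at_risk))
    fp = sum(f and not r for f, r in zip(flagged, at_risk))

    print("sensitivity:", tp / (tp + fn))  # at-risk students correctly flagged
    print("specificity:", tn / (tn + fp))  # not-at-risk correctly not flagged
    # AUC summarizes accuracy over all cutoffs; lower score = higher risk,
    # so negate the score:
    print("AUC:", roc_auc_score(at_risk, [-s for s in score]))
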
McCaffrey, Daniel F.; Zhang, Mo; Burstein, Jill – Grantee Submission, 2022
Background: This exploratory writing analytics study uses argumentative writing samples from two performance contexts--standardized writing assessments and university English course writing assignments--to compare: (1) linguistic features in argumentative writing; and (2) relationships between linguistic characteristics and academic performance…
Descriptors: Persuasive Discourse, Academic Language, Writing (Composition), Academic Achievement

Tong Li; Sarah D. Creer; Tracy Arner; Rod D. Roscoe; Laura K. Allen; Danielle S. McNamara – Grantee Submission, 2022
Automated writing evaluation (AWE) tools can facilitate teachers' analysis of and feedback on students' writing. However, increasing evidence indicates that writing instructors experience challenges in implementing AWE tools successfully. For this reason, our development of the Writing Analytics Tool (WAT) has employed a participatory approach…
Descriptors: Automation, Writing Evaluation, Learning Analytics, Participatory Research