Publication Date
In 2025: 3
Since 2024: 5
Since 2021 (last 5 years): 27
Since 2016 (last 10 years): 62
Since 2006 (last 20 years): 87
Source
Grantee Submission: 87
Publication Type
Reports - Research: 83
Speeches/Meeting Papers: 34
Journal Articles: 25
Tests/Questionnaires: 10
Reports - Descriptive: 3
Information Analyses: 1
Reports - Evaluative: 1
Audience
Researchers: 1
Teachers: 1
Location
California: 4
Arizona (Phoenix): 3
Louisiana: 2
Arizona: 1
California (Long Beach): 1
Germany: 1
Michigan: 1
Mississippi: 1
Missouri (Saint Louis): 1
Texas: 1
Wisconsin (Madison): 1
Assessments and Surveys
Gates MacGinitie Reading Tests: 10
SAT (College Admission Test): 3
Test of English as a Foreign…: 2
Writing Apprehension Test: 2
ACT Assessment: 1
National Assessment of…: 1
Wechsler Individual…: 1
Danielle S. McNamara; Micah Watanabe; Linh Huynh; Kathryn S. McCarthy; Laura K. Allen; Joseph P. Magliano – Grantee Submission, 2023
Writing an integrated essay based on multiple documents requires students both to comprehend the documents and to integrate them into a coherent essay. In the current study, we examined the effects of summarization as a potential reading strategy to enhance participants' multiple-document comprehension and integrated essay writing.…
Descriptors: Reading Strategies, Reading Comprehension, Essays, Scores
Scott A. Crossley; Minkyung Kim; Qian Wan; Laura K. Allen; Rurik Tywoniw; Danielle S. McNamara – Grantee Submission, 2025
This study examines the potential to use non-expert, crowd-sourced raters to score essays by comparing expert raters' and crowd-sourced raters' assessments of writing quality. Expert raters and crowd-sourced raters scored 400 essays using a standardised holistic rubric and comparative judgement (pairwise ratings) scoring techniques, respectively.…
Descriptors: Writing Evaluation, Essays, Novices, Knowledge Level
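The comparative judgement (pairwise ratings) technique this abstract describes can be made concrete with a small sketch: pairwise "which essay is better" judgments are converted into a latent quality scale, here via a simple Bradley-Terry estimate. The essay IDs and comparison data below are invented for illustration, not taken from the study.

```python
from collections import defaultdict

def bradley_terry(comparisons, n_iter=200):
    """Estimate essay quality scores from pairwise judgments.

    comparisons: iterable of (winner_id, loser_id) pairs, one per judgment.
    Returns a dict mapping essay id -> estimated strength (higher = better).
    """
    wins = defaultdict(float)
    games = defaultdict(float)  # games[(i, j)] = number of i-vs-j matchups
    items = set()
    for w, l in comparisons:
        wins[w] += 1
        games[(w, l)] += 1
        games[(l, w)] += 1
        items |= {w, l}

    # Start all essays at equal strength, then apply the standard
    # minorization-maximization update for the Bradley-Terry model.
    strength = {i: 1.0 for i in items}
    for _ in range(n_iter):
        new = {}
        for i in items:
            denom = sum(
                games[(i, j)] / (strength[i] + strength[j])
                for j in items
                if j != i and games[(i, j)] and (strength[i] + strength[j]) > 0
            )
            new[i] = wins[i] / denom if denom else strength[i]
        # Normalize so strengths stay on a fixed overall scale.
        total = sum(new.values()) or 1.0
        strength = {i: len(items) * v / total for i, v in new.items()}
    return strength

# Hypothetical judgments: essay A beat B twice, B beat C twice, A beat C once.
scores = bradley_terry(
    [("A", "B"), ("A", "B"), ("B", "C"), ("B", "C"), ("A", "C")]
)
```

With these toy judgments the recovered scale orders the essays A > B > C, matching the pairwise wins; real comparative-judgement studies typically need many judgments per essay for the scale to stabilize.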
Stefan Ruseti; Ionut Paraschiv; Mihai Dascalu; Danielle S. McNamara – Grantee Submission, 2024
Automated Essay Scoring (AES) is a well-studied problem in Natural Language Processing applied in education. Solutions vary from handcrafted linguistic features to large Transformer-based models, implying a significant effort in feature extraction and model implementation. We introduce a novel Automated Machine Learning (AutoML) pipeline…
Descriptors: Computer Assisted Testing, Scoring, Automation, Essays
Anna E. Mason; Jason L. G. Braasch; Daphne Greenberg; Erica D. Kessler; Laura K. Allen; Danielle S. McNamara – Grantee Submission, 2022
This study examined the extent to which prior beliefs and reading instructions impacted elements of a reader's mental representation of multiple texts. College students' beliefs about childhood vaccinations were assessed before reading two anti-vaccine and two pro-vaccine texts. Participants in the experimental condition read for the purpose of…
Descriptors: College Students, Beliefs, Immunization Programs, Vocabulary
Wilson, Joshua; Huang, Yue; Palermo, Corey; Beard, Gaysha; MacArthur, Charles A. – Grantee Submission, 2021
This study examined a naturalistic, districtwide implementation of an automated writing evaluation (AWE) software program called "MI Write" in elementary schools. We specifically examined the degree to which aspects of MI Write were implemented, teacher and student attitudes towards MI Write, and whether MI Write usage along with other…
Descriptors: Automation, Writing Evaluation, Feedback (Response), Computer Software
Zhang, Haoran; Litman, Diane – Grantee Submission, 2021
Human essay grading is a laborious task that can consume much time and effort. Automated Essay Scoring (AES) has thus been proposed as a fast and effective solution to the problem of grading student writing at scale. However, because AES typically uses supervised machine learning, a human-graded essay corpus is still required to train the AES…
Descriptors: Essays, Grading, Writing Evaluation, Computational Linguistics
Implications of Bias in Automated Writing Quality Scores for Fair and Equitable Assessment Decisions
Michael Matta; Sterett H. Mercer; Milena A. Keller-Margulis – Grantee Submission, 2023
Recent advances in automated writing evaluation have enabled educators to use automated writing quality scores to improve assessment feasibility. However, there has been limited investigation of bias for automated writing quality scores with students from diverse racial or ethnic backgrounds. The use of biased scores could contribute to…
Descriptors: Bias, Automation, Writing Evaluation, Scoring
Reese Butterfuss; Rod D. Roscoe; Laura K. Allen; Kathryn S. McCarthy; Danielle S. McNamara – Grantee Submission, 2022
The present study examined the extent to which adaptive feedback and just-in-time writing strategy instruction improved the quality of high school students' persuasive essays in the context of the Writing Pal (W-Pal). W-Pal is a technology-based writing tool that integrates automated writing evaluation into an intelligent tutoring system. Students…
Descriptors: High School Students, Writing Evaluation, Writing Instruction, Feedback (Response)
Kathryn S. McCarthy; Eleanor F. Yan; Laura K. Allen; Allison N. Sonia; Joseph P. Magliano; Danielle S. McNamara – Grantee Submission, 2022
Few studies have explored how general skills in both reading and writing influence performance on integrated, source-based writing. The goal of the present study was to consider the relative contributions of reading and writing ability to multiple-document integrative reading and writing tasks. Students in the U.S. (n=94) completed two tasks in…
Descriptors: Individual Differences, Reading Skills, Writing Skills, Reading Strategies
Andrew Potter; Mitchell Shortt; Maria Goldshtein; Rod D. Roscoe – Grantee Submission, 2025
Broadly defined, academic language (AL) is a set of lexical-grammatical norms and registers commonly used in educational and academic discourse. Mastery of academic language in writing is an important aspect of writing instruction and assessment. The purpose of this study was to use Natural Language Processing (NLP) tools to examine the extent to…
Descriptors: Academic Language, Natural Language Processing, Grammar, Vocabulary Skills
Allen, Laura Kristen; Magliano, Joseph P.; McCarthy, Kathryn S.; Sonia, Allison N.; Creer, Sarah D.; McNamara, Danielle S. – Grantee Submission, 2021
The current study examined the extent to which the cohesion detected in readers' constructed responses to multiple documents was predictive of persuasive, source-based essay quality. Participants (N=95) completed multiple-documents reading tasks wherein they were prompted to think-aloud, self-explain, or evaluate the sources while reading a set of…
Descriptors: Reading Comprehension, Connected Discourse, Reader Response, Natural Language Processing
Zhang, Haoran; Litman, Diane – Grantee Submission, 2018
This paper presents an investigation of using a co-attention based neural network for source-dependent essay scoring. We use a co-attention mechanism to help the model learn the importance of each part of the essay more accurately. Also, this paper shows that the co-attention based neural network model provides reliable score prediction of…
Descriptors: Essays, Scoring, Automation, Artificial Intelligence
Wan, Qian; Crossley, Scott; Allen, Laura; McNamara, Danielle – Grantee Submission, 2020
In this paper, we extracted content-based and structure-based features of text to predict human annotations for claims and nonclaims in argumentative essays. We compared Logistic Regression, Bernoulli Naive Bayes, Gaussian Naive Bayes, Linear Support Vector Classification, Random Forest, and Neural Networks to train classification models. Random…
Descriptors: Persuasive Discourse, Essays, Writing Evaluation, Natural Language Processing
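One of the classifiers this abstract compares, Bernoulli Naive Bayes, is simple enough to sketch from scratch: each sentence becomes a set of word-presence features, and the model scores the "claim" and "nonclaim" classes by their log-probabilities. The toy sentences and feature choice below are invented for illustration; the study's actual content- and structure-based features are not shown here.

```python
import math
from collections import defaultdict

def tokenize(text):
    # Word-presence (Bernoulli) features: each word counts at most once.
    return set(text.lower().split())

class BernoulliNB:
    def __init__(self, alpha=1.0):
        self.alpha = alpha  # Laplace smoothing to avoid zero probabilities

    def fit(self, texts, labels):
        docs = [tokenize(t) for t in texts]
        self.classes = sorted(set(labels))
        self.vocab = set().union(*docs)
        self.prior, self.cond = {}, {}
        for c in self.classes:
            c_docs = [d for d, y in zip(docs, labels) if y == c]
            self.prior[c] = math.log(len(c_docs) / len(docs))
            # P(word present | class), smoothed for unseen words.
            self.cond[c] = {
                w: (sum(w in d for d in c_docs) + self.alpha)
                   / (len(c_docs) + 2 * self.alpha)
                for w in self.vocab
            }
        return self

    def predict(self, text):
        tokens = tokenize(text)
        def log_score(c):
            s = self.prior[c]
            for w in self.vocab:  # absent words also carry evidence
                p = self.cond[c][w]
                s += math.log(p if w in tokens else 1.0 - p)
            return s
        return max(self.classes, key=log_score)

# Invented toy data: short claim vs. non-claim sentences.
claims = ["schools should ban phones",
          "students must write daily",
          "homework should be optional"]
nonclaims = ["the study had 400 participants",
             "data were collected in 2020",
             "the essay was two pages"]
nb = BernoulliNB().fit(claims + nonclaims,
                       ["claim"] * 3 + ["nonclaim"] * 3)
```

On held-out toy sentences, the model leans on markers like "should" to label claims; studies such as the one above compare this baseline against stronger learners (e.g., Random Forests) on much richer feature sets.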
Michael W. Asher; Judith M. Harackiewicz; Patrick N. Beymer; Cameron A. Hecht; Liana B. Lamont; Nicole M. Else-Quest; Stacy J. Priniski; Dustin B. Thoman; Janet S. Hyde; Jessi L. Smith – Grantee Submission, 2023
We tested the long-term effects of a utility-value intervention administered in a gateway chemistry course, with the goal of promoting persistence and diversity in STEM. In a randomized controlled trial (N = 2,505), students wrote three essays about course content and its personal relevance or three control essays. The intervention significantly…
Descriptors: Intervention, Academic Persistence, Diversity, STEM Education
Puranik, Cynthia; Duncan, Molly; Li, Hongli; Guo, Ying – Grantee Submission, 2020
Despite increasing pressure for children to learn to write at younger ages, there are many unanswered questions about composition skills in early elementary school. The goal of this research was to examine the dimensionality of composition skills in kindergarten children, thereby adding to current knowledge about the measurement of young…
Descriptors: Kindergarten, Young Children, Writing (Composition), Writing Skills