Showing all 13 results
Peer reviewed
Zhang, Haoran; Litman, Diane – Grantee Submission, 2021
Human essay grading is a laborious task that can consume much time and effort. Automated Essay Scoring (AES) has thus been proposed as a fast and effective solution to the problem of grading student writing at scale. However, because AES typically uses supervised machine learning, a human-graded essay corpus is still required to train the AES…
Descriptors: Essays, Grading, Writing Evaluation, Computational Linguistics
Peer reviewed
Andrew Potter; Mitchell Shortt; Maria Goldshtein; Rod D. Roscoe – Grantee Submission, 2025
Broadly defined, academic language (AL) is a set of lexical-grammatical norms and registers commonly used in educational and academic discourse. Mastery of academic language in writing is an important aspect of writing instruction and assessment. The purpose of this study was to use Natural Language Processing (NLP) tools to examine the extent to…
Descriptors: Academic Language, Natural Language Processing, Grammar, Vocabulary Skills
Carla Wood; Miguel Garcia-Salas; Christopher Schatschneider – Grantee Submission, 2023
Purpose: The aim of this study was to advance the analysis of written language transcripts by validating an automated scoring procedure using an automated open-access tool for calculating morphological complexity (MC) from written transcripts. Method: The MC of words in 146 written responses of students in fifth grade was assessed using two…
Descriptors: Automation, Computer Assisted Testing, Scoring, Computation
Wesley Morris; Scott Crossley; Langdon Holmes; Chaohua Ou; Danielle McNamara; Mihai Dascalu – Grantee Submission, 2023
As intelligent textbooks become more ubiquitous in classrooms and educational settings, the need arises to automatically provide formative feedback to written responses provided by students in response to readings. This study develops models to automatically provide feedback to student summaries written at the end of intelligent textbook sections.…
Descriptors: Textbooks, Electronic Publishing, Feedback (Response), Formative Evaluation
Sonia, Allison N.; Magliano, Joseph P.; McCarthy, Kathryn S.; Creer, Sarah D.; McNamara, Danielle S.; Allen, Laura K. – Grantee Submission, 2022
The constructed responses individuals generate while reading can provide insights into their coherence-building processes. The current study examined how the cohesion of constructed responses relates to performance on an integrated writing task. Participants (N = 95) completed a multiple document reading task wherein they were prompted to think…
Descriptors: Natural Language Processing, Connected Discourse, Reading Processes, Writing Skills
Litman, Diane; Zhang, Haoran; Correnti, Richard; Matsumura, Lindsay Clare; Wang, Elaine – Grantee Submission, 2021
Automated Essay Scoring (AES) can reliably grade essays at scale and reduce human effort in both classroom and commercial settings. There are currently three dominant supervised learning paradigms for building AES models: feature-based, neural, and hybrid. While feature-based models are more explainable, neural network models often outperform…
Descriptors: Essays, Writing Evaluation, Models, Accuracy
Crossley, Scott; Wan, Qian; Allen, Laura; McNamara, Danielle – Grantee Submission, 2021
Synthesis writing is widely taught across domains and serves as an important means of assessing writing ability, text comprehension, and content learning. Synthesis writing differs from other types of writing in terms of both cognitive and task demands because it requires writers to integrate information across source materials. However, little is…
Descriptors: Writing Skills, Cognitive Processes, Essays, Cues
Alissa Patricia Wolters; Young-suk Grace Kim – Grantee Submission, 2023
We investigated spelling errors in English and Spanish essays by Spanish-English dual language learners in Grades 1, 2, and 3 (N = 278; 51% female) enrolled in either English immersion or English-Spanish dual immersion programs. We examined what types of spelling errors students made, whether they made spelling errors that could be due to…
Descriptors: Spelling, Spanish, English (Second Language), Second Language Learning
Peer reviewed
Allen, Laura K.; Likens, Aaron D.; McNamara, Danielle S. – Grantee Submission, 2017
The current study examined the degree to which the quality and characteristics of students' essays could be modeled through dynamic natural language processing analyses. Undergraduate students (n = 131) wrote timed, persuasive essays in response to an argumentative writing prompt. Recurrent patterns of the words in the essays were then analyzed…
Descriptors: Writing Evaluation, Essays, Persuasive Discourse, Natural Language Processing
Peer reviewed
Lee, Hee-Sun; McNamara, Danielle; Bracey, Zoë Buck; Wilson, Christopher; Osborne, Jonathan; Haudek, Kevin C.; Liu, Ou Lydia; Pallant, Amy; Gerard, Libby; Linn, Marcia C.; Sherin, Bruce – Grantee Submission, 2019
Rapid advancements in computing have enabled automatic analyses of written texts created in educational settings. The purpose of this symposium is to survey several applications of computerized text analyses used in the research and development of productive learning environments. Four featured research projects have developed or been working on:…
Descriptors: Computational Linguistics, Written Language, Computer Assisted Testing, Scoring
Crossley, Scott A.; Kyle, Kristopher; McNamara, Danielle S. – Grantee Submission, 2015
This study investigates the relative efficacy of using linguistic micro-features, the aggregation of such features, and a combination of micro-features and aggregated features in developing automatic essay scoring (AES) models. Although the use of aggregated features is widespread in AES systems (e.g., e-rater; IntelliMetric), very little…
Descriptors: Essays, Scoring, Feedback (Response), Writing Evaluation
Crossley, Scott A.; Kyle, Kristopher; Allen, Laura K.; Guo, Liang; McNamara, Danielle S. – Grantee Submission, 2014
This study investigates the potential for linguistic microfeatures related to length, complexity, cohesion, relevance, topic, and rhetorical style to predict L2 writing proficiency. Computational indices were calculated by two automated text analysis tools (Coh-Metrix and the Writing Assessment Tool) and used to predict human essay ratings in a…
Descriptors: Computational Linguistics, Essays, Scoring, Writing Evaluation
Peer reviewed
Crossley, Scott A.; Allen, Laura K.; McNamara, Danielle S. – Grantee Submission, 2014
The study applied the Multi-Dimensional analysis used by Biber (1988) to examine the functional parameters of essays. Co-occurrence patterns were identified within an essay corpus (n=1529) using linguistic indices provided by Coh-Metrix. These patterns were used to identify essay groups that shared features based upon situational parameters.…
Descriptors: Essays, Writing (Composition), Computational Linguistics, Cues