Crossley, Scott A.; Kim, Minkyung; Allen, Laura K.; McNamara, Danielle S. – Grantee Submission, 2019
Summarization is an effective strategy to promote and enhance learning and deep comprehension of texts. However, summarization is seldom implemented by teachers in classrooms because the manual evaluation of students' summaries requires time and effort. This problem has led to the development of automated models of summarization quality. However,…
Descriptors: Automation, Writing Evaluation, Natural Language Processing, Artificial Intelligence
Roscoe, Rod D.; Allen, Laura K.; Johnson, Adam C.; McNamara, Danielle S. – Grantee Submission, 2018
This study evaluates high school students' perceptions of automated writing feedback, and the influence of these perceptions on revising, as a function of varying modes of computer-based writing instruction. Findings indicate that students' perceptions of automated feedback accuracy, ease of use, relevance, and understandability were favorable.…
Descriptors: High School Students, Student Attitudes, Writing Evaluation, Feedback (Response)
Allen, Laura K.; Crossley, Scott A.; McNamara, Danielle S. – Grantee Submission, 2015
We investigated linguistic factors that relate to misalignment between students' and teachers' ratings of essay quality. Students (n = 126) wrote essays and rated the quality of their work. Teachers then provided their own ratings of the essays. Results revealed that students who were less accurate in their self-assessments produced essays that…
Descriptors: Essays, Scores, Natural Language Processing, Interrater Reliability
Crossley, Scott A.; Kyle, Kristopher; Allen, Laura K.; Guo, Liang; McNamara, Danielle S. – Grantee Submission, 2014
This study investigates the potential for linguistic microfeatures related to length, complexity, cohesion, relevance, topic, and rhetorical style to predict L2 writing proficiency. Computational indices were calculated by two automated text analysis tools (Coh-Metrix and the Writing Assessment Tool) and used to predict human essay ratings in a…
Descriptors: Computational Linguistics, Essays, Scoring, Writing Evaluation
Crossley, Scott A.; Allen, Laura K.; Snow, Erica L.; McNamara, Danielle S. – Journal of Educational Data Mining, 2016
This study investigates a novel approach to automatically assessing essay quality that combines natural language processing approaches that assess text features with approaches that assess individual differences in writers such as demographic information, standardized test scores, and survey results. The results demonstrate that combining text…
Descriptors: Essays, Scoring, Writing Evaluation, Natural Language Processing
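The approach described above can be sketched in a minimal, hypothetical form: text-derived features are concatenated with individual-difference features (e.g., a standardized test score) and fed to a single regression model predicting human essay ratings. All data values and feature names below are illustrative, not drawn from the study.

```python
# Hypothetical sketch of combining text features with writer attributes
# to predict essay quality. Data and feature names are invented for
# illustration only.
import numpy as np
from sklearn.linear_model import LinearRegression

# Toy text features per essay (e.g., word count, a cohesion index).
text_features = np.array([
    [250, 0.42],
    [310, 0.55],
    [180, 0.30],
    [400, 0.61],
])

# Toy individual-difference feature (e.g., a standardized test score).
writer_features = np.array([[520], [610], [480], [650]])

# Combine the two feature sets by simple column-wise concatenation.
X = np.hstack([text_features, writer_features])
y = np.array([3.0, 4.0, 2.5, 4.5])  # human essay ratings (toy values)

# Fit a single model on the combined feature matrix.
model = LinearRegression().fit(X, y)
print(X.shape)  # combined matrix: 4 essays, 3 features
```

Real studies of this kind evaluate such models with held-out data and cross-validation rather than fitting on four toy rows; the sketch only shows the feature-combination step.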
Crossley, Scott A.; Allen, Laura K.; McNamara, Danielle S. – Grantee Submission, 2014
The study applied the Multi-Dimensional analysis used by Biber (1988) to examine the functional parameters of essays. Co-occurrence patterns were identified within an essay corpus (n = 1,529) using linguistic indices provided by Coh-Metrix. These patterns were used to identify essay groups that shared features based upon situational parameters.…
Descriptors: Essays, Writing (Composition), Computational Linguistics, Cues