Crossley, Scott A.; Kim, Minkyung; Allen, Laura K.; McNamara, Danielle S. – Grantee Submission, 2019
Summarization is an effective strategy to promote and enhance learning and deep comprehension of texts. However, summarization is seldom implemented by teachers in classrooms because the manual evaluation of students' summaries requires time and effort. This problem has led to the development of automated models of summarization quality. However,…
Descriptors: Automation, Writing Evaluation, Natural Language Processing, Artificial Intelligence
Botarleanu, Robert-Mihai; Dascalu, Mihai; Allen, Laura K.; Crossley, Scott Andrew; McNamara, Danielle S. – Grantee Submission, 2021
Text summarization is an effective reading comprehension strategy. However, summary evaluation is complex and must account for various factors including the summary and the reference text. This study examines a corpus of approximately 3,000 summaries based on 87 reference texts, with each summary being manually scored on a 4-point Likert scale.…
Descriptors: Computer Assisted Testing, Scoring, Natural Language Processing, Computer Software
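The entry above concerns automated scoring of summaries against a reference text. As a point of reference for what the simplest such comparison looks like, here is a toy lexical-overlap baseline (Jaccard similarity over unique words); this is an illustrative sketch only, not the model evaluated in the study:

```python
# Toy baseline for comparing a summary to its reference text:
# Jaccard similarity over the sets of unique lowercase words.
def jaccard_overlap(summary: str, reference: str) -> float:
    """Share of unique words common to both texts, from 0.0 to 1.0."""
    s = set(summary.lower().split())
    r = set(reference.lower().split())
    return len(s & r) / len(s | r) if s | r else 0.0

score = jaccard_overlap(
    "the water cycle moves water between earth and sky",
    "the water cycle describes how water moves between the earth and the sky",
)
```

Real summary-evaluation models add many further signals (content coverage, cohesion, paraphrase detection), but a surface-overlap score like this is the usual starting baseline they are measured against.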
Allen, Laura K.; Mills, Caitlin; Perret, Cecile; McNamara, Danielle S. – Grantee Submission, 2019
This study examines the extent to which instructions to self-explain vs. "other"-explain a text lead readers to produce different forms of explanations. Natural language processing was used to examine the content and characteristics of the explanations produced as a function of instruction condition. Undergraduate students (n = 146)…
Descriptors: Language Processing, Science Instruction, Computational Linguistics, Teaching Methods
Roscoe, Rod D.; Allen, Laura K.; Johnson, Adam C.; McNamara, Danielle S. – Grantee Submission, 2018
This study evaluates high school students' perceptions of automated writing feedback, and the influence of these perceptions on revising, as a function of varying modes of computer-based writing instruction. Findings indicate that students' perceptions of automated feedback accuracy, ease of use, relevance, and understandability were favorable.…
Descriptors: High School Students, Student Attitudes, Writing Evaluation, Feedback (Response)
Allen, Laura K.; Crossley, Scott A.; McNamara, Danielle S. – Grantee Submission, 2015
We investigated linguistic factors that relate to misalignment between students' and teachers' ratings of essay quality. Students (n = 126) wrote essays and rated the quality of their work. Teachers then provided their own ratings of the essays. Results revealed that students who were less accurate in their self-assessments produced essays that…
Descriptors: Essays, Scores, Natural Language Processing, Interrater Reliability
Am I Wrong or Am I Right? Gains in Monitoring Accuracy in an Intelligent Tutoring System for Writing
Allen, Laura K.; Crossley, Scott A.; Snow, Erica L.; Jacovina, Matthew E.; Perret, Cecile; McNamara, Danielle S. – Grantee Submission, 2015
We investigated whether students increased their self-assessment accuracy and essay scores over the course of an intervention with a writing strategy intelligent tutoring system, Writing Pal (W-Pal). Results indicate that students were able to learn from W-Pal, and that the combination of strategy instruction, game-based practice, and holistic…
Descriptors: Intelligent Tutoring Systems, Self Evaluation (Individuals), Accuracy, Essays
Crossley, Scott A.; Kyle, Kristopher; Allen, Laura K.; Guo, Liang; McNamara, Danielle S. – Grantee Submission, 2014
This study investigates the potential for linguistic microfeatures related to length, complexity, cohesion, relevance, topic, and rhetorical style to predict L2 writing proficiency. Computational indices were calculated by two automated text analysis tools (Coh-Metrix and the Writing Assessment Tool) and used to predict human essay ratings in a…
Descriptors: Computational Linguistics, Essays, Scoring, Writing Evaluation
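The entry above describes computing linguistic indices per essay and using them to predict human ratings. A minimal sketch of that modeling step, assuming a plain least-squares fit and wholly invented feature names and values (the actual Coh-Metrix indices and model are not reproduced here):

```python
# Hypothetical sketch: fit a linear model mapping per-essay linguistic
# features to human ratings, then score a new essay. Values are toy data.
import numpy as np

# Each row is one essay's features: [word_count, cohesion, complexity]
X = np.array([
    [250.0, 0.42, 1.8],
    [310.0, 0.55, 2.1],
    [190.0, 0.30, 1.5],
    [400.0, 0.61, 2.4],
])
y = np.array([3.0, 4.0, 2.0, 5.0])  # human essay ratings

# Prepend an intercept column and solve the least-squares problem.
A = np.hstack([np.ones((X.shape[0], 1)), X])
coef, *_ = np.linalg.lstsq(A, y, rcond=None)

# Predicted rating for a new essay's (intercept-prefixed) feature vector.
new_essay = np.array([1.0, 280.0, 0.50, 2.0])
pred = float(new_essay @ coef)
```

In practice such studies report how much variance in human ratings the indices explain (e.g., via cross-validated regression), rather than fitting on a handful of essays as this toy does.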
Crossley, Scott A.; Allen, Laura K.; Snow, Erica L.; McNamara, Danielle S. – Journal of Educational Data Mining, 2016
This study investigates a novel approach to automatically assessing essay quality that combines natural language processing approaches that assess text features with approaches that assess individual differences in writers such as demographic information, standardized test scores, and survey results. The results demonstrate that combining text…
Descriptors: Essays, Scoring, Writing Evaluation, Natural Language Processing
Crossley, Scott A.; Allen, Laura K.; McNamara, Danielle S. – Grantee Submission, 2014
The study applied the Multi-Dimensional analysis used by Biber (1988) to examine the functional parameters of essays. Co-occurrence patterns were identified within an essay corpus (n=1529) using linguistic indices provided by Coh-Metrix. These patterns were used to identify essay groups that shared features based upon situational parameters.…
Descriptors: Essays, Writing (Composition), Computational Linguistics, Cues