Wan, Qian; Crossley, Scott; Allen, Laura; McNamara, Danielle – Grantee Submission, 2020
In this paper, we extracted content-based and structure-based features of text to predict human annotations for claims and nonclaims in argumentative essays. We compared Logistic Regression, Bernoulli Naive Bayes, Gaussian Naive Bayes, Linear Support Vector Classification, Random Forest, and Neural Networks to train classification models. Random…
Descriptors: Persuasive Discourse, Essays, Writing Evaluation, Natural Language Processing
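Illustrative note: a minimal sketch of the kind of classifier comparison described in the abstract above, assuming scikit-learn. The feature matrix X (standing in for content- and structure-based text features) and the claim/non-claim labels y are random placeholders, not the authors' data or pipeline.

# Sketch: compare several classifiers for claim vs. non-claim prediction.
# X and y are hypothetical placeholders for extracted text features and labels.
import numpy as np
from sklearn.linear_model import LogisticRegression
from sklearn.naive_bayes import BernoulliNB, GaussianNB
from sklearn.svm import LinearSVC
from sklearn.ensemble import RandomForestClassifier
from sklearn.neural_network import MLPClassifier
from sklearn.model_selection import cross_val_score

rng = np.random.default_rng(0)
X = rng.normal(size=(500, 20))          # placeholder feature matrix
y = rng.integers(0, 2, size=500)        # placeholder labels: 0 = non-claim, 1 = claim

models = {
    "logistic_regression": LogisticRegression(max_iter=1000),
    "bernoulli_nb": BernoulliNB(),
    "gaussian_nb": GaussianNB(),
    "linear_svc": LinearSVC(),
    "random_forest": RandomForestClassifier(n_estimators=200, random_state=0),
    "neural_network": MLPClassifier(hidden_layer_sizes=(64,), max_iter=500),
}

for name, model in models.items():
    scores = cross_val_score(model, X, y, cv=5, scoring="f1")
    print(f"{name}: mean F1 = {scores.mean():.3f}")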
Zhang, Haoran; Litman, Diane – Grantee Submission, 2020
While automated essay scoring (AES) can reliably grade essays at scale, automated writing evaluation (AWE) additionally provides formative feedback to guide essay revision. However, a neural AES typically does not provide useful feature representations for supporting AWE. This paper presents a method for linking AWE and neural AES, by extracting…
Descriptors: Computer Assisted Testing, Scoring, Essay Tests, Writing Evaluation
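Illustrative note: a hedged sketch, not the authors' model, of how an internal representation from a neural essay scorer could double as features for a separate feedback (AWE) component. The architecture, dimensions, and names below are invented for illustration, using PyTorch.

# Sketch: a tiny neural essay scorer whose penultimate representation can be
# reused by a downstream feedback component. Purely illustrative.
import torch
import torch.nn as nn

class TinyEssayScorer(nn.Module):
    def __init__(self, vocab_size=5000, embed_dim=64, hidden_dim=32):
        super().__init__()
        self.embed = nn.Embedding(vocab_size, embed_dim)
        self.encoder = nn.LSTM(embed_dim, hidden_dim, batch_first=True)
        self.score_head = nn.Linear(hidden_dim, 1)   # holistic score output

    def forward(self, token_ids):
        embedded = self.embed(token_ids)
        _, (h_n, _) = self.encoder(embedded)
        features = h_n[-1]                 # internal essay representation
        score = self.score_head(features)
        return score.squeeze(-1), features

model = TinyEssayScorer()
essay = torch.randint(0, 5000, (1, 200))   # one essay as 200 token ids
score, features = model(essay)
# `features` could then be compared across drafts or clustered to drive
# formative feedback, which is the AWE side of the linkage the paper describes.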
Carla Wood; Miguel Garcia-Salas; Christopher Schatschneider – Grantee Submission, 2023
Purpose: The aim of this study was to advance the analysis of written language transcripts by validating an automated scoring procedure using an automated open-access tool for calculating morphological complexity (MC) from written transcripts. Method: The MC of words in 146 written responses of students in fifth grade was assessed using two…
Descriptors: Automation, Computer Assisted Testing, Scoring, Computation
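Illustrative note: a rough sketch of one way a morphological complexity index might be computed automatically from a written transcript. The suffix inventory and per-word morpheme heuristic are simplified assumptions for illustration, not the open-access tool the study validates.

# Sketch: crude morphological complexity (mean suffixes per word) for a
# written transcript. The suffix list is a hypothetical simplification.
import re

SUFFIXES = ("ments", "ment", "ness", "tion", "sion", "able", "ible",
            "ful", "less", "ly", "ing", "ed", "er", "est", "s")

def suffix_count(word):
    count = 0
    for suffix in SUFFIXES:
        if len(word) > len(suffix) + 2 and word.endswith(suffix):
            count += 1
            word = word[: -len(suffix)]
    return count

def morphological_complexity(text):
    words = re.findall(r"[a-z]+", text.lower())
    if not words:
        return 0.0
    return sum(suffix_count(w) for w in words) / len(words)

print(morphological_complexity("The students were happily rewriting their arguments."))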
Michael Matta; Sterett H. Mercer; Milena A. Keller-Margulis – Grantee Submission, 2022
Written expression curriculum-based measurement (WE-CBM) is a formative assessment approach for screening and progress monitoring. To extend evaluation of WE-CBM, we compared hand-calculated and automated scoring approaches in relation to the number of screening samples needed per student for valid scores, the long-term predictive validity and…
Descriptors: Writing Evaluation, Writing Tests, Predictive Validity, Formative Evaluation
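Illustrative note: a small sketch of the comparison logic implied above, correlating hand-calculated and automated scores while varying how many writing samples per student contribute to the estimate. All data are simulated placeholders, not the study's dataset or analysis.

# Sketch: agreement between hand-calculated and automated WE-CBM scores,
# averaged over the first k writing samples per student.
import numpy as np

rng = np.random.default_rng(1)
n_students, n_samples = 60, 5
hand = rng.normal(50, 10, size=(n_students, n_samples))
auto = hand + rng.normal(0, 5, size=(n_students, n_samples))  # noisy automated scores

for k in range(1, n_samples + 1):
    hand_mean = hand[:, :k].mean(axis=1)
    auto_mean = auto[:, :k].mean(axis=1)
    r = np.corrcoef(hand_mean, auto_mean)[0, 1]
    print(f"{k} sample(s) per student: r = {r:.2f}")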
Sterett H. Mercer; Joanna E. Cannon – Grantee Submission, 2022
We evaluated the validity of an automated approach to learning progress assessment (aLPA) for English written expression. Participants (n = 105) were students in Grades 2-12 who had parent-identified learning difficulties and received academic tutoring through a community-based organization. Participants completed narrative writing samples in the…
Descriptors: Elementary School Students, Secondary School Students, Learning Problems, Learning Disabilities
Lee, Hee-Sun; McNamara, Danielle; Bracey, Zoë Buck; Wilson, Christopher; Osborne, Jonathan; Haudek, Kevin C.; Liu, Ou Lydia; Pallant, Amy; Gerard, Libby; Linn, Marcia C.; Sherin, Bruce – Grantee Submission, 2019
Rapid advancements in computing have enabled automatic analyses of written texts created in educational settings. The purpose of this symposium is to survey several applications of computerized text analyses used in the research and development of productive learning environments. Four featured research projects have developed or been working on:…
Descriptors: Computational Linguistics, Written Language, Computer Assisted Testing, Scoring
Allen, Laura K.; Likens, Aaron D.; McNamara, Danielle S. – Grantee Submission, 2018
The assessment of writing proficiency generally includes analyses of the specific linguistic and rhetorical features contained in the singular essays produced by students. However, researchers have recently proposed that an individual's ability to flexibly adapt the linguistic properties of their writing might more closely capture writing skill.…
Descriptors: Writing Evaluation, Writing Tests, Computer Assisted Testing, Writing Skills
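Illustrative note: a minimal sketch of the idea that within-writer variability of linguistic properties across essays, rather than the properties of any single essay, might index writing skill. The indices and example essays are hypothetical stand-ins, not the authors' measures.

# Sketch: within-writer variability of simple linguistic indices across essays.
import statistics

def linguistic_indices(essay):
    words = essay.split()
    sentences = [s for s in essay.split(".") if s.strip()]
    return {
        "mean_word_length": sum(len(w) for w in words) / len(words),
        "mean_sentence_length": len(words) / len(sentences),
    }

def flexibility(essays):
    """Standard deviation of each index across one writer's essays."""
    per_essay = [linguistic_indices(e) for e in essays]
    return {key: statistics.stdev(vals[key] for vals in per_essay)
            for key in per_essay[0]}

essays = ["Short simple text. It is plain.",
          "A considerably more elaborate sentence, developed at length."]
print(flexibility(essays))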
Burstein, Jill; McCaffrey, Dan; Beigman Klebanov, Beata; Ling, Guangming – Grantee Submission, 2017
No significant body of research examines writing achievement and the specific skills and knowledge in the writing domain for postsecondary (college) students in the U.S., even though many at-risk students lack the prerequisite writing skills required to persist in their education. This paper addresses this gap through a novel…
Descriptors: Computer Software, Writing Evaluation, Writing Achievement, College Students
Madnani, Nitin; Burstein, Jill; Sabatini, John; O'Reilly, Tenaha – Grantee Submission, 2013
We introduce a cognitive framework for measuring reading comprehension that includes the use of novel summary-writing tasks. We derive NLP features from the holistic rubric used to score the summaries written by students for such tasks and use them to design a preliminary, automated scoring system. Our results show that the automated approach…
Descriptors: Computer Assisted Testing, Scoring, Writing Evaluation, Reading Comprehension
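Illustrative note: a hedged sketch of rubric-inspired NLP features for scoring a summary against its source passage. The specific features and the example texts are invented for illustration and are not the paper's feature set.

# Sketch: simple rubric-style features for an automated summary scorer.
import re

def tokens(text):
    return set(re.findall(r"[a-z']+", text.lower()))

def summary_features(source, summary):
    src, summ = tokens(source), tokens(summary)
    overlap = len(src & summ) / len(summ) if summ else 0.0
    copied = len(summ - src) == 0                # summary adds no words not in source
    return {
        "content_overlap": overlap,              # share of summary words found in source
        "length_ratio": len(summary.split()) / max(len(source.split()), 1),
        "verbatim_flag": copied,                 # crude copying check
    }

source = "Plants convert sunlight into chemical energy through photosynthesis."
summary = "Photosynthesis lets plants turn sunlight into energy."
print(summary_features(source, summary))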
Crossley, Scott A.; McNamara, Danielle S. – Grantee Submission, 2014
This study explores correlations between human ratings of essay quality and component scores based on similar natural language processing indices and weighted through a principal component analysis. The results demonstrate that such component scores show small to large effects with human ratings and thus may be suitable for providing both summative…
Descriptors: Essays, Computer Assisted Testing, Writing Evaluation, Scores
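Illustrative note: a small sketch of the component-score idea described above, reducing many NLP indices to principal components and correlating each component score with human ratings. The indices and ratings are random placeholders, not the study's data.

# Sketch: PCA-weighted component scores correlated with human essay ratings.
import numpy as np
from sklearn.decomposition import PCA
from sklearn.preprocessing import StandardScaler
from scipy.stats import pearsonr

rng = np.random.default_rng(2)
indices = rng.normal(size=(120, 15))        # 120 essays x 15 NLP indices
ratings = rng.normal(size=120)              # human quality ratings

components = PCA(n_components=3).fit_transform(StandardScaler().fit_transform(indices))

for i in range(components.shape[1]):
    r, p = pearsonr(components[:, i], ratings)
    print(f"component {i + 1}: r = {r:.2f}, p = {p:.3f}")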
Roscoe, Rod D.; Varner, Laura K.; Crossley, Scott A.; McNamara, Danielle S. – Grantee Submission, 2013
Various computer tools have been developed to support educators' assessment of student writing, including automated essay scoring and automated writing evaluation systems. Research demonstrates that these systems exhibit relatively high scoring accuracy but uncertain instructional efficacy. Students' writing proficiency does not necessarily…
Descriptors: Writing Instruction, Intelligent Tutoring Systems, Computer Assisted Testing, Writing Evaluation