Publication Date
In 2025: 0
Since 2024: 0
Since 2021 (last 5 years): 3
Since 2016 (last 10 years): 9
Since 2006 (last 20 years): 11
Descriptor
Writing Evaluation: 12
Automation: 6
College Students: 5
Essays: 5
Computer Assisted Testing: 4
Natural Language Processing: 4
Writing Achievement: 4
Writing Instruction: 4
Scoring: 3
Writing Skills: 3
Academic Persistence: 2
Source
Grantee Submission: 8
CALICO Journal: 1
Educational Testing Service: 1
International Journal of Artificial Intelligence in Education: 1
Journal of Technology, Learning, and Assessment: 1
Publication Type
Reports - Research: 10
Journal Articles: 5
Speeches/Meeting Papers: 5
Numerical/Quantitative Data: 1
Reports - Descriptive: 1
Reports - Evaluative: 1
Tests/Questionnaires: 1
Education Level
Higher Education: 9
Postsecondary Education: 9
Elementary Education: 1
Grade 6: 1
Grade 7: 1
Grade 9: 1
High Schools: 1
Intermediate Grades: 1
Junior High Schools: 1
Middle Schools: 1
Secondary Education: 1
Assessments and Surveys
ACT Assessment: 1
SAT (College Admission Test): 1
Oddis, Kyle; Burstein, Jill; McCaffrey, Daniel F.; Holtzman, Steven L. – Grantee Submission, 2022
Background: Researchers interested in quantitative measures of student "success" in writing cannot control completely for contextual factors that are local and site-based (i.e., in the context of a specific instructor's writing classroom at a specific institution). (In)ability to control for curriculum in studies of student writing…
Descriptors: Writing Instruction, Writing Achievement, Curriculum Evaluation, College Instruction
McCaffrey, Daniel; Holtzman, Steven; Burstein, Jill; Beigman Klebanov, Beata – Grantee Submission, 2021
Low retention rates in college are a policy concern for US postsecondary institutions, and writing is a critical competency for college (Graham, 2019). This paper describes an exploratory writing analytics study at six 4-year universities aimed at gaining insights into the relationship between college retention and writing. Findings suggest that…
Descriptors: College Students, School Holding Power, Writing Ability, Writing Evaluation
McCaffrey, Daniel F.; Zhang, Mo; Burstein, Jill – Grantee Submission, 2022
Background: This exploratory writing analytics study uses argumentative writing samples from two performance contexts--standardized writing assessments and university English course writing assignments--to compare: (1) linguistic features in argumentative writing; and (2) relationships between linguistic characteristics and academic performance…
Descriptors: Persuasive Discourse, Academic Language, Writing (Composition), Academic Achievement
Burstein, Jill; McCaffrey, Daniel; Elliot, Norbert; Beigman Klebanov, Beata – Grantee Submission, 2020
Writing achievement is a complex skill set as characterized by the sociocognitive writing framework, including writing domain knowledge (e.g., sentence structure), general cognitive skills (e.g., critical thinking) and intra- (e.g., interest) and interpersonal (e.g., collaboration) subfactors. During students' postsecondary careers, they need to…
Descriptors: Writing Achievement, Postsecondary Education, College Students, Writing Instruction
Burstein, Jill; McCaffrey, Daniel; Beigman Klebanov, Beata; Ling, Guangming; Holtzman, Steven – Grantee Submission, 2019
Writing is a challenge and a potential obstacle for students in U.S. 4-year postsecondary institutions who lack prerequisite writing skills. This study aims to address the research question: Is there a relationship between specific features (analytics) in coursework writing and broader success predictors? Knowledge about this relationship could…
Descriptors: Undergraduate Students, Writing (Composition), Writing Evaluation, Learning Analytics
Beigman Klebanov, Beata; Priniski, Stacy; Burstein, Jill; Gyawali, Binod; Harackiewicz, Judith; Thoman, Dustin – Grantee Submission, 2018
Collection and analysis of students' writing samples on a large scale is a part of the research agenda of the emerging writing analytics community that promises to deliver an unprecedented insight into characteristics of student writing. Yet with a large scale often comes variability of contexts in which the samples were produced--different…
Descriptors: Learning Analytics, Context Effect, Automation, Generalization
Beigman Klebanov, Beata; Burstein, Jill; Harackiewicz, Judith M.; Priniski, Stacy J.; Mulholland, Matthew – International Journal of Artificial Intelligence in Education, 2017
The integration of subject matter learning with reading and writing skills takes place in multiple ways. Students learn to read, interpret, and write texts in the discipline-relevant genres. However, writing can be used not only for the purposes of practice in professional communication, but also as an opportunity to reflect on the learned…
Descriptors: STEM Education, Content Area Writing, Writing Instruction, Intervention
Burstein, Jill; Elliot, Norbert; Molloy, Hillary – CALICO Journal, 2016
Genre serves as a useful lens to investigate the range of evidence derived from automated writing evaluation (AWE). To support construct-relevant systems used for writing instruction and assessment, two investigations were conducted that focused on postsecondary writing requirements and faculty perceptions of student writing proficiency. Survey…
Descriptors: College Students, Writing Evaluation, Computer Assisted Testing, Writing Tests
Burstein, Jill; McCaffrey, Dan; Beigman Klebanov, Beata; Ling, Guangming – Grantee Submission, 2017
No significant body of research examines writing achievement and the specific skills and knowledge in the writing domain for postsecondary (college) students in the U.S., even though many at-risk students lack the prerequisite writing skills required to persist in their education. This paper addresses this gap through a novel…
Descriptors: Computer Software, Writing Evaluation, Writing Achievement, College Students
Madnani, Nitin; Burstein, Jill; Sabatini, John; O'Reilly, Tenaha – Grantee Submission, 2013
We introduce a cognitive framework for measuring reading comprehension that includes the use of novel summary-writing tasks. We derive NLP features from the holistic rubric used to score the summaries written by students for such tasks and use them to design a preliminary, automated scoring system. Our results show that the automated approach…
Descriptors: Computer Assisted Testing, Scoring, Writing Evaluation, Reading Comprehension
Chodorow, Martin; Burstein, Jill – Educational Testing Service, 2004
This study examines the relation between essay length and holistic scores assigned to Test of English as a Foreign Language™ (TOEFL®) essays by e-rater®, the automated essay scoring system developed by ETS. Results show that an early version of the system, e-rater99, accounted for little variance in human reader scores beyond that which…
Descriptors: Essays, Test Scoring Machines, English (Second Language), Student Evaluation
Attali, Yigal; Burstein, Jill – Journal of Technology, Learning, and Assessment, 2006
E-rater® has been used by the Educational Testing Service for automated essay scoring since 1999. This paper describes a new version of e-rater (V.2) that is different from other automated essay scoring systems in several important respects. The main innovations of e-rater V.2 are a small, intuitive, and meaningful set of features used for…
Descriptors: Educational Testing, Test Scoring Machines, Scoring, Writing Evaluation