Showing all 7 results
Peer reviewed
McCaffrey, Daniel; Holtzman, Steven; Burstein, Jill; Beigman Klebanov, Beata – Grantee Submission, 2021
Low retention rates in college are a policy concern for US postsecondary institutions, and writing is a critical competency for college (Graham, 2019). This paper describes an exploratory writing analytics study at six 4-year universities aimed at gaining insights about the relationship between college retention and writing. Findings suggest that…
Descriptors: College Students, School Holding Power, Writing Ability, Writing Evaluation
McCaffrey, Daniel F.; Zhang, Mo; Burstein, Jill – Grantee Submission, 2022
Background: This exploratory writing analytics study uses argumentative writing samples from two performance contexts--standardized writing assessments and university English course writing assignments--to compare: (1) linguistic features in argumentative writing; and (2) relationships between linguistic characteristics and academic performance…
Descriptors: Persuasive Discourse, Academic Language, Writing (Composition), Academic Achievement
Peer reviewed
Beigman Klebanov, Beata; Priniski, Stacy; Burstein, Jill; Gyawali, Binod; Harackiewicz, Judith; Thoman, Dustin – Grantee Submission, 2018
Collection and analysis of students' writing samples on a large scale is a part of the research agenda of the emerging writing analytics community that promises to deliver an unprecedented insight into characteristics of student writing. Yet with a large scale often comes variability of contexts in which the samples were produced--different…
Descriptors: Learning Analytics, Context Effect, Automation, Generalization
Peer reviewed
Beigman Klebanov, Beata; Burstein, Jill; Harackiewicz, Judith M.; Priniski, Stacy J.; Mulholland, Matthew – International Journal of Artificial Intelligence in Education, 2017
The integration of subject matter learning with reading and writing skills takes place in multiple ways. Students learn to read, interpret, and write texts in the discipline-relevant genres. However, writing can be used not only for the purposes of practice in professional communication, but also as an opportunity to reflect on the learned…
Descriptors: STEM Education, Content Area Writing, Writing Instruction, Intervention
Peer reviewed
Madnani, Nitin; Burstein, Jill; Sabatini, John; O'Reilly, Tenaha – Grantee Submission, 2013
We introduce a cognitive framework for measuring reading comprehension that includes the use of novel summary-writing tasks. We derive NLP features from the holistic rubric used to score the summaries written by students for such tasks and use them to design a preliminary, automated scoring system. Our results show that the automated approach…
Descriptors: Computer Assisted Testing, Scoring, Writing Evaluation, Reading Comprehension
Peer reviewed
Attali, Yigal; Burstein, Jill – Journal of Technology, Learning, and Assessment, 2006
E-rater® has been used by the Educational Testing Service for automated essay scoring since 1999. This paper describes a new version of e-rater (V.2) that is different from other automated essay scoring systems in several important respects. The main innovations of e-rater V.2 are a small, intuitive, and meaningful set of features used for…
Descriptors: Educational Testing, Test Scoring Machines, Scoring, Writing Evaluation
Peer reviewed
Attali, Yigal; Burstein, Jill – ETS Research Report Series, 2005
The e-rater® system has been used by ETS for automated essay scoring since 1999. This paper describes a new version of e-rater (v.2.0) that differs from the previous one (v.1.3) with regard to the feature set and model building approach. The paper describes the new version, compares the new and previous versions in terms of performance, and…
Descriptors: Essay Tests, Automation, Scoring, Comparative Analysis