Publication Date
In 2025: 0
Since 2024: 0
Since 2021 (last 5 years): 0
Since 2016 (last 10 years): 3
Since 2006 (last 20 years): 4
Descriptor
Essays: 8
Scoring: 5
Writing Evaluation: 5
Test Scoring Machines: 4
Automation: 3
Student Evaluation: 3
Academic Persistence: 2
Biology: 2
College Entrance Examinations: 2
College Students: 2
Computer Assisted Testing: 2
…
Source
Grantee Submission: 2
Educational Testing Service: 1
International Journal of Artificial Intelligence in Education: 1
Journal of Technology, Learning, and Assessment: 1
Author
Burstein, Jill: 8
Beigman Klebanov, Beata: 3
Chodorow, Martin: 2
Lu, Chi: 2
Wolff, Susanne: 2
Attali, Yigal: 1
Gyawali, Binod: 1
Harackiewicz, Judith: 1
Harackiewicz, Judith M.: 1
Kukich, Karen: 1
Ling, Guangming: 1
…
Publication Type
Journal Articles: 3
Reports - Evaluative: 3
Reports - Research: 3
Reports - Descriptive: 2
Numerical/Quantitative Data: 1
Speeches/Meeting Papers: 1
Tests/Questionnaires: 1
Education Level
Higher Education: 3
Postsecondary Education: 3
Assessments and Surveys
ACT Assessment: 1
SAT (College Admission Test): 1
Beigman Klebanov, Beata; Priniski, Stacy; Burstein, Jill; Gyawali, Binod; Harackiewicz, Judith; Thoman, Dustin – Grantee Submission, 2018
Collection and analysis of students' writing samples on a large scale is part of the research agenda of the emerging writing analytics community, which promises to deliver unprecedented insight into the characteristics of student writing. Yet with large scale often comes variability in the contexts in which the samples were produced--different…
Descriptors: Learning Analytics, Context Effect, Automation, Generalization
Beigman Klebanov, Beata; Burstein, Jill; Harackiewicz, Judith M.; Priniski, Stacy J.; Mulholland, Matthew – International Journal of Artificial Intelligence in Education, 2017
The integration of subject matter learning with reading and writing skills takes place in multiple ways. Students learn to read, interpret, and write texts in discipline-relevant genres. However, writing can be used not only to practice professional communication, but also as an opportunity to reflect on the learned…
Descriptors: STEM Education, Content Area Writing, Writing Instruction, Intervention
Burstein, Jill; McCaffrey, Dan; Beigman Klebanov, Beata; Ling, Guangming – Grantee Submission, 2017
No significant body of research examines writing achievement and the specific skills and knowledge in the writing domain for postsecondary (college) students in the U.S., even though many at-risk students lack the prerequisite writing skills required to persist in their education. This paper addresses this gap through a novel…
Descriptors: Computer Software, Writing Evaluation, Writing Achievement, College Students
Burstein, Jill; Marcu, Daniel – 2000
"E-rater" is an operational automated essay scoring application that combines several natural language processing (NLP) tools for the purpose of identifying linguistic features in essay responses to assess the quality of the text. The application currently identifies a variety of syntactic, discourse, and topical analysis features. Two…
Descriptors: Essays, Scoring, Student Evaluation, Test Scoring Machines
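For intuition about what combining NLP-derived features can look like, here is a toy, self-contained sketch; it is not e-rater's implementation. The feature surrogates (mean sentence length, cue-word density, prompt-term overlap), the cue list, and the weights are all invented for illustration.

```python
# Toy sketch, not ETS code: crude stand-ins for syntactic, discourse,
# and topical features are extracted and merged into one essay score.
import re

# Single-word discourse cues; a real system would do discourse parsing.
DISCOURSE_CUES = {"first", "second", "however", "therefore",
                  "finally", "moreover", "consequently"}

def extract_features(essay, prompt_terms):
    words = re.findall(r"[a-z']+", essay.lower())
    sentences = [s for s in re.split(r"[.!?]+", essay) if s.strip()]
    if not words or not sentences:
        return {"syntactic": 0.0, "discourse": 0.0, "topical": 0.0}
    return {
        # Syntactic surrogate: mean sentence length in words.
        "syntactic": len(words) / len(sentences),
        # Discourse surrogate: density of cue words.
        "discourse": sum(w in DISCOURSE_CUES for w in words) / len(words),
        # Topical surrogate: coverage of the prompt's vocabulary.
        "topical": len(set(words) & prompt_terms) / max(len(prompt_terms), 1),
    }

def score(features, weights):
    # A linear combination stands in for whatever model maps features
    # to a holistic score; these weights are invented.
    return sum(weights[k] * v for k, v in features.items())

prompt_terms = {"technology", "education", "students", "learning"}
essay = ("Technology changes education. However, students still need "
         "guidance. Therefore, learning benefits most from a blend.")
feats = extract_features(essay, prompt_terms)
print(feats)
print("score:", round(score(feats, {"syntactic": 0.2, "discourse": 10.0,
                                    "topical": 2.0}), 2))
```

A production system would replace each surrogate with a real analysis (a syntactic parser, a discourse analyzer, content vector models) and fit the weights against human scores.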
Attali, Yigal; Burstein, Jill – Journal of Technology, Learning, and Assessment, 2006
E-rater® has been used by the Educational Testing Service for automated essay scoring since 1999. This paper describes a new version of e-rater (V.2) that differs from other automated essay scoring systems in several important respects. The main innovations of e-rater V.2 are a small, intuitive, and meaningful set of features used for…
Descriptors: Educational Testing, Test Scoring Machines, Scoring, Writing Evaluation
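The emphasis on a small, intuitive, meaningful feature set suggests a transparent model, for instance a standardized linear regression fit to human holistic scores. The sketch below illustrates only that general idea; the feature names, values, and scores are made up, and it does not reproduce the published V.2 model.

```python
# Hypothetical illustration: regress human holistic scores on a small,
# interpretable feature set. All numbers are invented.
import numpy as np

feature_names = ["grammar_errors", "word_variety", "organization"]
X = np.array([[5, 0.42, 3],          # rows: essays; columns: features
              [2, 0.55, 5],
              [8, 0.30, 2],
              [1, 0.60, 6],
              [4, 0.48, 4]], dtype=float)
human_scores = np.array([3.0, 4.5, 2.0, 5.5, 4.0])

# Standardize so fitted weights are comparable across features.
Xz = (X - X.mean(axis=0)) / X.std(axis=0)
A = np.column_stack([Xz, np.ones(len(Xz))])      # add intercept column
weights, *_ = np.linalg.lstsq(A, human_scores, rcond=None)

for name, w in zip(feature_names, weights):
    print(f"{name:>14}: {w:+.2f}")
print(f"{'intercept':>14}: {weights[-1]:+.2f}")
print("predicted:", np.round(A @ weights, 2))
```

Standardizing the features before fitting makes the weights directly comparable across features, which is part of what makes such a model meaningful to inspect.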
Burstein, Jill; Wolff, Susanne; Lu, Chi – 2001
The research described in this paper demonstrates the use of lexical semantic techniques for automated scoring of short-answer and essay responses from performance-based test items. Researchers applied these techniques to identify the meaningful content of free-text responses in small data sets. One data set involved 172 training…
Descriptors: Essays, Performance Based Assessment, Scoring, Test Items
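One hypothetical reading of scoring by lexical content: represent a response and previously scored exemplar answers as word-count vectors and assign the score of the most similar exemplar. Everything below (the exemplars, the response, the nearest-exemplar rule) is invented for illustration.

```python
# Minimal content-vector sketch: cosine similarity between bag-of-words
# vectors decides which human-scored exemplar a response resembles most.
import math
import re
from collections import Counter

def bag_of_words(text):
    return Counter(re.findall(r"[a-z]+", text.lower()))

def cosine(a, b):
    dot = sum(a[w] * b[w] for w in set(a) & set(b))
    norm = (math.sqrt(sum(v * v for v in a.values()))
            * math.sqrt(sum(v * v for v in b.values())))
    return dot / norm if norm else 0.0

# Exemplar responses previously assigned each score by human raters.
exemplars = {
    2: "enzymes lower activation energy so reactions proceed faster",
    1: "enzymes make reactions happen",
    0: "reactions are chemical",
}
response = "the enzyme lowers the activation energy of the reaction"

resp_vec = bag_of_words(response)
best = max(exemplars,
           key=lambda s: cosine(resp_vec, bag_of_words(exemplars[s])))
print("assigned score:", best)
```

Note that without normalization, "enzyme" and "enzymes" fail to match; closing exactly that kind of lexical gap (stemming, synonym handling) is what lexical semantic techniques aim at.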
Burstein, Jill; Kukich, Karen; Wolff, Susanne; Lu, Chi; Chodorow, Martin – 2001
Electronic Essay Rater (e-rater) is a prototype automated essay scoring system built at Educational Testing Service that uses discourse marking in addition to syntactic information and topical content vector analyses to assign essay scores automatically. This paper gives a general description of e-rater as a whole, but its emphasis is on the…
Descriptors: College Students, Essays, Higher Education, Scoring
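Discourse marking can be approximated at toy scale by tagging each sentence with the cue phrase it opens with, yielding a coarse argument structure that a scorer could count or weight. The sketch below is hypothetical and far simpler than e-rater's discourse analysis; the cue lists are illustrative only.

```python
# Hypothetical discourse-marking sketch: label sentences by opening cue.
import re

CUES = [
    ("contrast",   ("however", "on the other hand", "but")),
    ("conclusion", ("in conclusion", "therefore", "thus")),
    ("sequence",   ("first", "second", "finally", "moreover")),
]

def label_sentences(essay):
    labels = []
    for sentence in filter(None, map(str.strip,
                                     re.split(r"(?<=[.!?])\s+", essay))):
        tag = "body"
        for name, cues in CUES:
            if sentence.lower().startswith(cues):
                tag = name
                break
        labels.append((tag, sentence))
    return labels

essay = ("First, homework builds discipline. However, too much of it "
         "crowds out rest. In conclusion, moderation matters.")
for tag, sentence in label_sentences(essay):
    print(f"[{tag:>10}] {sentence}")
```

Counts of such labels (does the essay signal a conclusion? how many contrasts?) could then feed a scoring model as discourse features.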
Chodorow, Martin; Burstein, Jill – Educational Testing Service, 2004
This study examines the relation between essay length and holistic scores assigned to Test of English as a Foreign Language™ (TOEFL®) essays by e-rater®, the automated essay scoring system developed by ETS. Results show that an early version of the system, e-rater99, accounted for little variance in human reader scores beyond that which…
Descriptors: Essays, Test Scoring Machines, English (Second Language), Student Evaluation
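The underlying question is one of incremental variance explained: how much of the human score does essay length alone account for, and what does the automated system add beyond it? The sketch below works that comparison with made-up numbers, contrasting R² for a length-only regression with length plus one invented feature; it is in the same spirit as, not a reproduction of, the study's analysis.

```python
# Worked illustration with invented data: variance in human scores
# explained by length alone vs. length plus another feature.
import numpy as np

def r_squared(X, y):
    A = np.column_stack([X, np.ones(len(X))])    # design with intercept
    coef, *_ = np.linalg.lstsq(A, y, rcond=None)
    resid = y - A @ coef
    return 1.0 - resid.var() / y.var()

lengths = np.array([150, 300, 420, 510, 600, 720], dtype=float)
vocab   = np.array([0.35, 0.42, 0.40, 0.55, 0.50, 0.62])  # invented
human   = np.array([2.0, 3.0, 3.5, 4.5, 4.0, 5.0])

r2_len  = r_squared(lengths[:, None], human)
r2_both = r_squared(np.column_stack([lengths, vocab]), human)
print(f"R^2, length only:    {r2_len:.3f}")
print(f"R^2, length + vocab: {r2_both:.3f}")
print(f"added beyond length: {r2_both - r2_len:.3f}")
```

If the second R² barely exceeds the first, the extra feature contributes little beyond length, which is the style of finding the abstract reports for e-rater99.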