Publication Date
In 2025: 0
Since 2024: 0
Since 2021 (last 5 years): 0
Since 2016 (last 10 years): 2
Since 2006 (last 20 years): 4
Descriptor
Computer Assisted Testing: 4
Writing Evaluation: 4
Automation: 2
College Entrance Examinations: 2
College Students: 2
Computer Software: 2
Essays: 2
Scoring: 2
Writing Skills: 2
Writing Tests: 2
Academic Persistence: 1
Author
Burstein, Jill: 4
Attali, Yigal: 1
Beigman Klebanov, Beata: 1
Elliot, Norbert: 1
Ling, Guangming: 1
Madnani, Nitin: 1
McCaffrey, Dan: 1
Molloy, Hillary: 1
O'Reilly, Tenaha: 1
Sabatini, John: 1
Publication Type
Reports - Research: 3
Journal Articles: 2
Speeches/Meeting Papers: 2
Reports - Descriptive: 1
Education Level
Higher Education: 3
Postsecondary Education: 3
Elementary Education: 1
Grade 6: 1
Grade 7: 1
Grade 9: 1
High Schools: 1
Intermediate Grades: 1
Junior High Schools: 1
Middle Schools: 1
Secondary Education: 1
Location
District of Columbia: 1
Assessments and Surveys
ACT Assessment: 1
SAT (College Admission Test): 1
Burstein, Jill; Elliot, Norbert; Molloy, Hillary – CALICO Journal, 2016
Genre serves as a useful lens to investigate the range of evidence derived from automated writing evaluation (AWE). To support construct-relevant systems used for writing instruction and assessment, two investigations were conducted that focused on postsecondary writing requirements and faculty perceptions of student writing proficiency. Survey…
Descriptors: College Students, Writing Evaluation, Computer Assisted Testing, Writing Tests
Burstein, Jill; McCaffrey, Dan; Beigman Klebanov, Beata; Ling, Guangming – Grantee Submission, 2017
No significant body of research examines writing achievement and the specific skills and knowledge in the writing domain for postsecondary (college) students in the U.S., even though many at-risk students lack the prerequisite writing skills required to persist in their education. This paper addresses this gap through a novel…
Descriptors: Computer Software, Writing Evaluation, Writing Achievement, College Students
Madnani, Nitin; Burstein, Jill; Sabatini, John; O'Reilly, Tenaha – Grantee Submission, 2013
We introduce a cognitive framework for measuring reading comprehension that includes the use of novel summary-writing tasks. We derive NLP features from the holistic rubric used to score the summaries written by students for such tasks and use them to design a preliminary, automated scoring system. Our results show that the automated approach…
Descriptors: Computer Assisted Testing, Scoring, Writing Evaluation, Reading Comprehension
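The Madnani et al. entry describes deriving NLP features from a holistic summary-scoring rubric and combining them into a preliminary automated scorer. The paper's actual features and model are not reproduced here; the Python sketch below is only a rough illustration of the general idea, with hypothetical features (source-vocabulary overlap, length ratio) and hand-picked weights.

```python
# Rough illustration only: the features, weights, and scoring function below are
# hypothetical stand-ins, not the rubric-derived NLP features from the paper.
import re

def tokenize(text: str) -> list[str]:
    """Lowercase word tokens."""
    return re.findall(r"[a-z']+", text.lower())

def summary_features(source: str, summary: str) -> dict[str, float]:
    """Two crude, rubric-inspired features for a student-written summary."""
    source_vocab = set(tokenize(source))
    summary_tokens = tokenize(summary)
    if not summary_tokens:
        return {"content_overlap": 0.0, "length_ratio": 0.0}
    overlap = sum(w in source_vocab for w in summary_tokens) / len(summary_tokens)
    length_ratio = len(summary_tokens) / max(len(tokenize(source)), 1)
    return {"content_overlap": overlap, "length_ratio": length_ratio}

def crude_score(features: dict[str, float]) -> float:
    """Hand-weighted combination standing in for a trained scoring model."""
    weights = {"content_overlap": 3.0, "length_ratio": 1.0}  # hypothetical weights
    return sum(weights[name] * value for name, value in features.items())

if __name__ == "__main__":
    source = "Reading comprehension can be measured with novel summary-writing tasks."
    summary = "Summary writing tasks can be used to measure reading comprehension."
    print(round(crude_score(summary_features(source, summary)), 2))
```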
Attali, Yigal; Burstein, Jill – Journal of Technology, Learning, and Assessment, 2006
E-rater® has been used by the Educational Testing Service for automated essay scoring since 1999. This paper describes a new version of e-rater (V.2) that is different from other automated essay scoring systems in several important respects. The main innovations of e-rater V.2 are a small, intuitive, and meaningful set of features used for…
Descriptors: Educational Testing, Test Scoring Machines, Scoring, Writing Evaluation
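The Attali and Burstein entry characterizes e-rater V.2 as scoring essays from a small, intuitive, and meaningful feature set. As a rough illustration of feature-based linear scoring in general (not e-rater's actual features, weights, or model, which are described in the article), a minimal Python sketch might look like this:

```python
# Rough illustration only: the three features and fixed weights here are hypothetical.
import re

def essay_features(essay: str) -> list[float]:
    """A small, interpretable feature vector for an essay."""
    words = re.findall(r"[A-Za-z']+", essay)
    sentences = [s for s in re.split(r"[.!?]+", essay) if s.strip()]
    n_words = max(len(words), 1)
    return [
        float(len(words)),                          # length / fluency proxy
        len({w.lower() for w in words}) / n_words,  # type-token ratio (vocabulary)
        n_words / max(len(sentences), 1),           # mean sentence length (syntax proxy)
    ]

def linear_score(features: list[float], weights: list[float], bias: float) -> float:
    """Combine features with fixed weights, standing in for a trained linear model."""
    return bias + sum(w * f for w, f in zip(weights, features))

if __name__ == "__main__":
    essay = ("Automated essay scoring can rest on a handful of interpretable "
             "features. Each feature is weighted and summed into a score.")
    score = linear_score(essay_features(essay), weights=[0.01, 2.0, 0.05], bias=1.0)
    print(round(score, 2))
```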