Publication Date
In 2025: 0
Since 2024: 0
Since 2021 (last 5 years): 0
Since 2016 (last 10 years): 0
Since 2006 (last 20 years): 3
Descriptor
Computer Assisted Testing: 3
Scoring: 3
Standardized Tests: 3
Essay Tests: 2
Essays: 2
Writing Tests: 2
Abstract Reasoning: 1
Automation: 1
College English: 1
College Entrance Examinations: 1
College Faculty: 1
Author
Attali, Yigal: 1
Bridgeman, Brent: 1
Brown, Kevin: 1
Higgins, Derrick: 1
Quinlan, Thomas: 1
Trapani, Catherine: 1
Wolff, Susanne: 1
Publication Type
Reports - Evaluative: 3
Journal Articles: 2
Education Level
Higher Education: 3
Postsecondary Education: 2
Elementary Secondary Education: 1
Assessments and Surveys
Graduate Record Examinations: 3
Test of English as a Foreign…: 2
Praxis Series: 1
Brown, Kevin – CEA Forum, 2015
In this article, the author describes his project to take every standardized exam that English majors take. During the summer and fall semesters of 2012, the author signed up for and took the GRE General Test, the Praxis Content Area Exam (English Language, Literature, and Composition: Content Knowledge), the Senior Major Field Tests in…
Descriptors: College Faculty, College English, Test Preparation, Standardized Tests
Attali, Yigal; Bridgeman, Brent; Trapani, Catherine – Journal of Technology, Learning, and Assessment, 2010
A generic approach in automated essay scoring produces scores that have the same meaning across all prompts, existing or new, of a writing assessment. This is accomplished by using a single set of linguistic indicators (or features), a consistent way of combining and weighting these features into essay scores, and a focus on features that are not…
Descriptors: Writing Evaluation, Writing Tests, Scoring, Test Scoring Machines
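The abstract above describes a generic scoring approach: a single set of linguistic features combined with fixed weights, so scores carry the same meaning across prompts. A minimal sketch of that idea follows; the feature names, weights, and score scaling here are illustrative assumptions, not e-rater's actual features or parameters.

```python
# Hypothetical sketch of a generic weighted-feature essay score.
# Feature names and weights are illustrative assumptions only.
FEATURE_WEIGHTS = {
    "grammar_errors_per_100_words": -0.30,  # more errors lowers the score
    "avg_word_length": 0.15,                # longer words raise it slightly
}

def essay_score(features, weights=FEATURE_WEIGHTS):
    """Combine linguistic feature values into one score via a fixed
    weighted sum, so the mapping is identical for every prompt."""
    raw = sum(weights[name] * value for name, value in features.items())
    # Clamp onto an illustrative 1-6 holistic-style scale centered at 3.5.
    return max(1.0, min(6.0, 3.5 + raw))
```

Because the weights never change per prompt, two essays with identical feature values receive identical scores regardless of which prompt they answer, which is the property the abstract emphasizes.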
Quinlan, Thomas; Higgins, Derrick; Wolff, Susanne – Educational Testing Service, 2009
This report evaluates the construct coverage of the e-rater[R] scoring engine. The matter of construct coverage depends on whether one defines writing skill in terms of process or product. Originally, the e-rater engine consisted of a large set of components with a proven ability to predict human holistic scores. By organizing these capabilities…
Descriptors: Guides, Writing Skills, Factor Analysis, Writing Tests