Publication Date
In 2025: 0
Since 2024: 0
Since 2021 (last 5 years): 0
Since 2016 (last 10 years): 5
Since 2006 (last 20 years): 11
Descriptor
Computer Assisted Testing: 12
Correlation: 12
Essay Tests: 12
Scoring: 9
Scores: 8
Writing Evaluation: 6
Statistical Analysis: 5
Prompting: 4
Regression (Statistics): 4
Writing Tests: 4
Accuracy: 3
Source
ETS Research Report Series: 5
Educational Assessment: 1
International Journal of Testing: 1
Journal of Applied Testing Technology: 1
Journal of Educational Computing Research: 1
Journal of Effective Teaching: 1
Journal of Statistics Education: 1
ReCALL: 1
Author
Wolfe, Edward W.: 2
Attali, Yigal: 1
Bejar, Isaac I.: 1
Belur, Vinetha: 1
Breyer, F. Jay: 1
Bridgeman, Brent: 1
Chen, Jing: 1
Clariana, Roy: 1
Clariana, Roy B.: 1
Coniam, David: 1
Deane, Paul: 1
Publication Type
Journal Articles: 12
Reports - Research: 10
Reports - Descriptive: 1
Reports - Evaluative: 1
Education Level
Higher Education: 5
Secondary Education: 4
Elementary Secondary Education: 2
Postsecondary Education: 2
Elementary Education: 1
Grade 8: 1
High Schools: 1
Junior High Schools: 1
Middle Schools: 1
Location
Hong Kong: 1
Assessments and Surveys
Test of English as a Foreign Language: 2
Graduate Record Examinations: 1
SAT (College Admission Test): 1
Wind, Stefanie A.; Wolfe, Edward W.; Engelhard, George, Jr.; Foltz, Peter; Rosenstein, Mark – International Journal of Testing, 2018
Automated essay scoring engines (AESEs) are becoming increasingly popular as an efficient method for performance assessments in writing, including many language assessments that are used worldwide. Before they can be used operationally, AESEs must be "trained" using machine-learning techniques that incorporate human ratings. However, the…
Descriptors: Computer Assisted Testing, Essay Tests, Writing Evaluation, Scoring
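As a concrete illustration of the training step Wind et al. describe, the sketch below fits a regression model to human ratings. It assumes scikit-learn is available; the toy essays, scores, and feature choice are hypothetical, not the engines studied in the article.

```python
# Minimal sketch of "training" an automated essay scoring engine on human
# ratings (hypothetical data; real engines use richer, validated features).
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.linear_model import Ridge
from sklearn.pipeline import make_pipeline

# Essays already scored by trained human raters (placeholder examples).
essays = [
    "The author argues convincingly that testing improves learning ...",
    "essay is good because it is good ...",
]
human_scores = [5.0, 2.0]

# Regress human scores on simple lexical features.
model = make_pipeline(TfidfVectorizer(), Ridge(alpha=1.0))
model.fit(essays, human_scores)

# Machine-score a new essay.
print(model.predict(["A new essay to be scored automatically ..."]))
```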
Zimmerman, Whitney Alicia; Kang, Hyun Bin; Kim, Kyung; Gao, Mengzhao; Johnson, Glenn; Clariana, Roy; Zhang, Fan – Journal of Statistics Education, 2018
Over two semesters, short essay prompts were developed for use with the Graphical Interface for Knowledge Structure (GIKS), an automated essay scoring system. Participants were students in an undergraduate-level online introductory statistics course. The GIKS compares students' writing samples with an expert's to produce keyword occurrence and…
Descriptors: Undergraduate Students, Introductory Courses, Statistics, Computer Assisted Testing
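The GIKS pipeline itself is not spelled out in the abstract, but the keyword-occurrence comparison it mentions can be sketched as below; the keyword list and essays are invented placeholders, and GIKS's actual algorithm is more elaborate.

```python
# Hedged sketch: compare keyword occurrence in a student essay against an
# expert essay, the general idea the abstract describes.
import re

KEYWORDS = {"mean", "median", "variance", "distribution", "sample"}  # hypothetical

def keyword_occurrence(text):
    """Return the expert keywords that appear in the text."""
    return KEYWORDS & set(re.findall(r"[a-z]+", text.lower()))

expert = "The sample mean estimates the center of the distribution."
student = "The mean and median describe a distribution."

shared = keyword_occurrence(student) & keyword_occurrence(expert)
overlap = len(shared) / max(len(keyword_occurrence(expert)), 1)
print(f"shared keywords: {sorted(shared)}, overlap = {overlap:.2f}")
```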
Breyer, F. Jay; Rupp, André A.; Bridgeman, Brent – ETS Research Report Series, 2017
In this research report, we present an empirical argument for the use of a contributory scoring approach for the 2-essay writing assessment of the analytical writing section of the "GRE"® test in which human and machine scores are combined for score creation at the task and section levels. The approach was designed to replace a currently…
Descriptors: College Entrance Examinations, Scoring, Essay Tests, Writing Evaluation
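The report's actual GRE weighting and rounding rules are not given in this abstract, so the sketch below only shows the general shape of a contributory rule, with a purely illustrative 50/50 weight.

```python
# Hypothetical contributory scoring: blend a human rating and a machine
# rating into one task score, then average task scores into a section score.
def contributory_score(human, machine, w_human=0.5):
    return w_human * human + (1.0 - w_human) * machine

task1 = contributory_score(human=4.0, machine=3.8)
task2 = contributory_score(human=3.5, machine=3.9)
print(round((task1 + task2) / 2, 2))
```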
Chen, Jing; Zhang, Mo; Bejar, Isaac I. – ETS Research Report Series, 2017
Automated essay scoring (AES) generally computes essay scores as a function of macrofeatures derived from a set of microfeatures extracted from the text using natural language processing (NLP). In the "e-rater"® automated scoring engine, developed at "Educational Testing Service" (ETS) for the automated scoring of essays, each…
Descriptors: Computer Assisted Testing, Scoring, Automation, Essay Tests
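The micro-to-macro architecture the abstract describes can be pictured as follows. e-rater's real features and weights are proprietary, so every feature name and number here is a placeholder.

```python
# Hypothetical sketch: aggregate NLP microfeatures into macrofeatures, then
# score the essay as a weighted function of the macrofeatures.
micro = {
    "spelling_errors": 2.0,
    "grammar_errors": 1.0,
    "type_token_ratio": 0.55,
    "avg_word_length": 4.8,
}

macro = {
    "mechanics": -(micro["spelling_errors"] + micro["grammar_errors"]),
    "vocabulary": micro["type_token_ratio"] * micro["avg_word_length"],
}

weights = {"mechanics": 0.3, "vocabulary": 0.7}  # invented weights
score = sum(weights[k] * macro[k] for k in weights)
print(f"macro = {macro}, score = {score:.2f}")
```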
Mao, Liyang; Liu, Ou Lydia; Roohr, Katrina; Belur, Vinetha; Mulholland, Matthew; Lee, Hee-Sun; Pallant, Amy – Educational Assessment, 2018
Scientific argumentation is one of the core practices for teachers to implement in science classrooms. We developed a computer-based formative assessment to support students' construction and revision of scientific arguments. The assessment is built upon automated scoring of students' arguments and provides feedback to students and teachers.…
Descriptors: Computer Assisted Testing, Science Tests, Scoring, Automation
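The feedback step can be pictured as a mapping from automated argument scores to revision hints; the rubric levels and messages below are hypothetical, not the assessment's actual feedback.

```python
# Hypothetical formative feedback keyed to a machine-assigned argument score.
FEEDBACK = {
    0: "State a claim about the phenomenon.",
    1: "Add evidence that supports your claim.",
    2: "Explain how your evidence supports your claim.",
    3: "Strong argument. Consider addressing a counterargument.",
}

def formative_feedback(machine_score):
    """Return revision guidance for an automatically scored argument."""
    return FEEDBACK[min(max(machine_score, 0), max(FEEDBACK))]

print(formative_feedback(machine_score=1))
```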
Deane, Paul – ETS Research Report Series, 2014
This paper explores automated methods for measuring features of student writing and determining their relationship to writing quality and other features of literacy, such as reading test scores. In particular, it uses the "e-rater"™ automatic essay scoring system to measure "product" features (measurable traits of the final…
Descriptors: Writing Processes, Writing Evaluation, Student Evaluation, Writing Skills
Glew, David; Meyer, Tracy; Sawyer, Becky; Schuhmann, Pete; Wray, Barry – Journal of Effective Teaching, 2011
Business schools are often criticized for the inadequate writing skills of their graduates. Improving writing skills involves first understanding the current skill level of students. This research attempts to provide insights into the effectiveness of the current method of assessing writing skills in a school of business at a large regional…
Descriptors: Undergraduate Students, Business Administration Education, Business Schools, Writing Skills
Coniam, David – ReCALL, 2009
This paper describes a study of the computer essay-scoring program BETSY. While the use of computers in rating written scripts has been criticised in some quarters for lacking transparency or for fitting poorly with how human raters rate written scripts, a number of essay rating programs are available commercially, many of which claim to offer comparable…
Descriptors: Writing Tests, Scoring, Foreign Countries, Interrater Reliability
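BETSY (the Bayesian Essay Test Scoring sYstem) classifies essays with Bayesian methods, so a naive Bayes sketch conveys the core idea; this is not BETSY's own code, and the toy data are invented.

```python
# Naive Bayes text classification in the spirit of Bayesian essay scoring,
# assuming scikit-learn; the two-essay training set is a placeholder.
from sklearn.feature_extraction.text import CountVectorizer
from sklearn.naive_bayes import MultinomialNB
from sklearn.pipeline import make_pipeline

essays = [
    "well organised argument with clear evidence",
    "short answer with little development",
]
bands = ["high", "low"]  # human-assigned score bands

clf = make_pipeline(CountVectorizer(), MultinomialNB())
clf.fit(essays, bands)
print(clf.predict(["a clear argument supported by evidence"]))
```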
Attali, Yigal – ETS Research Report Series, 2007
This study examined the construct validity of the "e-rater"® automated essay scoring engine as an alternative to human scoring in the context of TOEFL® essay writing. Analyses were based on a sample of students who repeated the TOEFL within a short time period. Two "e-rater" scores were investigated in this study, the first…
Descriptors: Construct Validity, Computer Assisted Testing, Scoring, English (Second Language)
Kobrin, Jennifer L.; Deng, Hui; Shaw, Emily J. – Journal of Applied Testing Technology, 2007
This study was designed to address two frequent criticisms of the SAT essay: that essay length is the best predictor of scores, and that there is an advantage in using more "sophisticated" examples as opposed to personal experience. The study was based on 2,820 essays from the first three administrations of the new SAT. Each essay was…
Descriptors: Testing Programs, Computer Assisted Testing, Construct Validity, Writing Skills
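The length-as-predictor criticism reduces to a correlation check, sketched below assuming SciPy is available; the word counts and scores are fabricated placeholders, not the study's 2,820 essays.

```python
# Correlate essay length with essay score (invented data).
from scipy.stats import pearsonr

word_counts = [120, 250, 310, 400, 180, 350]
essay_scores = [2, 4, 4, 6, 3, 5]

r, p = pearsonr(word_counts, essay_scores)
print(f"length-score correlation r = {r:.2f} (p = {p:.3f})")
```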
Clariana, Roy B.; Wallace, Patricia – Journal of Educational Computing Research, 2007
This proof-of-concept investigation describes a computer-based approach for deriving the knowledge structure of individuals and of groups from their written essays, and considers the convergent criterion-related validity of the computer-based scores relative to human rater essay scores and multiple-choice test scores. After completing a…
Descriptors: Computer Assisted Testing, Multiple Choice Tests, Construct Validity, Cognitive Structures
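In the spirit of this proof of concept, a crude knowledge structure can be derived by linking two domain terms whenever they co-occur in a sentence; the term list here is hypothetical, and the paper's actual method is more sophisticated.

```python
# Hypothetical sketch: count sentence-level co-occurrences of domain terms.
import itertools
import re
from collections import Counter

TERMS = {"heart", "blood", "oxygen", "lungs"}  # invented domain terms

def knowledge_links(essay):
    links = Counter()
    for sentence in re.split(r"[.!?]", essay.lower()):
        present = sorted(TERMS & set(re.findall(r"[a-z]+", sentence)))
        links.update(itertools.combinations(present, 2))
    return links

essay = "The heart pumps blood. Blood carries oxygen from the lungs."
print(knowledge_links(essay))
```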
Wolfe, Edward W.; Manalo, Jonathan R. – ETS Research Report Series, 2005
This study examined scores from 133,906 operationally scored Test of English as a Foreign Language™ (TOEFL®) essays to determine whether the choice of composition medium has any impact on score quality for subgroups of test-takers. Results of analyses demonstrate that (a) scores assigned to word-processed essays are slightly more reliable than…
Descriptors: English (Second Language), Language Tests, Second Language Learning, Scores
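One simple way to compare score reliability across composition media is an inter-rater correlation computed separately per medium, sketched below assuming SciPy; all ratings are invented, and the report's analyses are more elaborate than this.

```python
# Compare inter-rater correlation for word-processed vs. handwritten essays
# (hypothetical ratings; two human raters per essay).
from scipy.stats import pearsonr

typed_r1, typed_r2 = [4, 5, 3, 4, 5], [4, 5, 3, 5, 5]
hand_r1, hand_r2 = [4, 5, 3, 4, 5], [3, 5, 4, 4, 4]

for medium, (a, b) in {
    "word-processed": (typed_r1, typed_r2),
    "handwritten": (hand_r1, hand_r2),
}.items():
    r, _ = pearsonr(a, b)
    print(f"{medium}: inter-rater r = {r:.2f}")
```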