Davis, Larry; Papageorgiou, Spiros – Assessment in Education: Principles, Policy & Practice, 2021
Human raters and machine scoring systems potentially have complementary strengths in evaluating language ability; specifically, it has been suggested that automated systems might be used to make consistent measurements of specific linguistic phenomena, whilst humans evaluate more global aspects of performance. We report on an empirical study that…
Descriptors: Scoring, English for Academic Purposes, Oral English, Speech Tests
Toroujeni, Seyyed Morteza Hashemi – Education and Information Technologies, 2022
Score interchangeability of Computerized Fixed-Length Linear Testing (henceforth CFLT) and Paper-and-Pencil-Based Testing (henceforth PPBT) has become a controversial issue over the last decade, as technology has meaningfully restructured methods of educational assessment. Given this controversy, various testing guidelines published on…
Descriptors: Computer Assisted Testing, Reading Tests, Reading Comprehension, Scoring
Haberman, Shelby J. – Educational Testing Service, 2011
Alternative approaches are discussed for use of e-rater® to score the TOEFL iBT® Writing test. These approaches involve alternate criteria. In the 1st approach, the predicted variable is the expected rater score of the examinee's 2 essays. In the 2nd approach, the predicted variable is the expected rater score of 2 essay responses by the…
Descriptors: Writing Tests, Scoring, Essays, Language Tests
Lee, Yong-Won; Gentile, Claudia; Kantor, Robert – ETS Research Report Series, 2008
The main purpose of the study was to investigate the distinctness and reliability of analytic (or multitrait) rating dimensions and their relationships to holistic scores and "e-rater"® essay feature variables in the context of the TOEFL® computer-based test (CBT) writing assessment. Data analyzed in the study were analytic and holistic…
Descriptors: English (Second Language), Language Tests, Second Language Learning, Scoring
Attali, Yigal – ETS Research Report Series, 2007
This study examined the construct validity of the "e-rater"® automated essay scoring engine as an alternative to human scoring in the context of TOEFL® essay writing. Analyses were based on a sample of students who repeated the TOEFL within a short time period. Two "e-rater" scores were investigated in this study, the first…
Descriptors: Construct Validity, Computer Assisted Testing, Scoring, English (Second Language)
Carlson, Sybil B.; Camp, Roberta – 1985
This paper reports on Educational Testing Service research studies investigating the parameters critical to reliability and validity in both the direct and indirect writing ability assessment of higher education applicants. The studies involved: (1) formulating an operational definition of writing competence; (2) designing and pretesting writing…
Descriptors: College Entrance Examinations, Computer Assisted Testing, English (Second Language), Essay Tests
Carlson, Sybil B.; And Others – 1985
Four writing samples were obtained from 638 foreign college applicants who represented three major foreign language groups (Arabic, Chinese, and Spanish), and from 60 native English speakers. All four were scored holistically, two were also scored for sentence-level and discourse-level skills, and some were scored by the Writer's Workbench…
Descriptors: Arabic, Chinese, College Entrance Examinations, Computer Software
Breland, Hunter M.; Bridgeman, Brent; Fowles, Mary E. – College Entrance Examination Board, 1999
A comprehensive review was conducted of writing research literature and writing test program activities in a number of testing programs. The review was limited to writing assessments used for admission in higher education. Programs reviewed included ACT, Inc.'s ACT™ program, the California State Universities and Colleges (CSUC) testing program,…
Descriptors: Writing Research, Writing Tests, Writing (Composition), Writing Instruction