Publication Date
In 2025: 0
Since 2024: 0
Since 2021 (last 5 years): 0
Since 2016 (last 10 years): 2
Since 2006 (last 20 years): 9
Source
ETS Research Report Series: 3
Educational Testing Service: 3
Applied Measurement in Education: 2
Assessing Writing: 1
SAGE Open: 1
Author
Attali, Yigal: 5
Higgins, Derrick: 2
Ahmadi Shirazi, Masoumeh: 1
Arslan, Burcu: 1
Blanchard, Daniel: 1
Bridgeman, Brent: 1
Burstein, Jill: 1
Cahill, Aoife: 1
Camp, Roberta: 1
Carlson, Sybil B.: 1
Chodorow, Martin: 1
Publication Type
Reports - Research: 10
Journal Articles: 7
Reports - Evaluative: 4
Reports - Descriptive: 2
Speeches/Meeting Papers: 2
Guides - Non-Classroom: 1
Tests/Questionnaires: 1
Education Level
Higher Education: 4
Postsecondary Education: 4
Elementary Education: 1
Elementary Secondary Education: 1
Grade 6: 1
Grade 7: 1
Grade 8: 1
Grade 9: 1
Grade 10: 1
Grade 11: 1
Grade 12: 1
Audience
Practitioners: 1
Researchers: 1
Location
Iran: 1
Assessments and Surveys
Test of English as a Foreign Language: 16
Graduate Record Examinations: 5
Test of Written English: 2
Graduate Management Admission…: 1
International English…: 1
Finn, Bridgid; Arslan, Burcu; Walsh, Matthew – Applied Measurement in Education, 2020
To score an essay response, raters draw on previously trained skills and knowledge of the underlying rubric and scoring criteria. Cognitive processes such as remembering, forgetting, and skill decay likely influence rater performance. To investigate how forgetting influences scoring, we evaluated raters' scoring accuracy on TOEFL and GRE essays…
Descriptors: Epistemology, Essay Tests, Evaluators, Cognitive Processes
Ahmadi Shirazi, Masoumeh – SAGE Open, 2019
Threats to construct validity should be reduced to a minimum. To that end, potential sources of bias (raters, items, and tests, as well as gender, age, race, language background, culture, and socio-economic status) need to be identified and removed. This study investigates raters' experience, language background, and the choice of essay prompt as potential…
Descriptors: Foreign Countries, Language Tests, Test Bias, Essay Tests
Attali, Yigal – Educational Testing Service, 2011
The e-rater® automated essay scoring system is used operationally in the scoring of TOEFL iBT® independent essays. Previous research has found support for a 3-factor structure of the e-rater features. This 3-factor structure has an attractive hierarchical linguistic interpretation with a word choice factor, a grammatical convention within a…
Descriptors: Essay Tests, Language Tests, Test Scoring Machines, Automation
Bridgeman, Brent; Trapani, Catherine; Attali, Yigal – Applied Measurement in Education, 2012
Essay scores generated by machine and by human raters are generally comparable; that is, they can produce scores with similar means and standard deviations, and machine scores generally correlate as highly with human scores as scores from one human correlate with scores from another human. Although human and machine essay scores are highly related…
Descriptors: Scoring, Essay Tests, College Entrance Examinations, High Stakes Tests
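The comparability evidence described in this abstract (similar score means and a human-machine correlation on par with human-human agreement) can be sketched as follows. The scores and the `pearson` helper are invented for illustration; they are not data or code from the study.

```python
import statistics

def pearson(x, y):
    """Pearson correlation between two equal-length score lists."""
    mx, my = statistics.fmean(x), statistics.fmean(y)
    cov = sum((a - mx) * (b - my) for a, b in zip(x, y))
    sx = sum((a - mx) ** 2 for a in x) ** 0.5
    sy = sum((b - my) ** 2 for b in y) ** 0.5
    return cov / (sx * sy)

# Illustrative scores from two human raters and a machine on six essays.
human1 = [3, 4, 2, 5, 4, 3]
human2 = [3, 5, 2, 4, 4, 3]
machine = [3.2, 4.5, 2.1, 4.8, 3.9, 3.0]

# Comparable means and a human-machine correlation close to the
# human-human correlation are the usual comparability evidence.
print(statistics.fmean(human1), statistics.fmean(machine))
print(pearson(human1, human2), pearson(human1, machine))
```

In practice such checks are run on large operational samples; this toy comparison only shows the shape of the analysis.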
Weigle, Sara Cushing – Assessing Writing, 2013
This article presents considerations for using automated scoring systems to evaluate second language writing. A distinction is made between English language learners in English-medium educational systems and those studying English in their own countries for a variety of purposes, and between learning-to-write and writing-to-learn in a second…
Descriptors: Scoring, Second Language Learning, Second Languages, English Language Learners
Blanchard, Daniel; Tetreault, Joel; Higgins, Derrick; Cahill, Aoife; Chodorow, Martin – ETS Research Report Series, 2013
This report presents work on the development of a new corpus of non-native English writing. It will be useful for the task of native language identification, as well as grammatical error detection and correction, and automatic essay scoring. In this report, the corpus is described in detail.
Descriptors: Language Tests, Second Language Learning, English (Second Language), Writing Tests
Attali, Yigal – Educational Testing Service, 2011
This paper proposes an alternative content measure for essay scoring, based on the "difference" in the relative frequency of a word in high-scored versus low-scored essays. The "differential word use" (DWU) measure is the average of these differences across all words in the essay. A positive value indicates the essay is using…
Descriptors: Scoring, Essay Tests, Word Frequency, Content Analysis
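The DWU measure summarized in this abstract (the average, over the words of an essay, of the difference in a word's relative frequency between high-scored and low-scored reference essays) can be sketched as below. The function names and the simple whitespace tokenization are assumptions for illustration, not details from the paper.

```python
from collections import Counter

def relative_freqs(essays):
    """Relative frequency of each word across a set of essays."""
    counts = Counter(w for e in essays for w in e.lower().split())
    total = sum(counts.values())
    return {w: c / total for w, c in counts.items()}

def dwu_score(essay, high_freqs, low_freqs):
    """Average difference in relative word frequency (high-scored minus
    low-scored reference essays) over the words of one essay."""
    words = essay.lower().split()
    if not words:
        return 0.0
    diffs = [high_freqs.get(w, 0.0) - low_freqs.get(w, 0.0) for w in words]
    return sum(diffs) / len(diffs)
```

Under this sketch, an essay drawing on vocabulary typical of high-scored essays gets a positive DWU value, matching the interpretation given in the abstract.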
Quinlan, Thomas; Higgins, Derrick; Wolff, Susanne – Educational Testing Service, 2009
This report evaluates the construct coverage of the e-rater® scoring engine. The matter of construct coverage depends on whether one defines writing skill in terms of process or product. Originally, the e-rater engine consisted of a large set of components with a proven ability to predict human holistic scores. By organizing these capabilities…
Descriptors: Guides, Writing Skills, Factor Analysis, Writing Tests
Kaplan, Randy M.; And Others – 1995
The increased use of constructed-response items, like essays, creates a need for tools to score these responses automatically, in part or as a whole. This study explores one approach to analyzing essay-length natural-language constructed responses. A decision model for scoring essays was developed and evaluated. The decision model uses…
Descriptors: Computer Software, Constructed Response, Essay Tests, Grammar
Golub-Smith, Marna; And Others – 1993
The Test of Written English (TWE), administered with certain designated examinations of the Test of English as a Foreign Language (TOEFL), consists of a single essay prompt to which examinees have 30 minutes to respond. Questions have been raised about the comparability of different TWE prompts. This study was designed to elicit essays for prompts…
Descriptors: Charts, Comparative Analysis, English (Second Language), Essay Tests
Lee, Yong-Won – 2001
An essay test is now an integral part of the computer based Test of English as a Foreign Language (TOEFL-CBT). This paper provides a brief overview of the current TOEFL-CBT essay test, describes the operational procedures for essay scoring, including the Online Scoring Network (OSN) of the Educational Testing Service (ETS), and discusses major…
Descriptors: Computer Assisted Testing, English (Second Language), Essay Tests, Interrater Reliability
Attali, Yigal – ETS Research Report Series, 2007
This study examined the construct validity of the "e-rater"® automated essay scoring engine as an alternative to human scoring in the context of TOEFL® essay writing. Analyses were based on a sample of students who repeated the TOEFL within a short time period. Two "e-rater" scores were investigated in this study, the first…
Descriptors: Construct Validity, Computer Assisted Testing, Scoring, English (Second Language)
Pike, Lewis W. – 1979
This evaluative and developmental study was undertaken between 1972-74 to determine the effectiveness of items used for the Test of English as a Foreign Language (TOEFL) in relation to other item types used in assessing English proficiency, and to recommend possible changes in TOEFL content and format. TOEFL was developed to assess the English…
Descriptors: Cloze Procedure, English (Second Language), Essay Tests, Foreign Students
Educational Testing Service, Princeton, NJ. – 1990
This manual has been prepared for those responsible for interpreting scores on the Test of English as a Foreign Language (TOEFL). In addition to test interpretation information, the manual describes the test, explains the TOEFL program, and discusses program research activities. The TOEFL was developed in 1963 to test the English-language…
Descriptors: College Entrance Examinations, English (Second Language), Essay Tests, Expressive Language
Carlson, Sybil B.; Camp, Roberta – 1985
This paper reports on Educational Testing Service research studies investigating the parameters critical to reliability and validity in both the direct and indirect writing ability assessment of higher education applicants. The studies involved: (1) formulating an operational definition of writing competence; (2) designing and pretesting writing…
Descriptors: College Entrance Examinations, Computer Assisted Testing, English (Second Language), Essay Tests