Haberman, Shelby J. – Educational Testing Service, 2011
Alternative approaches are discussed for use of e-rater® to score the TOEFL iBT® Writing test. These approaches involve alternate criteria. In the first approach, the predicted variable is the expected rater score of the examinee's two essays. In the second approach, the predicted variable is the expected rater score of two essay responses by the…
Descriptors: Writing Tests, Scoring, Essays, Language Tests
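
A minimal sketch of a prediction criterion in this spirit, assuming a simple least-squares regression of e-rater-style essay features onto the mean of observed human ratings (used here as a stand-in for the expected rater score of the examinee's two essays). The feature set, ratings, and all data values are hypothetical and not taken from the report.

```python
import numpy as np

# Illustrative only: each row holds e-rater-style essay features for one examinee
# (e.g., development, organization, mechanics); values are made up.
X = np.array([
    [3.2, 2.8, 3.0],
    [4.1, 3.9, 4.0],
    [2.5, 2.7, 2.4],
    [3.8, 3.5, 3.6],
])

# Hypothetical human ratings: two raters score each of the examinee's two essays.
ratings = np.array([
    [3, 3, 2, 3],
    [4, 4, 4, 5],
    [2, 3, 2, 2],
    [4, 3, 4, 4],
])

# Criterion in the spirit of the first approach: the predicted variable is the
# expected rater score of the examinee's two essays, approximated here by the
# mean of the observed ratings.
y = ratings.mean(axis=1)

# Fit an ordinary least-squares model mapping features to that expected score.
X_design = np.column_stack([np.ones(len(X)), X])
coef, *_ = np.linalg.lstsq(X_design, y, rcond=None)

new_essay = np.array([1.0, 3.5, 3.3, 3.4])   # intercept term + features
print("predicted expected rater score:", new_essay @ coef)
```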
Haberman, Shelby J.; Dorans, Neil J. – Educational Testing Service, 2011
For testing programs that administer multiple forms within a year and across years, score equating is used to ensure that scores can be used interchangeably. In an ideal world, sample sizes are large and representative of populations that hardly change over time, and very reliable alternate test forms are built with nearly identical psychometric…
Descriptors: Scores, Reliability, Equated Scores, Test Construction
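
The abstract only sets the context; as one concrete, commonly used instance of score equating (not necessarily the method the report examines), the sketch below applies linear mean-sigma equating to made-up scores from two hypothetical alternate forms.

```python
import numpy as np

# Hypothetical raw scores from examinees who took two alternate forms.
form_x_scores = np.array([18, 22, 25, 27, 30, 33], dtype=float)
form_y_scores = np.array([20, 23, 27, 28, 31, 35], dtype=float)

def linear_equate(x, x_scores, y_scores):
    """Mean-sigma linear equating: map a form-X score onto the form-Y scale
    by matching the means and standard deviations of the two score
    distributions."""
    mu_x, sd_x = x_scores.mean(), x_scores.std(ddof=1)
    mu_y, sd_y = y_scores.mean(), y_scores.std(ddof=1)
    return mu_y + (sd_y / sd_x) * (x - mu_x)

# Example: place a form-X score of 26 on the form-Y scale.
print(linear_equate(26.0, form_x_scores, form_y_scores))
```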


