Publication Date
In 2025 | 0
Since 2024 | 0
Since 2021 (last 5 years) | 0
Since 2016 (last 10 years) | 0
Since 2006 (last 20 years) | 5
Descriptor
Correlation | 5
Language Tests | 5
Accuracy | 2
Computer Assisted Testing | 2
English (Second Language) | 2
Foreign Countries | 2
Licensing Examinations… | 2
Scores | 2
Test Theory | 2
Test Validity | 2
Writing Tests | 2
Source
Educational Testing Service | 5
Author
Haberman, Shelby J. | 2
Attali, Yigal | 1
Kim, Hae-Jin | 1
Kim, Sooyeon | 1
Powers, Donald E. | 1
Sinharay, Sandip | 1
Stricker, Lawrence J. | 1
VanWinkle, Waverely | 1
Walker, Michael E. | 1
Weng, Vincent Z. | 1
Yu, Feng | 1
Publication Type
Reports - Evaluative | 3
Reports - Descriptive | 1
Reports - Research | 1
Education Level
Higher Education | 1
Assessments and Surveys
Test of English as a Foreign Language | 1
Test of English for International Communication | 1
Haberman, Shelby J.; Sinharay, Sandip – Educational Testing Service, 2011
Subscores are reported for several operational assessments. Haberman (2008) suggested a method based on classical test theory to determine whether the true subscore is predicted better by the corresponding observed subscore or by the total score. Researchers are often interested in learning how different subgroups perform on subtests. Stricker (1993) and…
Descriptors: True Scores, Test Theory, Prediction, Group Membership
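The Haberman (2008) method mentioned in this abstract reduces, under classical test theory, to comparing two proportional-reduction-in-mean-squared-error (PRMSE) values: one for predicting the true subscore from the observed subscore, and one for predicting it from the total score. A minimal sketch of that decision rule in Python, assuming the two PRMSEs have already been estimated from a CTT analysis (all numeric values hypothetical):

def subscore_has_added_value(prmse_subscore: float, prmse_total: float) -> bool:
    """True if the observed subscore predicts the true subscore better
    than the total score does, i.e., the subscore is worth reporting."""
    return prmse_subscore > prmse_total

# Under CTT, the subscore-based PRMSE equals the subscore's reliability,
# and the total-based PRMSE is the squared correlation between the true
# subscore and the observed total score. Illustrative values only:
prmse_s = 0.72  # hypothetical estimated subscore reliability
prmse_x = 0.78  # hypothetical squared corr(true subscore, total score)
print(subscore_has_added_value(prmse_s, prmse_x))  # False: report total only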
Kim, Sooyeon; Walker, Michael E. – Educational Testing Service, 2011
This study examines the use of subpopulation invariance indices to evaluate the appropriateness of using a multiple-choice (MC) item anchor in mixed-format tests, which include both MC and constructed-response (CR) items. Linking functions were derived in the nonequivalent groups with anchor test (NEAT) design using an MC-only anchor set for 4…
Descriptors: Test Format, Multiple Choice Tests, Test Items, Gender Differences
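The abstract does not name the specific invariance indices used; a standard choice in this literature is the root-mean-square difference (RMSD) between each subgroup's linking function and the total-group linking function (Dorans & Holland, 2000). A hedged sketch of that index, with all score values invented:

import numpy as np

def rmsd_invariance(total_link, group_links, group_weights):
    """RMSD invariance index at each raw-score point.

    total_link:    total-group equated scores, shape (n_scores,)
    group_links:   subgroup equated scores, shape (n_groups, n_scores)
    group_weights: subgroup proportions summing to 1, shape (n_groups,)
    """
    sq_diff = (np.asarray(group_links) - np.asarray(total_link)) ** 2
    return np.sqrt(np.asarray(group_weights) @ sq_diff)

# Two subgroups (e.g., the gender groups in this study) on a toy scale:
total = np.array([10.0, 20.0, 30.0])
groups = np.array([[10.5, 20.2, 29.8],
                   [9.6, 19.9, 30.1]])
print(rmsd_invariance(total, groups, np.array([0.5, 0.5])))
# Small values relative to the score scale suggest the MC-only anchor
# yields linking functions that are invariant across the subgroups.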
Haberman, Shelby J. – Educational Testing Service, 2011
Alternative approaches are discussed for using e-rater[R] to score the TOEFL iBT[R] Writing test. These approaches involve alternative criteria. In the first approach, the predicted variable is the expected rater score of the examinee's two essays. In the second approach, the predicted variable is the expected rater score of two essay responses by the…
Descriptors: Writing Tests, Scoring, Essays, Language Tests
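Both approaches in this report frame automated scoring as a regression problem: predict an expected human rater score from automated essay features. The actual e-rater features and weights are not given here; the sketch below uses ordinary least squares on invented data purely to illustrate the setup.

import numpy as np

rng = np.random.default_rng(0)

# Hypothetical stand-ins for e-rater essay features (e.g., length,
# grammar-error rate, vocabulary level) for 100 examinees:
X = rng.normal(size=(100, 3))
# Criterion: the examinee's expected rater score, proxied here by the
# average of the human ratings of the examinee's two essays (invented):
y = X @ np.array([0.8, -0.5, 0.3]) + rng.normal(scale=0.4, size=100)

# Fit ordinary least squares with an intercept via the normal equations:
X1 = np.column_stack([np.ones(len(X)), X])
beta, *_ = np.linalg.lstsq(X1, y, rcond=None)
print("feature weights:", np.round(beta[1:], 3))
print("predicted score for examinee 0:", round(float(X1[0] @ beta), 2))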
Powers, Donald E.; Kim, Hae-Jin; Yu, Feng; Weng, Vincent Z.; VanWinkle, Waverely – Educational Testing Service, 2009
To facilitate the interpretation of test scores from the new TOEIC[R] (Test of English for International Communication[TM]) speaking and writing tests as measures of English-language proficiency, we administered a self-assessment inventory to TOEIC examinees in Japan and Korea to gather their perceptions of their ability to perform a variety of…
Descriptors: English for Special Purposes, Language Tests, Writing Tests, Speech Tests
Stricker, Lawrence J.; Attali, Yigal – Educational Testing Service, 2010
The principal aims of this study, a conceptual replication of an earlier investigation of the TOEFL[R] computer-based test (TOEFL CBT) in Buenos Aires, Cairo, and Frankfurt, were to assess test takers' reported acceptance of the TOEFL Internet-based test (TOEFL iBT[TM]) and its associations with possible determinants of this acceptance and…
Descriptors: Computer Attitudes, Questionnaires, Comparative Analysis, Foreign Countries