Heidari, Nasim; Ghanbari, Nasim; Abbasi, Abbas – Language Testing in Asia, 2022
It is widely believed that human rating performance is influenced by an array of different factors. Among these, rater-related variables such as experience, language background, perceptions, and attitudes have been mentioned. One of the important rater-related factors is the way the raters interact with the rating scales. In particular, how raters…
Descriptors: Evaluators, Rating Scales, Language Tests, English (Second Language)

Wei, Jing; Llosa, Lorena – Language Assessment Quarterly, 2015
This article reports on an investigation of the role raters' language background plays in raters' assessment of test takers' speaking ability. Specifically, this article examines differences between American and Indian raters in their scores and scoring processes when rating Indian test takers' responses to the Test of English as a Foreign…
Descriptors: North Americans, Indians, Evaluators, English (Second Language)

Ramineni, Chaitanya; Trapani, Catherine S.; Williamson, David M.; Davey, Tim; Bridgeman, Brent – ETS Research Report Series, 2012
Scoring models for the "e-rater"® system were built and evaluated for the "TOEFL"® exam's independent and integrated writing prompts. Prompt-specific and generic scoring models were built, and evaluation statistics, such as weighted kappas, Pearson correlations, standardized differences in mean scores, and correlations with…
Descriptors: Scoring, Prompting, Evaluators, Computer Software

Jamieson, Joan; Poonpon, Kornwipa – ETS Research Report Series, 2013
Research and development of a new type of scoring rubric for the integrated speaking tasks of "TOEFL iBT"® are described. These "analytic rating guides" could be helpful if tasks modeled after those in TOEFL iBT were used for formative assessment, a purpose which is different from TOEFL iBT's primary use for admission…
Descriptors: Oral Language, Language Proficiency, Scaling, Scores