Winke, Paula; Gass, Susan; Myford, Carol – Language Testing, 2013
Based on evidence that listeners may favor certain foreign accents over others (Gass & Varonis, 1984; Major, Fitzmaurice, Bunta, & Balasubramanian, 2002; Tauroza & Luk, 1997) and that language-test raters may better comprehend and/or rate the speech of test takers whose native languages (L1s) are more familiar on some level (Carey,…
Descriptors: Native Language, Bias, Dialects, Pronunciation
Ramineni, Chaitanya; Trapani, Catherine S.; Williamson, David M.; Davey, Tim; Bridgeman, Brent – ETS Research Report Series, 2012
Scoring models for the e-rater® system were built and evaluated for the TOEFL® exam's independent and integrated writing prompts. Prompt-specific and generic scoring models were built, and evaluation statistics, such as weighted kappas, Pearson correlations, standardized differences in mean scores, and correlations with…
Descriptors: Scoring, Prompting, Evaluators, Computer Software
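The abstract names the agreement statistics used to compare e-rater scores against human ratings. As a minimal illustration only (the score vectors and the 1-6 scale below are assumptions for demonstration, not data from the report), this Python sketch computes quadratic weighted kappa, Pearson correlation, and the standardized difference in mean scores for a pair of human and machine score vectors.

```python
import numpy as np

def quadratic_weighted_kappa(human, machine, min_score=1, max_score=6):
    """Quadratic weighted kappa between two integer score vectors."""
    human, machine = np.asarray(human), np.asarray(machine)
    n_cats = max_score - min_score + 1
    # Observed joint distribution O[i, j]: proportion of (human=i, machine=j) pairs.
    O = np.zeros((n_cats, n_cats))
    for h, m in zip(human, machine):
        O[h - min_score, m - min_score] += 1
    O /= O.sum()
    # Expected joint distribution under independence of the two marginals.
    E = np.outer(O.sum(axis=1), O.sum(axis=0))
    # Quadratic disagreement weights: zero on the diagonal, growing with distance.
    idx = np.arange(n_cats)
    W = (idx[:, None] - idx[None, :]) ** 2 / (n_cats - 1) ** 2
    return 1.0 - (W * O).sum() / (W * E).sum()

def standardized_mean_difference(human, machine):
    """Difference in mean scores scaled by the pooled standard deviation."""
    human, machine = np.asarray(human, float), np.asarray(machine, float)
    pooled_sd = np.sqrt((human.var(ddof=1) + machine.var(ddof=1)) / 2)
    return (machine.mean() - human.mean()) / pooled_sd

# Made-up scores on a 1-6 essay scale, purely to exercise the functions.
human = np.array([3, 4, 4, 5, 2, 3, 4, 5, 3, 4])
erater = np.array([3, 4, 5, 5, 2, 3, 3, 5, 4, 4])

print("weighted kappa:", round(quadratic_weighted_kappa(human, erater), 3))
print("Pearson r:     ", round(np.corrcoef(human, erater)[0, 1], 3))
print("std. mean diff:", round(standardized_mean_difference(human, erater), 3))
```

In evaluations of this kind, higher kappa and correlation values indicate closer machine-human agreement, while the standardized mean difference flags systematic leniency or severity of the automated scorer relative to human raters.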