Showing all 10 results
Peer reviewed
Carey, Michael D.; Szocs, Stefan – Language Testing, 2024
This controlled experimental study investigated the interaction of variables associated with rating the pronunciation component of high-stakes English-language-speaking tests such as IELTS and TOEFL iBT. One hundred experienced raters who were all either familiar or unfamiliar with Brazilian-accented English or Papua New Guinean Tok Pisin-accented…
Descriptors: Dialects, Pronunciation, Suprasegmentals, Familiarity
Peer reviewed
Choi, Jin Soo – Applied Language Learning, 2021
This study examined the impact of the manipulated task complexity (Robinson 2001a, 2001b, 2007, 2011; Robinson & Gilabert, 2007) on second language (L2) speech comprehensibility. I examined whether manipulated task complexity (a) impacts L2 speech comprehensibility, (b) aligns with L2 speakers' perception of task difficulty (cognitive…
Descriptors: Task Analysis, Second Language Learning, Second Language Instruction, Pronunciation
Peer reviewed
Ahmadi Shirazi, Masoumeh – SAGE Open, 2019
Threats to construct validity should be reduced to a minimum; to that end, sources of bias (raters, items, and tests, as well as gender, age, race, language background, culture, and socioeconomic status) need to be spotted and removed. This study investigates raters' experience, language background, and the choice of essay prompt as potential…
Descriptors: Foreign Countries, Language Tests, Test Bias, Essay Tests
Peer reviewed
Gu, Lin; Davis, Larry; Tao, Jacob; Zechner, Klaus – Assessment in Education: Principles, Policy & Practice, 2021
Recent technology advancements have increased the prospects for automated spoken language technology to provide feedback on speaking performance. In this study we examined user perceptions of using an automated feedback system for preparing for the TOEFL iBT® test. Test takers and language teachers evaluated three types of machine-generated…
Descriptors: Audio Equipment, Test Preparation, Feedback (Response), Scores
Peer reviewed
Huang, Becky; Alegre, Analucia; Eisenberg, Ann – Language Assessment Quarterly, 2016
The project aimed to examine the effect of raters' familiarity with accents on their judgments of non-native speech. Participants included three groups of raters who were either from Spanish Heritage, Spanish Non-Heritage, or Chinese Heritage backgrounds (n = 16 in each group) using Winke & Gass's (2013) definition of a heritage learner as…
Descriptors: Contrastive Linguistics, Evaluators, Chinese, Spanish
Peer reviewed
Wei, Jing; Llosa, Lorena – Language Assessment Quarterly, 2015
This article reports on an investigation of the role raters' language background plays in raters' assessment of test takers' speaking ability. Specifically, this article examines differences between American and Indian raters in their scores and scoring processes when rating Indian test takers' responses to the Test of English as a Foreign…
Descriptors: North Americans, Indians, Evaluators, English (Second Language)
Crossley, Scott A.; Kyle, Kristopher; Allen, Laura K.; Guo, Liang; McNamara, Danielle S. – Grantee Submission, 2014
This study investigates the potential for linguistic microfeatures related to length, complexity, cohesion, relevance, topic, and rhetorical style to predict L2 writing proficiency. Computational indices were calculated by two automated text analysis tools (Coh-Metrix and the Writing Assessment Tool) and used to predict human essay ratings in a…
Descriptors: Computational Linguistics, Essays, Scoring, Writing Evaluation
Peer reviewed
Bridgeman, Brent; Powers, Donald; Stone, Elizabeth; Mollaun, Pamela – Language Testing, 2012
Scores assigned by trained raters and by an automated scoring system (SpeechRater[TM]) on the speaking section of the TOEFL iBT[TM] were validated against a communicative competence criterion. Specifically, a sample of 555 undergraduate students listened to speech samples from 184 examinees who took the Test of English as a Foreign Language…
Descriptors: Undergraduate Students, Speech Communication, Rating Scales, Scoring
Peer reviewed
PDF on ERIC
Winke, Paula; Gass, Susan; Myford, Carol – ETS Research Report Series, 2011
This study investigated whether raters' second language (L2) background and the first language (L1) of test takers taking the TOEFL iBT® Speaking test were related through scoring. After an initial 4-hour training period, a group of 107 raters (mostly of learners of Chinese, Korean, and Spanish), listened to a selection of 432 speech samples that…
Descriptors: Second Language Learning, Evaluators, Speech Tests, English (Second Language)
Peer reviewed
PDF on ERIC
Xi, Xiaoming; Mollaun, Pam – ETS Research Report Series, 2006
This study explores the utility of analytic scoring for the TOEFL® Academic Speaking Test (TAST) in providing useful and reliable diagnostic information on three aspects of candidates' performance: delivery, language use, and topic development. Generalizability (G) studies were used to investigate the dependability of the analytic scores, the distinctness of the…
Descriptors: English (Second Language), Language Tests, Second Language Learning, Oral Language