Showing 1 to 15 of 19 results
Peer reviewed
Franz Holzknecht; Sandrine Tornay; Alessia Battisti; Aaron Olaf Batty; Katja Tissi; Tobias Haug; Sarah Ebling – Language Assessment Quarterly, 2024
Although automated spoken language assessment is rapidly growing, such systems have not been widely developed for signed languages. This study provides validity evidence for an automated web application that was developed to assess and give feedback on handshape and hand movement of L2 learners' Swiss German Sign Language signs. The study shows…
Descriptors: Sign Language, Vocabulary Development, Educational Assessment, Automation
Peer reviewed
Park, Mi Sun – Language Assessment Quarterly, 2020
In the present study, I examined the effects of rater characteristics, in particular, raters' familiarity with a foreign accent, on the assessment of second language (L2) pronunciation. Forty-three native English-speaking teachers were divided into three groups according to their reported types of familiarity with Korean accents: heritage,…
Descriptors: Evaluators, Familiarity, Second Language Learning, English (Second Language)
Peer reviewed
Lockwood, Jane; Raquel, Michelle – Language Assessment Quarterly, 2019
Millions of customer services representatives are assessed each year by subject matter experts (e.g., recruiters, team leaders) in Asian contact centres to ensure good spoken communication skills when serving customers on the phones. In other workplace contexts, language experts are employed to do this work but in Asian contact centres, a…
Descriptors: English (Second Language), Second Language Learning, Language Skills, Telecommunications
Peer reviewed
Li, Shuai; Taguchi, Naoko; Xiao, Feng – Language Assessment Quarterly, 2019
Adopting Linacre's guidelines for evaluating rating scale effectiveness, we examined whether and how a six-point rating scale functioned differently across raters, speech acts, and second language (L2) proficiency levels. We developed a 12-item Computerized Oral Discourse Completion Task (CODCT) for assessing the production of requests, refusals,…
Descriptors: Speech Acts, Rating Scales, Guidelines, Evaluators
Peer reviewed
Tengberg, Michael – Language Assessment Quarterly, 2018
Reading comprehension is often treated as a multidimensional construct. In many reading tests, items are distributed over reading process categories to represent the subskills expected to constitute comprehension. This study explores (a) the extent to which specified subskills of reading comprehension tests are conceptually conceivable to…
Descriptors: Reading Tests, Reading Comprehension, Scores, Test Results
Peer reviewed
Morgan, Grant B.; Zhu, Min; Johnson, Robert L.; Hodge, Kari J. – Language Assessment Quarterly, 2014
Common estimators of interrater reliability include Pearson product-moment correlation coefficients, Spearman rank-order correlations, and the generalizability coefficient. The purpose of this study was to examine the accuracy of estimators of interrater reliability when varying the true reliability, number of scale categories, and number of…
Descriptors: Interrater Reliability, Correlation, Generalization, Scoring
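The correlation-based estimators named in this abstract are easy to illustrate. The sketch below is a minimal, hypothetical example (the two raters' scores are invented, not taken from the study) showing the Pearson product-moment and Spearman rank-order correlations for one pair of raters using SciPy; the generalizability coefficient would require a full variance-components analysis and is not shown.

```python
# Minimal, hypothetical illustration of two interrater reliability estimators
# named in the abstract above; the scores are invented, not the study's data.
from scipy.stats import pearsonr, spearmanr

# Two raters scoring the same ten performances on a 1-6 scale (made-up values).
rater_a = [4, 5, 3, 6, 2, 4, 5, 3, 6, 4]
rater_b = [4, 4, 3, 5, 2, 5, 5, 3, 6, 3]

pearson_r, _ = pearsonr(rater_a, rater_b)      # product-moment correlation
spearman_rho, _ = spearmanr(rater_a, rater_b)  # rank-order correlation

print(f"Pearson r = {pearson_r:.2f}")
print(f"Spearman rho = {spearman_rho:.2f}")
```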
Peer reviewed
Ahmadi, Alireza; Sadeghi, Elham – Language Assessment Quarterly, 2016
In the present study we investigated the effect of test format on oral performance in terms of test scores and discourse features (accuracy, fluency, and complexity). Moreover, we explored how the scores obtained on different test formats relate to such features. To this end, 23 Iranian EFL learners participated in three test formats of monologue,…
Descriptors: Oral Language, Comparative Analysis, Language Fluency, Accuracy
Peer reviewed
Han, Chao – Language Assessment Quarterly, 2016
As a property of test scores, reliability/dependability constitutes an important psychometric consideration, and it underpins the validity of measurement results. A review of interpreter certification performance tests (ICPTs) reveals that (a) although reliability/dependability checking has been recognized as an important concern, its theoretical…
Descriptors: Foreign Countries, Scores, English, Chinese
Peer reviewed
Casey, Laura B.; Miller, Neal D.; Stockton, Michelle B.; Justice, William V. – Language Assessment Quarterly, 2016
Many students struggle with writing; however, curriculum-based measures (CBM) of writing often use assessment criteria that focus primarily on mechanics. When academic development is assessed in this way, more complex aspects of a student's writing, such as the expression and development of ideas, may be neglected. The current study was a…
Descriptors: Elementary School Students, Writing (Composition), Writing Evaluation, Curriculum Based Assessment
Peer reviewed
Kim, Hyun Jung – Language Assessment Quarterly, 2015
Human raters are normally involved in L2 performance assessment; as a result, rater behavior has been widely investigated to reduce rater effects on test scores and to provide validity arguments. Yet raters' cognition and use of rubrics in their actual rating have rarely been explored qualitatively in L2 speaking assessments. In this study three…
Descriptors: Qualitative Research, Comparative Analysis, Evaluators, Second Language Learning
Peer reviewed
Gui, Min – Language Assessment Quarterly, 2012
This study explored whether American and Chinese English as a Foreign Language (EFL) teachers differ in their evaluations of student oral performance by examining the assessments of two groups of raters in an undergraduate speech competition. Each of the 21 contestants presented a 3-min prepared speech on a required topic, responded to a follow-up…
Descriptors: English Teachers, English (Second Language), College Faculty, Speech Communication
Peer reviewed
Crossley, Scott; Clevinger, Amanda; Kim, YouJin – Language Assessment Quarterly, 2014
There has been a growing interest in the use of integrated tasks in the field of second language testing to enhance the authenticity of language tests. However, the role of text integration in test takers' performance has not been widely investigated. The purpose of the current study is to examine the effects of text-based relational (i.e.,…
Descriptors: Language Proficiency, Connected Discourse, Language Tests, English (Second Language)
Peer reviewed
Hsieh, Mingchuan – Language Assessment Quarterly, 2013
The Yes/No Angoff and Bookmark methods are currently two of the most popular approaches to standard setting in educational assessment. However, there is no research into the comparability of these two methods in the context of language assessment. This study compared results from the Yes/No Angoff and Bookmark methods as applied to…
Descriptors: Standard Setting (Scoring), Comparative Analysis, Language Tests, Multiple Choice Tests
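As background to the comparison in this entry, the Yes/No Angoff computation itself is simple: each panelist marks, item by item, whether a minimally competent candidate would answer correctly, and the recommended cut score is the mean count of "yes" judgments across panelists. The sketch below uses invented judgments for a hypothetical 10-item test and is not drawn from the study.

```python
# Hedged sketch of the Yes/No Angoff cut-score logic with invented judgments;
# an illustration of the general method, not the study's procedure.

# Each list holds one panelist's item judgments for a hypothetical 10-item test:
# 1 = "yes, a minimally competent candidate would answer this item correctly".
panel_judgments = {
    "panelist_1": [1, 1, 0, 1, 1, 0, 1, 1, 0, 1],
    "panelist_2": [1, 0, 0, 1, 1, 1, 1, 0, 0, 1],
    "panelist_3": [1, 1, 1, 1, 0, 0, 1, 1, 0, 1],
}

# A panelist's recommended cut score is the number of "yes" judgments;
# the panel's cut score is the mean of those counts.
per_panelist_cuts = [sum(j) for j in panel_judgments.values()]
cut_score = sum(per_panelist_cuts) / len(per_panelist_cuts)

print(f"Recommended raw cut score: {cut_score:.1f} out of 10 items")
```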
Peer reviewed
Isaacs, Talia; Thomson, Ron I. – Language Assessment Quarterly, 2013
This mixed-methods study examines the effects of rating scale length and rater experience on listeners' judgments of second-language (L2) speech. Twenty experienced and 20 novice raters, who were randomly assigned to 5-point or 9-point rating scale conditions, judged speech samples of 38 newcomers to Canada on numerical rating scales for…
Descriptors: Foreign Countries, Adults, Second Language Learning, English (Second Language)
Peer reviewed
Cubilo, Justin; Winke, Paula – Language Assessment Quarterly, 2013
Researchers debate whether listening tasks should be supported by visuals. Most empirical research in this area has been conducted on the effects of visual support on listening comprehension tasks employing multiple-choice questions. The present study seeks to expand this research by investigating the effects of video listening passages (vs.…
Descriptors: Listening Comprehension Tests, Visual Stimuli, Writing Tests, Video Technology