Publication Date
  In 2025: 0
  Since 2024: 0
  Since 2021 (last 5 years): 7
  Since 2016 (last 10 years): 11
  Since 2006 (last 20 years): 19

Descriptor
  Evaluators: 20
  Scoring: 20
  Second Language Learning: 15
  Language Tests: 13
  English (Second Language): 8
  Oral Language: 8
  Writing Evaluation: 7
  Computer Assisted Testing: 6
  Correlation: 6
  Scores: 6
  Essays: 5

Source
  Language Testing: 20

Author
  Mollaun, Pamela: 2
  Xi, Xiaoming: 2
  Barkaoui, Khaled: 1
  Bond, Trevor: 1
  Bridgeman, Brent: 1
  Brown, Alan V.: 1
  Brown, Anne: 1
  Brunfaut, Tineke: 1
  Carey, Michael D.: 1
  Chan, Kinnie Kin Yee: 1
  Chan, Sathena: 1

Publication Type
  Journal Articles: 20
  Reports - Research: 19
  Information Analyses: 1
  Reports - Descriptive: 1
  Tests/Questionnaires: 1

Location
  China: 3
  Colombia: 1
  India: 1
  South Korea: 1
  Turkey: 1
  United States: 1

Assessments and Surveys
  Test of English as a Foreign…: 4
  ACTFL Oral Proficiency…: 1
Lestari, Santi B.; Brunfaut, Tineke – Language Testing, 2023
Assessing integrated reading-into-writing task performances is known to be challenging, and analytic rating scales have been found to better facilitate the scoring of these performances than other common types of rating scales. However, little is known about how specific operationalizations of the reading-into-writing construct in analytic rating…
Descriptors: Reading Writing Relationship, Writing Tests, Rating Scales, Writing Processes
Shin, Jinnie; Gierl, Mark J. – Language Testing, 2021
Automated essay scoring (AES) has emerged as a secondary or as a sole marker for many high-stakes educational assessments, in native and non-native testing, owing to remarkable advances in feature engineering using natural language processing, machine learning, and deep-neural algorithms. The purpose of this study is to compare the effectiveness…
Descriptors: Scoring, Essays, Writing Evaluation, Computer Software
Chan, Kinnie Kin Yee; Bond, Trevor; Yan, Zi – Language Testing, 2023
We investigated the relationship between the scores assigned by an Automated Essay Scoring (AES) system, the Intelligent Essay Assessor (IEA), and grades allocated by trained, professional human raters to English essay writing by instigating two procedures novel to written-language assessment: the logistic transformation of AES raw scores into…
Descriptors: Computer Assisted Testing, Essays, Scoring, Scores
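The Chan, Bond, and Yan abstract mentions a logistic transformation of AES raw scores. The paper's exact procedure is not given in this excerpt, but a minimal sketch of a standard logit mapping, assuming raw scores are first rescaled to a proportion of the maximum (the score values below are invented for illustration), looks like this:

```python
import math

def logit(p: float, eps: float = 1e-6) -> float:
    """Map a proportion-scaled raw score in (0, 1) to the logit scale."""
    p = min(max(p, eps), 1 - eps)  # clamp away from 0/1 so the logit stays finite
    return math.log(p / (1 - p))

# Example: an essay scored 18 out of a hypothetical 24 raw points
raw, max_raw = 18, 24
print(round(logit(raw / max_raw), 3))  # → 1.099
```

The transformed scale is unbounded and roughly interval-like near the middle of the score range, which is one common motivation for logit-transforming bounded raw scores before comparing them with human grades.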
Chan, Sathena; May, Lyn – Language Testing, 2023
Despite the increased use of integrated tasks in high-stakes academic writing assessment, research on rating criteria which reflect the unique construct of integrated summary writing skills is comparatively rare. Using a mixed-method approach of expert judgement, text analysis, and statistical analysis, this study examines writing features that…
Descriptors: Scoring, Writing Evaluation, Reading Tests, Listening Skills
Cox, Troy L.; Brown, Alan V.; Thompson, Gregory L. – Language Testing, 2023
The rating of proficiency tests that use the Interagency Language Roundtable (ILR) and American Council on the Teaching of Foreign Languages (ACTFL) guidelines claims that each major level is based on hierarchical linguistic functions that require mastery of multidimensional traits in such a way that each level subsumes the levels beneath it. These…
Descriptors: Oral Language, Language Fluency, Scoring, Cues
Li, Shuai; Wen, Ting; Li, Xian; Feng, Yali; Lin, Chuan – Language Testing, 2023
This study compared holistic and analytic marking methods for their effects on parameter estimation (of examinees, raters, and items) and rater cognition in assessing speech act production in L2 Chinese. Seventy American learners of Chinese completed an oral Discourse Completion Test assessing requests and refusals. Four first-language (L1)…
Descriptors: Speech Acts, Second Language Learning, Second Language Instruction, Chinese
Han, Chao; Xiao, Xiaoyan – Language Testing, 2022
The quality of sign language interpreting (SLI) is a gripping construct among practitioners, educators and researchers, calling for reliable and valid assessment. There has been a diverse array of methods in the extant literature to measure SLI quality, ranging from traditional error analysis to recent rubric scoring. In this study, we want to…
Descriptors: Comparative Analysis, Sign Language, Deaf Interpreting, Evaluators
Sahan, Özgür; Razi, Salim – Language Testing, 2020
This study examines the decision-making behaviors of raters with varying levels of experience while assessing EFL essays of distinct qualities. The data were collected from 28 raters with varying levels of rating experience and working at the English language departments of different universities in Turkey. Using a 10-point analytic rubric, each…
Descriptors: Decision Making, Essays, Writing Evaluation, Evaluators
Lin, Chih-Kai – Language Testing, 2017
Sparse-rated data are common in operational performance-based language tests, as an inevitable result of assigning examinee responses to a fraction of available raters. The current study investigates the precision of two generalizability-theory methods (i.e., the rating method and the subdividing method) specifically designed to accommodate the…
Descriptors: Data Analysis, Language Tests, Generalizability Theory, Accuracy
Trace, Jonathan; Janssen, Gerriet; Meier, Valerie – Language Testing, 2017
Previous research in second language writing has shown that when scoring performance assessments even trained raters can exhibit significant differences in severity. When raters disagree, using discussion to try to reach a consensus is one popular form of score resolution, particularly in contexts with limited resources, as it does not require…
Descriptors: Performance Based Assessment, Second Language Learning, Scoring, Evaluators
In'nami, Yo; Koizumi, Rie – Language Testing, 2016
We addressed Deville and Chalhoub-Deville's (2006), Schoonen's (2012), and Xi and Mollaun's (2006) call for research into the contextual features that are considered related to person-by-task interactions in the framework of generalizability theory in two ways. First, we quantitatively synthesized the generalizability studies to determine the…
Descriptors: Evaluators, Second Language Learning, Writing Skills, Oral Language
Ling, Guangming; Mollaun, Pamela; Xi, Xiaoming – Language Testing, 2014
The scoring of constructed responses may introduce construct-irrelevant factors to a test score and affect its validity and fairness. Fatigue is one of the factors that could negatively affect human performance in general, yet little is known about its effects on a human rater's scoring quality on constructed responses. In this study, we compared…
Descriptors: Evaluators, Fatigue (Biology), Scoring, Performance
Kuiken, Folkert; Vedder, Ineke – Language Testing, 2014
This study investigates the relationship in L2 writing between raters' judgments of communicative adequacy and linguistic complexity by means of six-point Likert scales, and general measures of linguistic performance. The participants were 39 learners of Italian and 32 of Dutch, who wrote two short argumentative essays. The same writing tasks…
Descriptors: Writing Evaluation, Second Language Learning, Evaluators, Native Language
Carey, Michael D.; Mannell, Robert H.; Dunn, Peter K. – Language Testing, 2011
This study investigated factors that could affect inter-examiner reliability in the pronunciation assessment component of speaking tests. We hypothesized that the rating of pronunciation is susceptible to variation in assessment due to the amount of exposure examiners have to nonnative English accents. An inter-rater variability analysis was…
Descriptors: Oral Language, Pronunciation, Phonology, Interlanguage
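The Carey, Mannell, and Dunn entry concerns inter-rater variability in pronunciation assessment. Their analysis is not reproduced in this excerpt, but one standard chance-corrected index of agreement between two raters, Cohen's kappa, can be sketched as follows (the ratings and the 3-point scale are invented for illustration):

```python
from collections import Counter

def cohens_kappa(r1, r2):
    """Chance-corrected agreement between two raters' categorical ratings."""
    assert len(r1) == len(r2) and r1, "ratings must be paired and non-empty"
    n = len(r1)
    # Observed proportion of exact agreements
    observed = sum(a == b for a, b in zip(r1, r2)) / n
    # Agreement expected by chance from each rater's marginal category use
    c1, c2 = Counter(r1), Counter(r2)
    expected = sum(c1[k] * c2[k] for k in set(c1) | set(c2)) / (n * n)
    return (observed - expected) / (1 - expected)

# Two raters scoring eight pronunciation samples on a 3-point scale
a = [1, 2, 2, 3, 1, 2, 3, 3]
b = [1, 2, 3, 3, 1, 2, 3, 2]
print(round(cohens_kappa(a, b), 3))  # → 0.619
```

Kappa is 1.0 for perfect agreement and 0 when agreement is no better than chance, which makes it a convenient single-number summary when comparing pairs of examiners with different exposure to nonnative accents.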
Bridgeman, Brent; Powers, Donald; Stone, Elizabeth; Mollaun, Pamela – Language Testing, 2012
Scores assigned by trained raters and by an automated scoring system (SpeechRater™) on the speaking section of the TOEFL iBT™ were validated against a communicative competence criterion. Specifically, a sample of 555 undergraduate students listened to speech samples from 184 examinees who took the Test of English as a Foreign Language…
Descriptors: Undergraduate Students, Speech Communication, Rating Scales, Scoring