Publication Date
In 2025 | 0
Since 2024 | 0
Since 2021 (last 5 years) | 5
Since 2016 (last 10 years) | 7
Since 2006 (last 20 years) | 18

Descriptor
Scoring | 22
Language Tests | 18
Second Language Learning | 18
Computer Assisted Testing | 11
Testing | 11
English (Second Language) | 10
Evaluators | 7
Oral Language | 7
Test Validity | 7
Comparative Analysis | 5
Correlation | 5

Source
Language Testing | 22

Author
Han, Chao | 2
Mollaun, Pamela | 2
Schmitt, Norbert | 2
Xi, Xiaoming | 2
Bachman, Lyle F. | 1
Bailey, Kathleen M. | 1
Bond, Trevor | 1
Bridgeman, Brent | 1
Brown, Alan V. | 1
Brown, Anne | 1
Chan, Kinnie Kin Yee | 1

Publication Type
Journal Articles | 22
Reports - Research | 14
Reports - Evaluative | 4
Reports - Descriptive | 2
Tests/Questionnaires | 2
Information Analyses | 1
Opinion Papers | 1

Education Level
Higher Education | 8
Postsecondary Education | 4
Secondary Education | 2
Elementary Education | 1
High Schools | 1
Junior High Schools | 1
Middle Schools | 1

Location
China | 3
Japan | 3
United Kingdom | 2
Brazil | 1
Kenya | 1
United States | 1
Laws, Policies, & Programs

Assessments and Surveys
Test of English as a Foreign Language | 4
ACTFL Oral Proficiency Interview | 1

Han, Chao – Language Testing, 2022
Over the past decade, testing and assessing spoken-language interpreting has garnered an increasing amount of attention from stakeholders in interpreter education, professional certification, and interpreting research. This is because in these fields assessment results provide a critical evidential basis for high-stakes decisions, such as the…
Descriptors: Translation, Language Tests, Testing, Evaluation Methods
Chan, Kinnie Kin Yee; Bond, Trevor; Yan, Zi – Language Testing, 2023
We investigated the relationship between the scores assigned by an Automated Essay Scoring (AES) system, the Intelligent Essay Assessor (IEA), and grades allocated by trained, professional human raters to English essay writing by instigating two procedures novel to written-language assessment: the logistic transformation of AES raw scores into…
Descriptors: Computer Assisted Testing, Essays, Scoring, Scores
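
A logistic transformation of the kind this abstract mentions maps unbounded raw AES scores onto a bounded 0-1 scale so that machine output can be compared with human grades on common ground. The sketch below is a minimal illustration in Python; the midpoint and slope parameters are placeholders, not values estimated in the study.

    import math

    def logistic_transform(raw, midpoint=50.0, slope=0.1):
        """Map a raw AES score onto (0, 1) with a logistic curve.
        midpoint and slope are illustrative placeholders, not the
        parameters estimated by Chan, Bond, and Yan (2023)."""
        return 1.0 / (1.0 + math.exp(-slope * (raw - midpoint)))

    # Scores below, at, and above the midpoint
    for raw in (30.0, 50.0, 70.0):
        print(f"raw={raw:5.1f} -> transformed={logistic_transform(raw):.3f}")
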
Latifi, Syed; Gierl, Mark – Language Testing, 2021
An automated essay scoring (AES) program is a software system that uses techniques from corpus and computational linguistics and machine learning to grade essays. In this study, we aimed to describe and evaluate particular language features of Coh-Metrix for a novel AES program that would score junior and senior high school students' essays from…
Descriptors: Writing Evaluation, Computer Assisted Testing, Scoring, Essays
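
Coh-Metrix reports a large battery of textual indices (word length, sentence length, cohesion, lexical diversity, and so on) that an AES model can weight against human grades. A rough sketch of that idea, using two invented surface features and fabricated toy data rather than the study's actual index set:

    import numpy as np

    def surface_features(essay):
        """Two Coh-Metrix-style surface indices (illustrative only):
        mean sentence length in words, and type-token ratio."""
        sentences = [s for s in essay.replace("!", ".").replace("?", ".").split(".")
                     if s.strip()]
        words = essay.lower().split()
        return [len(words) / max(len(sentences), 1),
                len(set(words)) / max(len(words), 1)]

    # Fabricated essays and human grades, purely for the demo
    essays = ["The cat sat. It was warm. The cat slept.",
              "Although the first trial failed, the researchers revised "
              "their design and eventually succeeded."]
    grades = np.array([2.0, 5.0])

    X = np.column_stack([np.ones(len(essays)),
                         [surface_features(e) for e in essays]])
    weights, *_ = np.linalg.lstsq(X, grades, rcond=None)  # least-squares fit
    print("intercept and feature weights:", weights)
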
Cox, Troy L.; Brown, Alan V.; Thompson, Gregory L. – Language Testing, 2023
The rating of proficiency tests that use the Interagency Language Roundtable (ILR) and American Council on the Teaching of Foreign Languages (ACTFL) guidelines rests on the claim that each major level reflects hierarchical linguistic functions requiring mastery of multidimensional traits, such that each level subsumes the levels beneath it. These…
Descriptors: Oral Language, Language Fluency, Scoring, Cues
Han, Chao; Xiao, Xiaoyan – Language Testing, 2022
The quality of sign language interpreting (SLI) is a construct of keen interest to practitioners, educators, and researchers, calling for reliable and valid assessment. The extant literature offers a diverse array of methods for measuring SLI quality, ranging from traditional error analysis to recent rubric scoring. In this study, we want to…
Descriptors: Comparative Analysis, Sign Language, Deaf Interpreting, Evaluators
Schmidgall, Jonathan E.; Getman, Edward P.; Zu, Jiyun – Language Testing, 2018
In this study, we define the term "screener test," elaborate key considerations in test design, and describe how to incorporate the concepts of practicality and argument-based validation to drive an evaluation of screener tests for language assessment. A screener test is defined as a brief assessment designed to identify an examinee as a…
Descriptors: Test Validity, Test Use, Test Construction, Language Tests
Egbert, Jesse – Language Testing, 2017
The use of corpora and corpus linguistic methods in language testing research is increasing at an accelerated pace. The growing body of language testing research that uses corpus linguistic data is a testament to their utility in test development and validation. Although there are many reasons to be optimistic about the future of using corpus data…
Descriptors: Language Tests, Second Language Learning, Computational Linguistics, Best Practices
van Compernolle, Rémi A.; Zhang, Haomin – Language Testing, 2014
The focus of this paper is on the design, administration, and scoring of a dynamically administered elicited imitation test of L2 English morphology. Drawing on Vygotskian sociocultural psychology, particularly the concepts of zone of proximal development and dynamic assessment, we argue that support provided during the elicited imitation test…
Descriptors: Alternative Assessment, Imitation, English (Second Language), Language Tests
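
In dynamic assessment, the support given during the test is itself part of the measurement: an item repeated correctly with no mediation earns more than one that needed several graduated prompts. A minimal sketch of that scoring logic; the weights are hypothetical, not the rubric used by van Compernolle and Zhang.

    # Graduated-prompt scoring for a dynamically administered elicited
    # imitation item: the less mediation needed, the higher the score.
    # Weights are hypothetical, not the study's actual rubric.
    PROMPT_WEIGHTS = {0: 4, 1: 3, 2: 2, 3: 1}

    def item_score(prompts_used, solved):
        """Score one item by how much mediation the learner required."""
        if not solved:
            return 0
        return PROMPT_WEIGHTS.get(prompts_used, 1)  # floor of 1 with heavy support

    # Four items: (prompts needed, repeated correctly?)
    performance = [(0, True), (2, True), (3, True), (1, False)]
    total = sum(item_score(p, s) for p, s in performance)
    print(f"mediated score: {total} / {len(performance) * 4}")
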
Ling, Guangming; Mollaun, Pamela; Xi, Xiaoming – Language Testing, 2014
The scoring of constructed responses may introduce construct-irrelevant factors to a test score and affect its validity and fairness. Fatigue is one of the factors that could negatively affect human performance in general, yet little is known about its effects on a human rater's scoring quality on constructed responses. In this study, we compared…
Descriptors: Evaluators, Fatigue (Biology), Scoring, Performance
Leaper, David A.; Riazi, Mehdi – Language Testing, 2014
This paper reports an investigation into how the prompt may influence the discourse of group oral tests. The group oral test, in which three or four participants are rated on their ability to discuss a prompt, is a format for assessing the spoken ability of language learners. In this study, 141 Japanese university students were videoed in 41 group…
Descriptors: Oral Language, Language Tests, Second Language Learning, Prompting
Wang, Huan; Choi, Ikkyu; Schmidgall, Jonathan; Bachman, Lyle F. – Language Testing, 2012
This review departs from current practice in reviewing tests in that it employs an "argument-based approach" to test validation to guide the review (e.g. Bachman, 2005; Kane, 2006; Mislevy, Steinberg, & Almond, 2002). Specifically, it follows an approach to test development and use that Bachman and Palmer (2010) call the process of "assessment…
Descriptors: Evidence, Stakeholders, Test Construction, Test Use
Xi, Xiaoming; Higgins, Derrick; Zechner, Klaus; Williamson, David – Language Testing, 2012
This paper compares two alternative scoring methods, multiple regression and classification trees, for an automated speech scoring system used in a practice environment. The two methods were evaluated on two criteria: construct representation and empirical performance in predicting human scores. The empirical performance of the two scoring models…
Descriptors: Scoring, Classification, Weighted Scores, Comparative Analysis
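
The comparison the paper describes can be reproduced in miniature: fit a linear model and a regression tree to the same machine-extracted speech features and see which tracks the human scores more closely. Everything below (the two features, the data, the scikit-learn models) is an illustrative stand-in, not the SpeechRater configuration.

    import numpy as np
    from sklearn.linear_model import LinearRegression
    from sklearn.tree import DecisionTreeRegressor

    rng = np.random.default_rng(0)

    # Fabricated delivery features (say, speaking rate and pause ratio)
    # and human holistic scores: stand-ins for the study's feature set.
    X = rng.normal(size=(200, 2))
    human = 3.0 + 0.8 * X[:, 0] - 0.5 * X[:, 1] + rng.normal(scale=0.3, size=200)

    X_tr, X_te, y_tr, y_te = X[:150], X[150:], human[:150], human[150:]
    for model in (LinearRegression(),
                  DecisionTreeRegressor(max_depth=3, random_state=0)):
        pred = model.fit(X_tr, y_tr).predict(X_te)
        r = np.corrcoef(pred, y_te)[0, 1]  # agreement with human scores
        print(f"{type(model).__name__}: r = {r:.2f}")
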
Pellicer-Sanchez, Ana; Schmitt, Norbert – Language Testing, 2012
Despite a number of research studies investigating the Yes-No vocabulary test format, one main question remains unanswered: What is the best scoring procedure to adjust for testee overestimation of vocabulary knowledge? Different scoring methodologies have been proposed based on the inclusion and selection of nonwords in the test. However, there…
Descriptors: Language Tests, Scoring, Reaction Time, Vocabulary Development
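
The scoring problem in a Yes-No format is that a test-taker who says "yes" liberally inflates the raw score; responses to nonwords estimate that bias. One standard adjustment discussed in this literature is the correction-for-guessing formula cfg = (h - f) / (1 - f), where h is the hit rate on real words and f the false-alarm rate on nonwords. It is one of several candidate procedures the paper weighs, shown here only as a sketch, not as the study's final recommendation.

    def corrected_yes_no_score(hits, real_words, false_alarms, nonwords):
        """Correction-for-guessing estimate of vocabulary knowledge:
        cfg = (h - f) / (1 - f). One standard adjustment from the
        Yes-No test literature, shown here for illustration."""
        h = hits / real_words
        f = false_alarms / nonwords
        if f >= 1.0:  # "yes" to every nonword: nothing recoverable
            return 0.0
        return (h - f) / (1.0 - f)

    # Claims 80 of 100 real words, but also 10 of 50 nonwords
    print(f"adjusted proportion known: {corrected_yes_no_score(80, 100, 10, 50):.3f}")
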
Roever, Carsten – Language Testing, 2011
Testing of second language pragmatic competence is an underexplored but growing area of second language assessment. Tests have focused on assessing learners' sociopragmatic and pragmalinguistic abilities, but the speech act framework informing most current productive testing instruments in interlanguage pragmatics has been criticized for…
Descriptors: Speech Acts, Second Language Learning, Interlanguage, Testing
Bridgeman, Brent; Powers, Donald; Stone, Elizabeth; Mollaun, Pamela – Language Testing, 2012
Scores assigned by trained raters and by an automated scoring system (SpeechRater[TM]) on the speaking section of the TOEFL iBT[TM] were validated against a communicative competence criterion. Specifically, a sample of 555 undergraduate students listened to speech samples from 184 examinees who took the Test of English as a Foreign Language…
Descriptors: Undergraduate Students, Speech Communication, Rating Scales, Scoring