Showing 1 to 15 of 16 results
Peer reviewed
Xiaoting Shi; Xiaomei Ma; Wenbo Du; Xuliang Gao – Language Testing, 2024
Cognitive diagnostic assessment (CDA) intends to identify learners' strengths and weaknesses in latent cognitive attributes to provide personalized remedial instructions. Previous CDA studies on English as a Foreign Language (EFL)/English as a Second Language (ESL) writing have adopted dichotomous cognitive diagnostic models (CDMs) to analyze data…
Descriptors: Writing Evaluation, Writing Tests, Diagnostic Tests, English (Second Language)
Peer reviewed
Rebecca Sickinger; Tineke Brunfaut; John Pill – Language Testing, 2025
Comparative Judgement (CJ) is an evaluation method, typically conducted online, whereby a rank order is constructed, and scores calculated, from judges' pairwise comparisons of performances. CJ has been researched in various educational contexts, though only rarely in English as a Foreign Language (EFL) writing settings, and is generally agreed to…
Descriptors: Writing Evaluation, English (Second Language), Second Language Learning, Second Language Instruction
Peer reviewed
Shi, Bibing; Huang, Liyan; Lu, Xiaofei – Language Testing, 2020
The continuation task, a new form of reading-writing integrated task in which test-takers read an incomplete story and then write the continuation and ending of the story, has been increasingly used in writing assessment, especially in China. However, language-test developers' understanding of the effects of important task-related factors on…
Descriptors: Cues, Writing Tests, Writing Evaluation, English (Second Language)
Peer reviewed
Patekar, Jakob – Language Testing, 2021
Writing in a foreign language is a particularly difficult skill to develop, especially for young learners, who are learning to write in their L1 in parallel and do not have strong oral foundations in their L2. The issue becomes even more complex when the ways to assess young learners' writing are considered, given that…
Descriptors: Language Tests, Test Construction, Foreign Countries, Oral Language
Peer reviewed
Sahan, Özgür; Razi, Salim – Language Testing, 2020
This study examines the decision-making behaviors of raters with varying levels of experience while assessing EFL essays of distinct qualities. The data were collected from 28 raters with varying levels of rating experience, working in the English language departments of different universities in Turkey. Using a 10-point analytic rubric, each…
Descriptors: Decision Making, Essays, Writing Evaluation, Evaluators
Peer reviewed
Barkaoui, Khaled – Language Testing, 2011
Think-aloud protocols (TAPs) are frequently used in research on essay rating processes. However, there are very few empirical studies of the completeness of TAP data and the effects of this technique on rater performance (i.e., rating processes and outcomes). This study aims to start to address this research gap. As part of a larger study on rater…
Descriptors: Protocol Analysis, Rating Scales, Essays, English (Second Language)
Peer reviewed
Kim, Youn-Hee – Language Testing, 2011
Despite the increasing interest in and need for test information for use in instructional practice and student learning, there have been few attempts to systematically link a diagnostic approach to English for academic purposes (EAP) writing instruction and assessment. In response to this need for research, this study examined the extent to which…
Descriptors: Performance Based Assessment, Performance Tests, Diagnostic Tests, Discriminant Analysis
Peer reviewed
Matsuno, Sumie – Language Testing, 2009
Multifaceted Rasch measurement was used in the present study with 91 student and 4 teacher raters to investigate how self- and peer-assessments work in comparison with teacher assessments in actual university writing classes. The results indicated that many self-raters assessed their own writing lower than predicted. This was particularly true for…
Descriptors: Writing (Composition), Self Evaluation (Individuals), Student Evaluation, English (Second Language)
Peer reviewed
Plakans, Lia – Language Testing, 2009
As integrated tasks become more common in assessing writing for academic purposes, it is necessary to investigate how test takers approach these tasks. The present study explores the processes of test takers undertaking reading-to-write tasks developed for a university English placement exam. Think-aloud protocols and interviews of…
Descriptors: Writing Evaluation, Protocol Analysis, Writing Tests, Writing Processes
Peer reviewed
Wigglesworth, Gillian; Storch, Neomy – Language Testing, 2009
The assessment of oral language is now quite commonly done in pairs or groups, and there is a growing body of research which investigates the related issues (e.g. May, 2007). Writing generally tends to be thought of as an individual activity, although a small number of studies have documented the advantages of collaboration in writing in the…
Descriptors: Formative Evaluation, Second Language Learning, Oral Language, Collaborative Writing
Peer reviewed
Kondo-Brown, Kimi – Language Testing, 2002
Using FACETS, investigates how judgments of trained teacher raters are biased toward certain types of candidates and certain criteria in assessing Japanese second language writing. Explores the potential for using a modified version of a rating scale for norm-referenced decisions about Japanese second language writing ability. (Author/VWL)
Descriptors: Japanese, Language Teachers, Language Tests, Rating Scales
Peer reviewed
Cumming, Alister – Language Testing, 2001
Interviewed teachers from around the world to examine a specific purpose (SP) versus general purpose (GP) distinction in their orientations to the work they do. The difference in orientation was signaled in the criteria the teachers use to assess students' writing. (Author)
Descriptors: Evaluation Criteria, Interviews, Language Teachers, Language Tests
Peer reviewed
Arkoudis, Sophie; O'Loughlin, Kieran – Language Testing, 2004
This article reports on a collaborative study involving ESL teachers in an Australian English Language Centre as they work through some of their concerns about reliability and validity in their assessment practices. The focus of this article is on how teachers work with the Curriculum Standards Framework (CSF) as an assessment tool. The discussion…
Descriptors: Validity, English (Second Language), Second Language Learning, Immigrants
Peer reviewed
Luoma, Sari; Tarnanen, Mirja – Language Testing, 2003
Reports on the development of a self-rating instrument for writing. The instrument engages learners in responding to a writing task and assessing their own proficiency against a set of benchmarks. Provides a description of the self-rating procedure, an account of instrument development, a report on a usability study with six learners of Finnish as…
Descriptors: Benchmarking, Finnish, Language Proficiency, Language Tests
Peer reviewed
Shi, Ling – Language Testing, 2001
Examined differences between native and nonnative English-as-a-foreign-language teachers' rating of the English writing of Chinese university students. Explored whether two groups of teachers--expatriates who typically speak English as their first language and ethnic Chinese with proficiency in English--gave similar scores to the same writing task…
Descriptors: Chinese, English (Second Language), Evaluation Criteria, Foreign Countries