Showing all 15 results
Peer reviewed
Liao, Ray J. T.; Ohta, Renka; Lee, Kwangmin – Language Testing, 2024
As integrated writing tasks in large-scale and classroom-based writing assessments have risen in popularity, research studies have increasingly concentrated on providing validity evidence. Given that most of these studies focus on adult second language learners rather than younger ones, this study examined the relationship between written…
Descriptors: Writing (Composition), Writing Evaluation, English Language Learners, Discourse Analysis
Peer reviewed
Lin, You-Min; Chen, Michelle Y. – Language Testing, 2020
This study examined the writing score and writing feature changes of 562 repeat test takers who took the Canadian English Language Proficiency Index Program--General (CELPIP--General) test at least three times, with a short (30-40 day) interval between the first and second attempts and a longer (90-180 day) interval between the first and third…
Descriptors: Language Tests, Standardized Tests, Language Proficiency, Writing Tests
Peer reviewed
Yan, Xun; Staples, Shelley – Language Testing, 2020
The argument-based approach to validity (Kane, 2013) focuses on two steps: (1) making claims about the proposed interpretation and use of test scores as a coherent, interpretive argument; and (2) evaluating those claims based on theoretical and empirical evidence related to test performances and scores. This paper discusses the role of…
Descriptors: Writing Tests, Language Tests, Language Proficiency, Test Validity
Peer reviewed
Hille, Kathryn; Cho, Yeonsuk – Language Testing, 2020
Accurate placement within levels of an ESL program is crucial for optimal teaching and learning. Commercially available tests are commonly used for placement, but their effectiveness has been found to vary. This study uses data from the Ohio Program of Intensive English (OPIE) at Ohio University to examine the value of two commercially available…
Descriptors: Student Placement, Testing, English (Second Language), Language Tests
Peer reviewed
Barkaoui, Khaled – Language Testing, 2019
This study aimed to examine the sources of variability in the second-language (L2) writing scores of test-takers who repeated an English language proficiency test, the Pearson Test of English (PTE) Academic, multiple times. Examining repeaters' test scores can provide important information concerning factors contributing to "changes" in…
Descriptors: Second Language Learning, Writing Tests, Scores, English (Second Language)
Peer reviewed
Llosa, Lorena; Malone, Margaret E. – Language Testing, 2019
Investigating the comparability of students' performance on TOEFL writing tasks and actual academic writing tasks is essential to provide backing for the extrapolation inference in the TOEFL validity argument (Chapelle, Enright, & Jamieson, 2008). This study compared 103 international non-native-English-speaking undergraduate students'…
Descriptors: Computer Assisted Testing, Language Tests, English (Second Language), Second Language Learning
Peer reviewed
Powers, Donald E.; Powers, Andrew – Language Testing, 2015
Typically, English language proficiency tests yield multiple scores--usually one for each of the four traditional language domains. To maximize the usefulness of test scores, they may need to be accompanied by information concerning how they complement one another. Using self-assessments by some 2300 TOEIC test takers, this study aimed to…
Descriptors: Language Tests, English (Second Language), Language Proficiency, Prediction
Peer reviewed
Bouwer, Renske; Béguin, Anton; Sanders, Ted; van den Bergh, Huub – Language Testing, 2015
In the present study, aspects of the measurement of writing are disentangled in order to investigate the validity of inferences made on the basis of writing performance and to describe implications for the assessment of writing. To include genre as a facet in the measurement, we obtained writing scores of 12 texts in four different genres for each…
Descriptors: Writing Tests, Generalization, Scores, Writing Instruction
Peer reviewed
Shin, Sun-Young; Ewert, Doreen – Language Testing, 2015
Reading-to-write (RTW) tasks are becoming increasingly popular and have already been used in several high-stakes English proficiency exams, either replacing or complementing a prompt-based essay test. However, it is still not clear whether successful or unsuccessful performance on an integrated reading-writing task is owing to the…
Descriptors: English (Second Language), Language Tests, Language Proficiency, Test Items
Peer reviewed
Barkaoui, Khaled – Language Testing, 2014
A major concern with computer-based (CB) tests of second-language (L2) writing is that performance on such tests may be influenced by test-taker keyboarding skills. Poor keyboarding skills may force test-takers to focus their attention and cognitive resources on motor activities (i.e., keyboarding) and, consequently, other processes and aspects of…
Descriptors: Language Tests, Computer Assisted Testing, English (Second Language), Second Language Learning
Peer reviewed
He, Ling; Shi, Ling – Language Testing, 2012
This study investigates the effects of topical knowledge on ESL (English as a Second Language) writing performance in the Language Proficiency Index (LPI), a standardized English proficiency test used by many post-secondary institutions in western Canada. The participants were 50 students with different levels of English proficiency…
Descriptors: Writing Tests, Achievement Tests, Essays, Foreign Countries
Peer reviewed
Lim, Gad S. – Language Testing, 2011
Raters are central to writing performance assessment, and rater development--training, experience, and expertise--involves a temporal dimension. However, few studies have examined new and experienced raters' rating performance longitudinally over multiple time points. This study uses operational data from the writing section of the MELAB (n =…
Descriptors: Expertise, Writing Evaluation, Performance Based Assessment, Writing Tests
Peer reviewed
Sato, Takanori – Language Testing, 2012
The content that test-takers attempt to convey is not always included in the construct definition of "general" English oral proficiency tests, although some English-for-academic-purposes (EAP) speaking tests and most writing tests tend to place great emphasis on the evaluation of the content or ideas in the performance. This study…
Descriptors: Speech Communication, Writing Tests, Language Tests, Achievement Tests
Peer reviewed
East, Martin – Language Testing, 2007
Whether test takers should be allowed access to dictionaries when taking L2 tests has been the subject of debate for a good number of years. Opinions differ according to how the test construct is understood and whether the underlying value system favours process-orientated assessment for learning, with its concern to elicit the test takers' best…
Descriptors: Writing Tests, Reading Tests, Program Effectiveness, Dictionaries
Peer reviewed
Gomez, Pablo Garcia; Noah, Aris; Schedl, Mary; Wright, Christine; Yolkut, Aline – Language Testing, 2007
Providing information to test takers and test score users about the abilities of test takers at different score levels has been a persistent problem in educational and psychological measurement (Carroll, 1993). Since the 1990s, Educational Testing Service has been investigating solutions to this problem through the development of proficiency…
Descriptors: Reading Comprehension, Language Tests, Writing Tests, Reading Tests