Showing 1 to 15 of 18 results
Peer reviewed
Direct link
Anja Riemenschneider; Zarah Weiss; Pauline Schröter; Detmar Meurers – TESOL Quarterly: A Journal for Teachers of English to Speakers of Other Languages and of Standard English as a Second Dialect, 2024
The linguistic characteristics of text productions depend on various factors, including individual language proficiency as well as the tasks used to elicit the production. To date, little attention has been paid to whether some writing tasks are more suitable than others to represent and differentiate students' proficiency levels. This issue is…
Descriptors: English (Second Language), Writing (Composition), Difficulty Level, Language Proficiency
Peer reviewed
Direct link
Tsang, Chi Lai; Isaacs, Talia – Language Testing, 2022
This sequential mixed-methods study investigates washback on learning in a high-stakes school exit examination by examining learner perceptions and reported behaviours in relation to learners' beliefs and language learning experience, the role of other stakeholders in the washback mechanism, and socio-educational forces. The focus is the graded…
Descriptors: Foreign Countries, Secondary School Students, Student Attitudes, High Stakes Tests
Peer reviewed
Direct link
Yumei Zou; Sathiamoorthy Kannan; Gurnam Kaur Sidhu – SAGE Open, 2024
Task design has been viewed as essential in the context of language assessment. This study investigated whether increasing task complexity affects learners' writing performance. It employed three writing tasks with different levels of complexity based on Robinson's Componential Framework. A cohort of 278 participants was selected using a simple…
Descriptors: Difficulty Level, College Students, Foreign Countries, Writing Achievement
Peer reviewed
Download full text (PDF on ERIC)
Omarov, Nazarbek Bakytbekovich; Mohammed, Aisha; Alghurabi, Ammar Muhi Khleel; Alallo, Hajir Mahmood Ibrahim; Ali, Yusra Mohammed; Hassan, Aalaa Yaseen; Demeuova, Lyazat; Viktorovna, Shvedova Irina; Nazym, Bekenova; Al Khateeb, Nashaat Sultan Afif – International Journal of Language Testing, 2023
The multiple-choice (MC) item format is commonly used in educational assessments due to its economy and effectiveness across a variety of content domains. However, numerous studies have examined the quality of MC items in high-stakes and higher-education assessments and found many flawed items, especially in terms of distractors. These faulty…
Descriptors: Test Items, Multiple Choice Tests, Item Response Theory, English (Second Language)
Peer reviewed
Download full text (PDF on ERIC)
Srisunakrua, Thanaporn; Chumworatayee, Tipamas – Arab World English Journal, 2019
Readability has long been regarded as a significant aspect of English language teaching, as it provides an overall picture of a text's difficulty level, especially in the context of teaching and testing. Readability is a practical consideration when selecting materials to match a text with target readers' proficiency. However, few…
Descriptors: Readability Formulas, English (Second Language), Textbook Content, Reading Comprehension
Peer reviewed
Direct link
Choi, Inn-Chull; Moon, Youngsun – Language Assessment Quarterly, 2020
This study examines the relationships among major factors that may affect the difficulty level of language tests, in an attempt to enhance the robustness of item difficulty estimation, which is crucial to ensuring the equivalence of high-stakes tests. The observed difficulties of the reading and listening sections of two EFL…
Descriptors: English (Second Language), Second Language Learning, Language Tests, Difficulty Level
Peer reviewed
Download full text (PDF on ERIC)
Liao, Linyu – English Language Teaching, 2020
As a high-stakes standardized test, IELTS is expected to have comparable forms of test papers so that test takers from different test administrations on different dates receive comparable test scores. Therefore, this study examined the text difficulty and task characteristics of four parallel academic IELTS reading tests to reveal to what extent…
Descriptors: Second Language Learning, English (Second Language), Language Tests, High Stakes Tests
Peer reviewed
Download full text (PDF on ERIC)
Yu, Xiaoli – International Journal of Language Testing, 2021
This study examined the development of text complexity in the reading comprehension passages of China's National Matriculation English Test (NMET) over the past 25 years. The text complexity of 206 reading passages was measured longitudinally at the lexical, syntactic, and discourse levels and compared across years. The natural language…
Descriptors: Reading Comprehension, Reading Tests, Difficulty Level, Natural Language Processing
Peer reviewed
Direct link
Saif, Shahrzad; Ma, Jia; May, Lyn; Cheng, Liying – Assessment in Education: Principles, Policy & Practice, 2021
Effective test preparation for high-stakes English language tests is crucial for candidates whose futures depend on attaining a particular score. An increasing number of studies have investigated the role of test preparation; however, these studies have been exclusively conducted in individual contexts and countries around the world. This study…
Descriptors: Difficulty Level, Test Preparation, High Stakes Tests, English (Second Language)
Peer reviewed
Direct link
Jin, Yan; Yan, Ming – Language Assessment Quarterly, 2017
One major threat to validity in high-stakes testing is construct-irrelevant variance. In this study we explored whether the transition from a paper-and-pencil to a computer-based test mode in a high-stakes test in China, the College English Test, has brought about variance irrelevant to the construct being assessed in this test. Analyses of the…
Descriptors: Writing Tests, Computer Assisted Testing, Computer Literacy, Construct Validity
Peer reviewed
Direct link
Baghaei, Purya; Ravand, Hamdollah – SAGE Open, 2019
In many reading comprehension tests, different test formats are employed. Two commonly used formats for measuring reading comprehension are extended passages followed by questions, and cloze items. Individual differences in handling the peculiarities of each format could constitute a source of score variance. In this study, a bifactor Rasch model…
Descriptors: Cloze Procedure, Test Bias, Individual Differences, Difficulty Level
Peer reviewed
Download full text (PDF on ERIC)
Ghorbanchian, Elahe; Youhanaee, Manijeh; Amirian, Zahra – English Language Teaching, 2019
Cognitive complexity is traditionally used to describe human cognition along a simplicity-complexity axis in tests such as the TOEFL iBT and the GRE, where text creation and rhetorical organization are quintessentially important. Accordingly, this study sought to investigate the impact of mentor text modelling on the cognitive complexity of academic writing…
Descriptors: English (Second Language), Second Language Learning, Second Language Instruction, Foreign Countries
Peer reviewed
Direct link
Khabbazbashi, Nahal – Language Testing, 2017
This study explores the extent to which topic and background knowledge of topic affect spoken performance in a high-stakes speaking test. It is argued that evidence of a substantial influence may introduce construct-irrelevant variance and undermine test fairness. Data were collected from 81 non-native speakers of English who performed on 10…
Descriptors: Speech Tests, High Stakes Tests, English (Second Language), Language Proficiency
Lesnov, Roman O. – ProQuest LLC, 2018
Despite the growing recognition that second language (L2) listening is a skill incorporating the ability to process visual information along with the auditory stimulus, standardized L2 listening assessments have predominantly operationalized this skill as visual-free (Buck, 2001; Kang, Gutierrez Arvizu, Chaipuapae, & Lesnov,…
Descriptors: Academic Discourse, Second Language Learning, Listening Comprehension Tests, Video Technology
Peer reviewed
Direct link
Shaw, Stuart; Imam, Helen – Language Assessment Quarterly, 2013
International assessments in a wide range of subjects are being prepared for and delivered through the medium of English in a variety of educational contexts. These assessments are taken by many candidates whose first language is not necessarily English. This raises important issues relating to assessment validity and fairness. This study…
Descriptors: English (Second Language), Test Validity, Test Bias, High Stakes Tests