Showing 1 to 15 of 79 results
Peer reviewed
Andreea Dutulescu; Stefan Ruseti; Mihai Dascalu; Danielle S. McNamara – Grantee Submission, 2024
Assessing the difficulty of reading comprehension questions is crucial to educational methodologies and language understanding technologies. Traditional methods of assessing question difficulty frequently rely on human judgments or shallow metrics, often failing to accurately capture the intricate cognitive demands of answering a question. This…
Descriptors: Difficulty Level, Reading Tests, Test Items, Reading Comprehension
Peer reviewed
Flint, Kaitlyn; Spaulding, Tammie J. – Language, Speech, and Hearing Services in Schools, 2021
Purpose: The readability and comprehensibility of Learner's Permit Knowledge Test practice questions and the relationship with test failure rates across states and the District of Columbia were examined. Method: Failure rates were obtained from department representatives. Practice test questions were extracted from drivers' manuals and department…
Descriptors: Correlation, Readability Formulas, Reading Comprehension, Difficulty Level
Peer reviewed
Ozyeter, Neslihan Tugce – International Journal of Assessment Tools in Education, 2022
In education, examining students' learning in detail, determining their strengths and weaknesses and giving effective feedback have gained importance over time. The aim of this study is to determine the distribution of students' answers to the reading comprehension achievement test items which were written at different cognitive levels and to…
Descriptors: Student Evaluation, Feedback (Response), Scoring Rubrics, Reading Comprehension
Peer reviewed
Ferrara, Steve; Steedle, Jeffrey T.; Frantz, Roger S. – Applied Measurement in Education, 2022
Item difficulty modeling studies involve (a) hypothesizing item features, or item response demands, that are likely to predict item difficulty with some degree of accuracy; and (b) entering the features as independent variables into a regression equation or other statistical model to predict difficulty. In this review, we report findings from 13…
Descriptors: Reading Comprehension, Reading Tests, Test Items, Item Response Theory
Peer reviewed
Huu Thanh Minh Nguyen; Nguyen Van Anh Le – TESL-EJ, 2024
Comparing language tests and test preparation materials holds important implications for the latter's validity and reliability. However, not enough studies compare such materials across a wide range of indices. Therefore, this study investigated the text complexity of IELTS academic reading tests (IRT) and IELTS reading practice tests (IRPrT).…
Descriptors: Second Language Learning, English (Second Language), Language Tests, Readability
Peer reviewed
Monika Grotek; Agnieszka Slezak-Swiat – Reading in a Foreign Language, 2024
The study investigates the effect of the perception of text and task difficulty on adults' performance in reading tests in L1 and L2. The relationship between the following variables is studied: (a) readers' perception of text and task difficulty in L1 and L2 measured in a self-reported post-task questionnaire, (b) the number of correct answers to…
Descriptors: Difficulty Level, Second Language Learning, Eye Movements, Task Analysis
Peer reviewed
Becker, Anthony; Nekrasova-Beker, Tatiana – Educational Assessment, 2018
While previous research has identified numerous factors that contribute to item difficulty, studies involving large-scale reading tests have provided mixed results. This study examined five selected-response item types used to measure reading comprehension in the Pearson Test of English Academic: a) multiple-choice (choose one answer), b)…
Descriptors: Reading Comprehension, Test Items, Reading Tests, Test Format
Peer reviewed
Geramipour, Masoud – Language Testing in Asia, 2021
Rasch testlet and bifactor models are two measurement models that could deal with local item dependency (LID) in assessing the dimensionality of reading comprehension testlets. This study aimed to apply the measurement models to real item response data of the Iranian EFL reading comprehension tests and compare the validity of the bifactor models…
Descriptors: Foreign Countries, Second Language Learning, English (Second Language), Reading Tests
Ping Wang – ProQuest LLC, 2021
According to the RAND model framework, reading comprehension test performance is influenced by readers' reading skills or reader characteristics, test properties, and their interactions. However, little empirical research has systematically compared the impacts of reader characteristics, test properties, and reader-test interactions across…
Descriptors: Reading Comprehension, Reading Tests, Reading Research, Test Items
Peer reviewed
Özdemir, Ezgi Çetinkaya; Akyol, Hayati – Universal Journal of Educational Research, 2019
Reading comprehension has an important place in lifelong learning. It is an interactive process between the reader and the text. Students need reading comprehension skills at all educational levels and for all school subjects. Determining the level of students' reading comprehension skills is the subject of testing and evaluation. Tests used to…
Descriptors: Reading Comprehension, Reading Tests, Test Construction, Grade 4
Peer reviewed
Woodcock, Stuart; Howard, Steven J.; Ehrich, John – School Psychology, 2020
Standardized testing is ubiquitous in educational assessment, but questions have been raised about the extent to which these test scores accurately reflect students' genuine knowledge and skills. To more rigorously investigate this issue, the current study employed a within-subject experimental design to examine item format effects on primary…
Descriptors: Elementary School Students, Grade 3, Test Items, Test Format
Peer reviewed
Tremblay, Kathryn A.; Binder, Katherine S.; Ardoin, Scott P.; Talwar, Amani; Tighe, Elizabeth L. – Journal of Research in Reading, 2021
Background: Of the myriad of reading comprehension (RC) assessments used in schools, multiple-choice (MC) questions continue to be one of the most prevalent formats used by educators and researchers. Outcomes from RC assessments dictate many critical factors encountered during a student's academic career, and it is crucial that we gain a deeper…
Descriptors: Grade 3, Elementary School Students, Reading Comprehension, Decoding (Reading)
Peer reviewed
Choi, Inn-Chull; Moon, Youngsun – Language Assessment Quarterly, 2020
This study examines the relationships among various major factors that may affect the difficulty level of language tests in an attempt to enhance the robustness of item difficulty estimation, which constitutes a crucial factor ensuring the equivalency of high-stakes tests. The observed difficulties of the reading and listening sections of two EFL…
Descriptors: English (Second Language), Second Language Learning, Language Tests, Difficulty Level
Peer reviewed
Crible, Ludivine; Pickering, Martin J. – Discourse Processes: A Multidisciplinary Journal, 2020
This study aims to establish whether the processing of different connectives (e.g., "and," "but") and different coherence relations (addition, contrast) can be modulated by a structural feature of the connected segments--namely, parallelism. While "but" is mainly used to contrast two expressions, "and"…
Descriptors: Language Processing, Difficulty Level, Form Classes (Languages), Verbs
Peer reviewed
Sheehan, Kathleen M. – Educational Measurement: Issues and Practice, 2017
Automated text complexity measurement tools (also called readability metrics) have been proposed as a way to help teachers, textbook publishers, and assessment developers select texts that are closely aligned with the new, more demanding text complexity expectations specified in the Common Core State Standards. This article examines a critical…
Descriptors: Reading Material Selection, Difficulty Level, Common Core State Standards, Validity