Showing 1 to 15 of 33 results
Peer reviewed
Ting Sun; Stella Yun Kim – Educational and Psychological Measurement, 2024
Equating is a statistical procedure used to adjust for differences in form difficulty so that scores on different forms can be used and interpreted comparably. In practice, however, equating methods are often implemented without considering the extent to which two forms differ in difficulty. This study aims to examine the effect of the magnitude…
Descriptors: Difficulty Level, Data Interpretation, Equated Scores, High School Students
Peer reviewed
Harrison, Scott; Kroehne, Ulf; Goldhammer, Frank; Lüdtke, Oliver; Robitzsch, Alexander – Large-scale Assessments in Education, 2023
Background: Mode effects, the variations in item and scale properties attributable to the mode of test administration (paper vs. computer), have stimulated research on test equivalence and trend estimation in PISA. The PISA assessment framework provides the backbone for the interpretation of PISA test scores. However, an…
Descriptors: Scoring, Test Items, Difficulty Level, Foreign Countries
Peer reviewed
Pengelley, James; Whipp, Peter R.; Rovis-Hermann, Nina – Educational Psychology Review, 2023
The aim of the present study is to reconcile previous findings (a) that testing mode has no effect on test outcomes or cognitive load (Comput Hum Behav 77:1-10, 2017) and (b) that younger learners' working memory processes are more sensitive to computer-based test formats (J Psychoeduc Assess 37(3):382-394, 2019). We addressed key methodological…
Descriptors: Scores, Cognitive Processes, Difficulty Level, Secondary School Students
Peer reviewed
van den Broek, Gesa S. E.; Gerritsen, Suzanne L.; Oomen, Iris T. J.; Velthoven, Eva; van Boxtel, Femke H. J.; Kester, Liesbeth; van Gog, Tamara – Journal of Educational Psychology, 2023
Multiple-choice questions (MCQs) are popular in vocabulary software because they can be scored automatically and are compatible with many input devices (e.g., touchscreens). Answering MCQs is beneficial for learning, especially when learners retrieve knowledge from memory to evaluate plausible answer alternatives. However, such retrieval may not…
Descriptors: Multiple Choice Tests, Vocabulary Development, Test Format, Cues
Cronin, Sean D. – ProQuest LLC, 2023
This convergent, parallel, mixed-methods study with qualitative and quantitative content analysis methods was conducted to identify what type of thinking is required by the College and Career Readiness Assessment (CCRA+) by (a) determining the frequency and percentage of questions categorized as higher-level thinking within each cell of Hess'…
Descriptors: Cues, College Readiness, Career Readiness, Test Items
Peer reviewed
Hryvko, Antonina V.; Zhuk, Yurii O. – Journal of Curriculum and Teaching, 2022
A feature of the presented study is its comprehensive approach to the reliability of linguistic testing results, which are affected by several functional and variable factors. Contradictory and ambiguous scholarly views on these issues underscore the relevance of this study. The article highlights the problem of equivalence…
Descriptors: Student Evaluation, Language Tests, Test Format, Test Items
Peer reviewed
Ashish Gurung; Kirk Vanacore; Andrew A. McReynolds; Korinn S. Ostrow; Eamon S. Worden; Adam C. Sales; Neil T. Heffernan – Grantee Submission, 2024
Learning experience designers consistently balance the trade-off between open- and closed-ended activities. The growth and scalability of Computer-Based Learning Platforms (CBLPs) have only magnified the importance of these design trade-offs. CBLPs often utilize closed-ended activities (i.e., Multiple-Choice Questions [MCQs]) due to feasibility…
Descriptors: Multiple Choice Tests, Testing, Test Format, Computer Assisted Testing
Peer reviewed
Musa Adekunle Ayanwale – Discover Education, 2023
Examination scores obtained by students from the West African Examinations Council (WAEC), and National Business and Technical Examinations Board (NABTEB) may not be directly comparable due to differences in examination administration, item characteristics of the subject in question, and student abilities. For more accurate comparisons, scores…
Descriptors: Equated Scores, Mathematics Tests, Test Items, Test Format
Peer reviewed
Matejak Cvenic, Karolina; Planinic, Maja; Susac, Ana; Ivanjek, Lana; Jelicic, Katarina; Hopf, Martin – Physical Review Physics Education Research, 2022
A new diagnostic instrument, the Conceptual Survey on Wave Optics (CSWO), was developed and validated with 224 high school students (aged 18-19 years) in Croatia. The process of test construction, which included testing 61 items on a total of 712 students, is presented. The final version of the test consists of 26 multiple-choice items which…
Descriptors: Scientific Concepts, Concept Formation, Validity, Physics
Magdalen Beiting-Parrish – ProQuest LLC, 2022
The following is a five-chapter dissertation on the use of text mining techniques to better understand the language of mathematics items from standardized tests and to improve the linguistic equity of these items in support of assessing English Language Learners. Introduction: The dissertation begins with an overview of the problem that…
Descriptors: Mathematics Tests, Test Items, Item Analysis, Standardized Tests
Peer reviewed
Güler, Mustafa – Journal of Pedagogical Research, 2021
The extent to which targeted educational outcomes are achieved can be determined through the educational assessment process. Although various alternative forms of assessment have arisen in recent decades, written examinations are still widely used by teachers. This study aims to determine the quality of the questions used by middle school…
Descriptors: Middle School Teachers, Mathematics Teachers, Middle School Mathematics, Mathematics Tests
Peer reviewed
Basaraba, Deni L.; Yovanoff, Paul; Shivraj, Pooja; Ketterlin-Geller, Leanne R. – Practical Assessment, Research & Evaluation, 2020
Stopping rules for fixed-form tests with graduated item difficulty are intended to stop administration of a test at the point where students are sufficiently unlikely to provide a correct response following a pattern of incorrect responses. Although widely employed in fixed-form tests in education, little research has been done to empirically…
Descriptors: Formative Evaluation, Test Format, Test Items, Difficulty Level
Peer reviewed
Jonathan Trace – Language Teaching Research Quarterly, 2023
The role of context in cloze tests has long been seen as both a benefit and a complication in their usefulness as a measure of second language comprehension (Brown, 2013). Passage cohesion, in particular, would seem to have a relevant and important effect on the degree to which cloze items function and the interpretability of performances…
Descriptors: Language Tests, Cloze Procedure, Connected Discourse, Test Items
Lina Anaya; Nagore Iriberri; Pedro Rey-Biel; Gema Zamarro – Annenberg Institute for School Reform at Brown University, 2021
Standardized assessments are widely used to determine access to educational resources with important consequences for later economic outcomes in life. However, many design features of the tests themselves may lead to psychological reactions influencing performance. In particular, the level of difficulty of the earlier questions in a test may…
Descriptors: Test Construction, Test Wiseness, Test Items, Difficulty Level
Peer reviewed
Ilhan, Mustafa; Öztürk, Nagihan Boztunç; Sahin, Melek Gülsah – Participatory Educational Research, 2020
In this research, the effect of an item's type and cognitive level on its difficulty index was investigated. The data source consisted of the responses of the 12,535 students in the Turkey sample of TIMSS 2015 (6,079 eighth-grade and 6,456 fourth-grade students). The responses covered a total of 215 items at the eighth-grade…
Descriptors: Test Items, Difficulty Level, Cognitive Processes, Responses