Showing all 4 results
Peer reviewed
David Rutkowski; Leslie Rutkowski; Greg Thompson; Yusuf Canbolat – Large-scale Assessments in Education, 2024
This paper scrutinizes the increasing trend of using international large-scale assessment (ILSA) data for causal inferences in educational research, arguing that such inferences are often tenuous. We explore the complexities of causality within ILSAs, highlighting the methodological constraints that challenge the validity of causal claims derived…
Descriptors: International Assessment, Data Use, Causal Models, Educational Research
Peer reviewed
Jinnie Shin; Bowen Wang; Wallace N. Pinto Junior; Mark J. Gierl – Large-scale Assessments in Education, 2024
The benefits of incorporating process information in a large-scale assessment, using the complex micro-level evidence from the examinees (i.e., process log data), are well documented in research across large-scale assessments and learning analytics. This study introduces a deep-learning-based approach to predictive modeling of the examinee's…
Descriptors: Prediction, Models, Problem Solving, Performance
Peer reviewed
Jaime León; Fernando Martínez-Abad – Large-scale Assessments in Education, 2025
Background: Grade retention is an educational practice that concerns teachers, families, and experts. It implies an economic cost for families, as well as a personal cost for the student, who is forced to study for one more year. The objective of the study was to evaluate the effect of grade repetition on math, science, and reading competencies, and…
Descriptors: Grade Repetition, Academic Achievement, Scores, Foreign Countries
Peer reviewed
Qi Huang; Daniel M. Bolt; Weicong Lyu – Large-scale Assessments in Education, 2024
Large-scale international assessments depend on invariance of measurement across countries. An important consideration when observing cross-national differential item functioning (DIF) is whether the DIF actually reflects a source of bias, or might instead be a methodological artifact reflecting item response theory (IRT) model misspecification.…
Descriptors: Test Items, Item Response Theory, Test Bias, Test Validity