David Rutkowski; Leslie Rutkowski; Greg Thompson; Yusuf Canbolat – Large-scale Assessments in Education, 2024
This paper scrutinizes the increasing trend of using international large-scale assessment (ILSA) data for causal inferences in educational research, arguing that such inferences are often tenuous. We explore the complexities of causality within ILSAs, highlighting the methodological constraints that challenge the validity of causal claims derived…
Descriptors: International Assessment, Data Use, Causal Models, Educational Research
Jindra, Christoph; Sachse, Karoline A.; Hecht, Martin – Large-scale Assessments in Education, 2022
Reading and math proficiency are assumed to be crucial for the development of other academic skills. Further, different studies found reading and math development to be related. We contribute to the literature by looking at the relationship between reading and math using continuous time models. In contrast to previous studies, this allows us to…
Descriptors: Reading Achievement, Mathematics Achievement, Secondary School Students, Models
Jinnie Shin; Bowen Wang; Wallace N. Pinto Junior; Mark J. Gierl – Large-scale Assessments in Education, 2024
The benefits of incorporating process information in a large-scale assessment with the complex micro-level evidence from the examinees (i.e., process log data) are well documented in the research across large-scale assessments and learning analytics. This study introduces a deep-learning-based approach to predictive modeling of the examinee's…
Descriptors: Prediction, Models, Problem Solving, Performance
Scharl, Anna; Zink, Eva – Large-scale Assessments in Education, 2022
Educational large-scale assessments (LSAs) often provide plausible values for the administered competence tests to facilitate the estimation of population effects. This requires the specification of a background model that is appropriate for the specific research question. Because the "German National Educational Panel Study" (NEPS) is…
Descriptors: National Competency Tests, Foreign Countries, Programming Languages, Longitudinal Studies
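Plausible values such as those provided by NEPS are typically analyzed by computing the statistic of interest on each set of plausible values and then pooling the results with Rubin's combination rules. A minimal sketch in Python, using made-up estimates and variances:

```python
import numpy as np

def pool_plausible_values(estimates, variances):
    """Combine per-plausible-value results with Rubin's rules.

    estimates: point estimate computed on each of the M plausible values
    variances: sampling variance of each of those estimates
    Returns (pooled estimate, total variance).
    """
    estimates = np.asarray(estimates, dtype=float)
    variances = np.asarray(variances, dtype=float)
    m = len(estimates)
    pooled = estimates.mean()                  # average of the M estimates
    within = variances.mean()                  # average sampling variance
    between = estimates.var(ddof=1)            # variance across plausible values
    total = within + (1 + 1 / m) * between     # Rubin's total-variance formula
    return pooled, total

# Five hypothetical mean-proficiency estimates and their sampling variances
est = [501.2, 499.8, 500.5, 502.0, 500.0]
var = [4.0, 4.1, 3.9, 4.2, 4.0]
mean, total_var = pool_plausible_values(est, var)
```

The between-imputation component is what makes the total variance reflect measurement uncertainty in the latent competence, which is lost if a single plausible value is treated as an observed score.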
Jaime León; Fernando Martínez-Abad – Large-scale Assessments in Education, 2025
Background: Grade retention is an educational aspect that concerns teachers, families, and experts. It implies an economic cost for families, as well as a personal cost for the student, who is forced to study one more year. The objective of the study was to evaluate the effect of course repetition on math, science and reading competencies, and…
Descriptors: Grade Repetition, Academic Achievement, Scores, Foreign Countries
Sciffer, Michael G.; Perry, Laura B.; McConney, Andrew – Large-scale Assessments in Education, 2022
This study examines the effect of school socioeconomic composition on student achievement growth in Australian schooling, and its relationship with academic composition utilising the National Assessment Program--Literacy and Numeracy (NAPLAN) dataset. Previous research has found that school composition predicts a range of schooling outcomes. A…
Descriptors: Socioeconomic Influences, Foreign Countries, Error of Measurement, Literacy
Gurkan, Gulsah; Benjamini, Yoav; Braun, Henry – Large-scale Assessments in Education, 2021
Employing nested sequences of models is a common practice when exploring the extent to which one set of variables mediates the impact of another set. Such an analysis in the context of logistic regression models confronts two challenges: (1) direct comparisons of coefficients across models are generally biased due to the changes in scale that…
Descriptors: Statistical Inference, Regression (Statistics), Adults, Models
Shin, Hyo Jeong; Jewsbury, Paul A.; van Rijn, Peter W. – Large-scale Assessments in Education, 2022
The present paper investigates the conditional dependencies between response accuracy (RA) and process data, in particular response times (RT), in large-scale educational assessments. Using two prominent large-scale assessments, NAEP and PISA, we examined the RA-RT conditional dependencies within each item in the…
Descriptors: Cognitive Processes, Reaction Time, Educational Assessment, Achievement Tests

Matta, Tyler H.; Rutkowski, Leslie; Rutkowski, David; Liaw, Yuan-Ling – Large-scale Assessments in Education, 2018
This article provides an overview of the R package lsasim, designed to facilitate the generation of data that mimics a large scale assessment context. The package features functions for simulating achievement data according to a number of common IRT models with known parameters. A clear advantage of lsasim over other simulation software is that…
Descriptors: Measurement, Data, Simulation, Item Response Theory
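lsasim itself is an R package, but its core idea, generating item responses from an IRT model with known (true) parameters so that recovery can be checked, can be sketched in Python. The 2PL model and all parameter values below are illustrative:

```python
import numpy as np

rng = np.random.default_rng(42)

def simulate_2pl(theta, a, b, rng):
    """Simulate dichotomous responses under the 2PL IRT model.

    theta: person abilities, shape (n_persons,)
    a, b:  item discriminations and difficulties, shape (n_items,)
    Returns an (n_persons, n_items) matrix of 0/1 responses.
    """
    # P(correct) = logistic(a * (theta - b))
    logits = a[None, :] * (theta[:, None] - b[None, :])
    p = 1.0 / (1.0 + np.exp(-logits))
    return (rng.random(p.shape) < p).astype(int)

theta = rng.normal(0.0, 1.0, size=1000)   # known ability distribution
a = rng.uniform(0.8, 2.0, size=20)        # known discriminations
b = rng.normal(0.0, 1.0, size=20)         # known difficulties
responses = simulate_2pl(theta, a, b, rng)
```

Because the generating parameters are known, the simulated matrix can be fed to any estimation routine and the estimates compared against the truth, which is the workflow the package is designed to support.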
Robitzsch, Alexander; Lüdtke, Oliver – Large-scale Assessments in Education, 2023
One major aim of international large-scale assessments (ILSA) like PISA is to monitor changes in student performance over time. To accomplish this task, a set of common items (i.e., link items) is repeatedly administered in each assessment. Linking methods based on item response theory (IRT) models are used to align the results from the different…
Descriptors: Educational Trends, Trend Analysis, International Assessment, Achievement Tests
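One of the simplest IRT linking methods for such common items is mean-mean linking: the new cycle's item difficulties are shifted onto the base scale by the average difficulty difference on the link items. A minimal sketch with made-up difficulties (this is one linking method among several discussed in this literature, not necessarily the one the paper recommends):

```python
import numpy as np

def mean_mean_shift(b_base, b_new):
    """Mean-mean linking constant from a set of common (link) items.

    b_base: link-item difficulties on the base-cycle scale
    b_new:  the same items' difficulties estimated in the new cycle
    Returns the additive shift that maps the new scale onto the base scale.
    """
    return float(np.mean(np.asarray(b_base)) - np.mean(np.asarray(b_new)))

# Hypothetical difficulties for four link items in two assessment cycles
b_base = [-1.0, -0.2, 0.4, 1.1]
b_new = [-0.8, 0.0, 0.6, 1.3]              # new-cycle scale sits 0.2 logits higher
shift = mean_mean_shift(b_base, b_new)
b_new_linked = np.asarray(b_new) + shift   # now aligned with the base scale
```

In practice the choice of linking method (and its robustness to item parameter drift in the link items) directly affects the trend estimates that ILSAs report.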
Lohmann, Julian F.; Zitzmann, Steffen; Voelkle, Manuel C.; Hecht, Martin – Large-scale Assessments in Education, 2022
One major challenge of longitudinal data analysis is to find an appropriate statistical model that corresponds to the theory of change and the research questions at hand. In the present article, we argue that "continuous-time models" are well suited to study the continuously developing constructs of primary interest in the education…
Descriptors: Longitudinal Studies, Structural Equation Models, Time, Achievement Tests
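The key mechanic of continuous-time models is that the discrete-time autoregressive matrix implied for an interval Δt is the matrix exponential of the drift matrix, exp(AΔt), so effects estimated from unequally spaced waves remain comparable. A small sketch with an illustrative (made-up) drift matrix:

```python
import numpy as np
from scipy.linalg import expm

# Illustrative 2x2 drift matrix for two constructs (e.g., reading and math):
# diagonal entries are auto-effects (negative: mean reversion),
# off-diagonal entries are cross-effects between the constructs.
A = np.array([[-0.30, 0.10],
              [0.05, -0.25]])

# Discrete-time autoregressive matrices implied for two interval lengths
Phi_1yr = expm(A * 1.0)
Phi_2yr = expm(A * 2.0)
```

The consistency property that makes this attractive: the implied two-year effects equal the one-year effects applied twice (`Phi_2yr == Phi_1yr @ Phi_1yr`), which discrete-time models with heterogeneous intervals cannot guarantee.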
Wang, Ze – Large-scale Assessments in Education, 2022
In educational and psychological research, it is common to use latent factors to represent constructs and then to examine covariate effects on these latent factors. Using empirical data, this study applied three approaches to covariate effects on latent factors: the multiple-indicator multiple-cause (MIMIC) approach, multiple group confirmatory…
Descriptors: Comparative Analysis, Evaluation Methods, Grade 8, Mathematics Achievement
Qi Huang; Daniel M. Bolt; Weicong Lyu – Large-scale Assessments in Education, 2024
Large scale international assessments depend on invariance of measurement across countries. An important consideration when observing cross-national differential item functioning (DIF) is whether the DIF actually reflects a source of bias, or might instead be a methodological artifact reflecting item response theory (IRT) model misspecification.…
Descriptors: Test Items, Item Response Theory, Test Bias, Test Validity
Contini, Dalit; Cugnata, Federica – Large-scale Assessments in Education, 2020
The development of international surveys on children's learning like PISA, PIRLS and TIMSS--delivering comparable achievement measures across educational systems--has revealed large cross-country variability in average performance and in the degree of inequality across social groups. A key question is whether and how institutional differences…
Descriptors: International Assessment, Achievement Tests, Scores, Family Characteristics
List, Marit K.; Robitzsch, Alexander; Lüdtke, Oliver; Köller, Olaf; Nagy, Gabriel – Large-scale Assessments in Education, 2017
Background: In low-stakes educational assessments, test takers might show a performance decline (PD) on end-of-test items. PD is a concern in educational assessments, especially when groups of students are to be compared on the proficiency variable because item responses gathered in the groups could be differently affected by PD. In order to…
Descriptors: Evaluation Methods, Student Evaluation, Item Response Theory, Mathematics Tests