Jindra, Christoph; Sachse, Karoline A.; Hecht, Martin – Large-scale Assessments in Education, 2022
Reading and math proficiency are assumed to be crucial for the development of other academic skills. Further, different studies found reading and math development to be related. We contribute to the literature by looking at the relationship between reading and math using continuous time models. In contrast to previous studies, this allows us to…
Descriptors: Reading Achievement, Mathematics Achievement, Secondary School Students, Models
Jinnie Shin; Bowen Wang; Wallace N. Pinto Junior; Mark J. Gierl – Large-scale Assessments in Education, 2024
The benefits of incorporating process information, that is, the complex micro-level evidence about examinees captured in process log data, into a large-scale assessment are well documented in research across large-scale assessments and learning analytics. This study introduces a deep-learning-based approach to predictive modeling of the examinee's…
Descriptors: Prediction, Models, Problem Solving, Performance
Scharl, Anna; Zink, Eva – Large-scale Assessments in Education, 2022
Educational large-scale assessments (LSAs) often provide plausible values for the administered competence tests to facilitate the estimation of population effects. This requires the specification of a background model that is appropriate for the specific research question. Because the "German National Educational Panel Study" (NEPS) is…
Descriptors: National Competency Tests, Foreign Countries, Programming Languages, Longitudinal Studies
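Analyses based on plausible values, as described in the entry above, are typically run once per set of plausible values and then pooled with Rubin's rules. A minimal Python sketch of that pooling step, using hypothetical numbers rather than actual NEPS results:

```python
import numpy as np

def pool_plausible_values(pv_estimates, pv_variances):
    """Combine analyses run separately on each set of plausible values
    using Rubin's rules (pooled estimate, within/between variance)."""
    m = len(pv_estimates)
    est = np.mean(pv_estimates)             # pooled point estimate
    within = np.mean(pv_variances)          # average sampling variance
    between = np.var(pv_estimates, ddof=1)  # variance across PV sets
    total = within + (1 + 1 / m) * between
    return est, np.sqrt(total)

# Hypothetical mean-achievement estimates from five sets of plausible values.
estimates = [502.1, 503.4, 501.8, 502.9, 502.5]
variances = [4.0, 4.2, 3.9, 4.1, 4.0]
est, se = pool_plausible_values(estimates, variances)
print(round(est, 2))  # 502.54
```

The pooled standard error is larger than any single-run standard error because the between-set variance captures the uncertainty in the latent competence scores themselves.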
Gurkan, Gulsah; Benjamini, Yoav; Braun, Henry – Large-scale Assessments in Education, 2021
Employing nested sequences of models is a common practice when exploring the extent to which one set of variables mediates the impact of another set. Such an analysis in the context of logistic regression models confronts two challenges: (1) direct comparisons of coefficients across models are generally biased due to the changes in scale that…
Descriptors: Statistical Inference, Regression (Statistics), Adults, Models
Shin, Hyo Jeong; Jewsbury, Paul A.; van Rijn, Peter W. – Large-scale Assessments in Education, 2022
The present paper examines the conditional dependencies between response accuracy (RA) and process data, in particular response times (RT), in large-scale educational assessments. Using two prominent large-scale assessments, NAEP and PISA, we examined the RA-RT conditional dependencies within each item in the…
Descriptors: Cognitive Processes, Reaction Time, Educational Assessment, Achievement Tests
Matta, Tyler H.; Rutkowski, Leslie; Rutkowski, David; Liaw, Yuan-Ling – Large-scale Assessments in Education, 2018
This article provides an overview of the R package lsasim, designed to facilitate the generation of data that mimics a large scale assessment context. The package features functions for simulating achievement data according to a number of common IRT models with known parameters. A clear advantage of lsasim over other simulation software is that…
Descriptors: Measurement, Data, Simulation, Item Response Theory
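lsasim itself is an R package, but the core idea it implements, generating item responses from a common IRT model with known parameters, can be sketched in a few lines of Python. The following is a hypothetical 2PL setup for illustration, not the package's actual API:

```python
import numpy as np

def simulate_2pl(theta, a, b, rng):
    """Dichotomous responses under a 2PL IRT model with known parameters."""
    # P(correct) = logistic(a * (theta - b)), broadcast persons x items
    logits = a[None, :] * (theta[:, None] - b[None, :])
    p = 1.0 / (1.0 + np.exp(-logits))
    return (rng.random(p.shape) < p).astype(int)

rng = np.random.default_rng(42)
theta = rng.normal(0.0, 1.0, size=1000)  # latent abilities
a = rng.uniform(0.8, 2.0, size=20)       # known discriminations
b = rng.normal(0.0, 1.0, size=20)        # known difficulties
responses = simulate_2pl(theta, a, b, rng)
print(responses.shape)  # (1000, 20)
```

Because the generating parameters are known, recovery of those parameters by an estimation routine can be checked directly, which is the main use of such simulated large-scale-assessment data.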
Robitzsch, Alexander; Lüdtke, Oliver – Large-scale Assessments in Education, 2023
One major aim of international large-scale assessments (ILSA) like PISA is to monitor changes in student performance over time. To accomplish this task, a set of common items (i.e., link items) is repeatedly administered in each assessment. Linking methods based on item response theory (IRT) models are used to align the results from the different…
Descriptors: Educational Trends, Trend Analysis, International Assessment, Achievement Tests
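One of the simpler IRT linking methods used in such trend designs is mean-mean linking: the difficulties of the common link items, estimated separately in each cycle, are aligned by a single shift constant. A minimal sketch with made-up difficulty values (the Rasch case, where no slope adjustment is needed):

```python
import numpy as np

# Estimated difficulties of the same link items in two assessment cycles.
# Hypothetical values for illustration only.
b_old = np.array([-1.2, -0.4, 0.1, 0.7, 1.3])  # cycle 1 calibration
b_new = np.array([-0.9, -0.1, 0.4, 1.0, 1.6])  # cycle 2 calibration

# Mean-mean linking: shift the new scale so the link items' mean
# difficulty matches the old calibration.
shift = b_old.mean() - b_new.mean()
b_new_linked = b_new + shift

print(round(shift, 2))  # -0.3
```

After the shift, cycle-2 ability estimates transformed by the same constant are comparable to cycle-1 results, which is what makes trend statements possible.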
Qi Huang; Daniel M. Bolt; Weicong Lyu – Large-scale Assessments in Education, 2024
Large-scale international assessments depend on invariance of measurement across countries. An important consideration when observing cross-national differential item functioning (DIF) is whether the DIF actually reflects a source of bias, or might instead be a methodological artifact reflecting item response theory (IRT) model misspecification.…
Descriptors: Test Items, Item Response Theory, Test Bias, Test Validity
Contini, Dalit; Cugnata, Federica – Large-scale Assessments in Education, 2020
The development of international surveys on children's learning like PISA, PIRLS and TIMSS, which deliver comparable achievement measures across educational systems, has revealed large cross-country variability in average performance and in the degree of inequality across social groups. A key question is whether and how institutional differences…
Descriptors: International Assessment, Achievement Tests, Scores, Family Characteristics
List, Marit K.; Robitzsch, Alexander; Lüdtke, Oliver; Köller, Olaf; Nagy, Gabriel – Large-scale Assessments in Education, 2017
Background: In low-stakes educational assessments, test takers might show a performance decline (PD) on end-of-test items. PD is a concern in educational assessments, especially when groups of students are to be compared on the proficiency variable because item responses gathered in the groups could be differently affected by PD. In order to…
Descriptors: Evaluation Methods, Student Evaluation, Item Response Theory, Mathematics Tests
Jin, Ying; Kang, Minsoo – Large-scale Assessments in Education, 2016
Background: The current study used a simulation to compare four differential item functioning (DIF) methods on how well they account for dual dependency (i.e., person and item clustering effects) simultaneously, an issue not sufficiently studied in the current DIF literature. The four methods compared are logistic…
Descriptors: Comparative Analysis, Test Bias, Simulation, Regression (Statistics)
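The logistic-regression DIF approach mentioned in the entry above regresses an item response on a matching variable plus a group indicator; a nonzero group effect after matching signals uniform DIF. A self-contained sketch on simulated data with known, injected DIF (the ability variable stands in for the usual total-score matching criterion; all values are hypothetical):

```python
import numpy as np
from sklearn.linear_model import LogisticRegression

rng = np.random.default_rng(0)
n = 4000
group = rng.integers(0, 2, size=n)    # 0 = reference, 1 = focal group
theta = rng.normal(0.0, 1.0, size=n)  # common ability distribution

def item_response(theta, b, dif):
    """Rasch-type response with an extra difficulty shift `dif`."""
    p = 1 / (1 + np.exp(-(theta - b - dif)))
    return (rng.random(theta.shape) < p).astype(int)

# One studied item with uniform DIF of 0.8 logits against the focal group.
y = item_response(theta, 0.0, 0.8 * group)

# DIF test: item response ~ matching variable + group membership.
X = np.column_stack([theta, group])
model = LogisticRegression(C=1e6).fit(X, y)  # large C: no regularization
ability_coef, group_coef = model.coef_[0]
print(group_coef < 0)  # negative: the item is harder for the focal group
```

With the clustering effects the study examines (persons nested in groups, items nested in testlets), the standard errors of such a model are too small unless the dependency is modeled, which is exactly the comparison the simulation addresses.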
Goldhammer, Frank; Martens, Thomas; Lüdtke, Oliver – Large-scale Assessments in Education, 2017
Background: A potential problem of low-stakes large-scale assessments such as the Programme for the International Assessment of Adult Competencies (PIAAC) is low test-taking engagement. The present study pursued two goals in order to better understand conditioning factors of test-taking disengagement: First, a model-based approach was used to…
Descriptors: Student Evaluation, International Assessment, Adults, Competence
Wendt, Heike; Kasper, Daniel; Trendtel, Matthias – Large-scale Assessments in Education, 2017
Background: Large-scale cross-national studies designed to measure student achievement use different social, cultural, economic and other background variables to explain observed differences in that achievement. Prior to their inclusion into a prediction model, these variables are commonly scaled into latent background indices. To allow…
Descriptors: Measurement, Achievement Tests, Cultural Differences, Socioeconomic Influences
Strietholt, Rolf; Rosén, Monica; Bos, Wilfried – Large-scale Assessments in Education, 2013
Background: Since the early days of international large-scale assessments, an overarching aim has been to use the world as an educational laboratory so countries can learn from one another and develop educational systems further. Cross-sectional comparisons across countries as well as trend studies derive from the assumption that there are…
Descriptors: Measurement, International Assessment, Foreign Countries, Sampling
Bouhlila, Donia Smaali; Sellaouti, Fethi – Large-scale Assessments in Education, 2013
In this paper, we document a study that involved applying a multiple imputation technique with chained equations to data drawn from the 2007 iteration of the TIMSS database. More precisely, we imputed missing variables contained in the student background datafile for Tunisia (one of the TIMSS 2007 participating countries), by using Van Buuren,…
Descriptors: Databases, Student Characteristics, Error of Measurement, Intervals
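Van Buuren's chained-equations approach (the mice package in R) regresses each incomplete variable on the others in turn and iterates the imputations. scikit-learn's IterativeImputer implements a similar scheme in Python; a minimal sketch on simulated background data (note that this produces one completed dataset, whereas multiple imputation as used in the study draws several):

```python
import numpy as np
from sklearn.experimental import enable_iterative_imputer  # noqa: F401
from sklearn.impute import IterativeImputer

rng = np.random.default_rng(1)
n = 500
# Two correlated variables (hypothetical stand-ins for student
# background scales), with ~30% of x2 missing at random.
x1 = rng.normal(0, 1, n)
x2 = 0.8 * x1 + rng.normal(0, 0.6, n)
data = np.column_stack([x1, x2])
data[rng.random(n) < 0.3, 1] = np.nan

# Chained equations: each incomplete column is predicted from the
# others, and the imputations are cycled until stable.
imputer = IterativeImputer(random_state=0, max_iter=10)
completed = imputer.fit_transform(data)

print(np.isnan(completed).any())  # False: every gap has been filled
```

Because the imputation model uses the correlation between the variables, the completed data preserve that association far better than mean imputation or listwise deletion would.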