Publication Date
In 2025 | 0 |
Since 2024 | 0 |
Since 2021 (last 5 years) | 1 |
Since 2016 (last 10 years) | 3 |
Since 2006 (last 20 years) | 3 |
Descriptor
Models | 3 |
Simulation | 3 |
Achievement Tests | 2 |
Foreign Countries | 2 |
International Assessment | 2 |
Item Response Theory | 2 |
Test Items | 2 |
Alternative Assessment | 1 |
Comparative Analysis | 1 |
Computer Software | 1 |
Data | 1 |
Source
Large-scale Assessments in Education | 3 |
Author
Jin, Ying | 1 |
Kang, Minsoo | 1 |
Liaw, Yuan-Ling | 1 |
Lüdtke, Oliver | 1 |
Matta, Tyler H. | 1 |
Robitzsch, Alexander | 1 |
Rutkowski, David | 1 |
Rutkowski, Leslie | 1 |
Publication Type
Journal Articles | 3 |
Reports - Research | 2 |
Reports - Descriptive | 1 |
Education Level
Elementary Secondary Education | 1 |
Secondary Education | 1 |
Assessments and Surveys
Program for International Student Assessment (PISA) | 1 |
Trends in International Mathematics and Science Study (TIMSS) | 1 |
Matta, Tyler H.; Rutkowski, Leslie; Rutkowski, David; Liaw, Yuan-Ling – Large-scale Assessments in Education, 2018
This article provides an overview of the R package lsasim, designed to facilitate the generation of data that mimic a large-scale assessment context. The package features functions for simulating achievement data according to a number of common item response theory (IRT) models with known parameters. A clear advantage of lsasim over other simulation software is that…
Descriptors: Measurement, Data, Simulation, Item Response Theory
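The core data-generation step that lsasim automates in R is straightforward to sketch in general-purpose code. The following Python fragment is a minimal illustration, not the lsasim API: it simulates dichotomous responses under a 2PL IRT model with known (here, randomly drawn and purely illustrative) item parameters.

```python
# Minimal sketch (not the lsasim API): simulating dichotomous responses
# under a 2PL IRT model with known item parameters, the kind of data
# generation that lsasim performs in R. All values are illustrative.
import numpy as np

rng = np.random.default_rng(seed=42)

n_persons, n_items = 1000, 20
theta = rng.normal(0.0, 1.0, size=n_persons)   # latent abilities
a = rng.uniform(0.5, 2.0, size=n_items)        # item discriminations
b = rng.normal(0.0, 1.0, size=n_items)         # item difficulties

# P(correct) under the 2PL: logistic(a_j * (theta_i - b_j))
logits = a[None, :] * (theta[:, None] - b[None, :])
p = 1.0 / (1.0 + np.exp(-logits))

# Draw 0/1 responses from the model probabilities
responses = (rng.uniform(size=p.shape) < p).astype(int)
print(responses.shape)  # (1000, 20)
```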
Robitzsch, Alexander; Lüdtke, Oliver – Large-scale Assessments in Education, 2023
One major aim of international large-scale assessments (ILSA) like PISA is to monitor changes in student performance over time. To accomplish this task, a set of common items (i.e., link items) is repeatedly administered in each assessment. Linking methods based on item response theory (IRT) models are used to align the results from the different…
Descriptors: Educational Trends, Trend Analysis, International Assessment, Achievement Tests
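The linking step the abstract refers to can be illustrated with one classical approach. Below is a minimal Python sketch of mean/mean linking for a set of common (link) items, assuming hypothetical 2PL estimates from two assessment cycles; the specific linking methods the article evaluates may differ.

```python
# Minimal sketch of classical mean/mean IRT linking for common (link)
# items calibrated separately in two assessment cycles. Hypothetical
# parameter estimates; not the specific methods the article studies.
import numpy as np

a_old = np.array([1.10, 0.85, 1.40, 0.95])    # discriminations, cycle-1 scale
b_old = np.array([-0.50, 0.20, 1.10, -1.00])  # difficulties,    cycle-1 scale
a_new = np.array([1.00, 0.80, 1.30, 0.90])    # discriminations, cycle-2 scale
b_new = np.array([-0.35, 0.40, 1.30, -0.80])  # difficulties,    cycle-2 scale

# Scale transformation theta_old = A * theta_new + B implies
# b_old = A * b_new + B and a_old = a_new / A for the link items.
A = a_new.mean() / a_old.mean()
B = b_old.mean() - A * b_new.mean()

# Place the cycle-2 parameters on the cycle-1 scale
b_linked = A * b_new + B
a_linked = a_new / A
print(f"A = {A:.3f}, B = {B:.3f}")
```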
Jin, Ying; Kang, Minsoo – Large-scale Assessments in Education, 2016
Background: The current study used a simulation to compare four differential item functioning (DIF) methods, examining how well each accounts for dual dependency (i.e., simultaneous person and item clustering effects), an issue not sufficiently studied in the current DIF literature. The four methods compared are logistic…
Descriptors: Comparative Analysis, Test Bias, Simulation, Regression (Statistics)
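Of the method families named in the abstract, logistic regression DIF is the simplest to sketch. The following Python fragment (illustrative simulated data; the multilevel extensions needed to handle dual dependency are deliberately omitted) screens one item for uniform DIF with a likelihood-ratio test between a compact and an augmented model.

```python
# Minimal sketch of logistic regression DIF screening for a single item
# (uniform DIF only). Illustrative data; multilevel extensions for
# person/item clustering effects are not shown.
import numpy as np
import statsmodels.api as sm
from scipy import stats

rng = np.random.default_rng(seed=7)
n = 2000
group = rng.integers(0, 2, size=n)    # reference (0) vs. focal (1) group
total = rng.normal(0, 1, size=n)      # matching variable (e.g., rest score)
logit = 0.8 * total + 0.5 * group     # built-in uniform DIF effect
y = (rng.uniform(size=n) < 1 / (1 + np.exp(-logit))).astype(int)

# Compact model: matching variable only; augmented model adds group
X0 = sm.add_constant(total)
X1 = sm.add_constant(np.column_stack([total, group]))
m0 = sm.Logit(y, X0).fit(disp=0)
m1 = sm.Logit(y, X1).fit(disp=0)

# Likelihood-ratio test: a significant improvement flags uniform DIF
lr = 2 * (m1.llf - m0.llf)
p_value = stats.chi2.sf(lr, df=1)
print(f"LR = {lr:.2f}, p = {p_value:.4f}")
```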