Publication Date
In 2025 | 0 |
Since 2024 | 0 |
Since 2021 (last 5 years) | 1 |
Since 2016 (last 10 years) | 4 |
Since 2006 (last 20 years) | 6 |
Source
Applied Measurement in… | 1 |
International Journal of… | 1 |
Journal of Educational… | 1 |
Journal of Educational and… | 1 |
Large-scale Assessments in… | 1 |
Practical Assessment,… | 1 |
Author
Cheong, Yuk Fai | 1 |
Desa, Deana | 1 |
Haag, Nicole | 1 |
Hastedt, Dirk | 1 |
Jin, Ying | 1 |
Kamata, Akihito | 1 |
Kang, Minsoo | 1 |
Monroe, Scott | 1 |
Roppelt, Alexander | 1 |
Sachse, Karoline A. | 1 |
Sen, Sedat | 1 |
Publication Type
Journal Articles | 6 |
Reports - Research | 6 |
Education Level
Elementary Secondary Education | 3 |
Grade 4 | 2 |
Secondary Education | 2 |
Elementary Education | 1 |
Grade 12 | 1 |
High Schools | 1 |
Intermediate Grades | 1 |
Assessments and Surveys
Trends in International… | 6 |
National Assessment of… | 1 |
Program for International… | 1 |
Monroe, Scott – Journal of Educational and Behavioral Statistics, 2021
This research proposes a new statistic for testing latent variable distribution fit for unidimensional item response theory (IRT) models. If the typical assumption of normality is violated, then item parameter estimates will be biased, and dependent quantities such as IRT score estimates will be adversely affected. The proposed statistic compares…
Descriptors: Item Response Theory, Simulation, Scores, Comparative Analysis
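As a rough illustration of the problem this article addresses (the proposed statistic itself is only hinted at in the truncated abstract), the sketch below simulates 2PL responses under a normal and under a skewed latent distribution and shows the skew propagating into the observed total scores. The item parameters and the skew-normal shape are made up for the example.

```python
# Minimal simulation sketch (not the article's proposed statistic): how a
# non-normal latent distribution shows up in observable summaries of 2PL data.
import numpy as np
from scipy import stats

rng = np.random.default_rng(0)
n_persons, n_items = 5000, 20

# Hypothetical 2PL item parameters: discriminations a_j and difficulties b_j.
a = rng.uniform(0.8, 2.0, n_items)
b = rng.normal(0.0, 1.0, n_items)

def simulate(theta):
    """Generate 2PL responses with P(X=1) = logistic(a * (theta - b))."""
    p = 1.0 / (1.0 + np.exp(-a * (theta[:, None] - b)))
    return (rng.uniform(size=p.shape) < p).astype(int)

# (a) the usual normality assumption, (b) a skewed latent distribution
theta_normal = rng.normal(0, 1, n_persons)
theta_skewed = stats.skewnorm.rvs(a=8, size=n_persons, random_state=1)
theta_skewed = (theta_skewed - theta_skewed.mean()) / theta_skewed.std()

for label, theta in [("normal", theta_normal), ("skewed", theta_skewed)]:
    total = simulate(theta).sum(axis=1)
    print(f"{label:7s} total-score skewness: {stats.skew(total):+.2f}")
```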
Sen, Sedat – International Journal of Testing, 2018
Recent research has shown that over-extraction of latent classes can be observed in the Bayesian estimation of the mixed Rasch model when the distribution of ability is non-normal. This study examined the effect of non-normal ability distributions on the number of latent classes in the mixed Rasch model when estimated with maximum likelihood…
Descriptors: Item Response Theory, Comparative Analysis, Computation, Maximum Likelihood Statistics
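A minimal intuition sketch for the over-extraction phenomenon, not a mixed Rasch analysis: with a large sample, a single skewed population can earn a lower BIC under a two-component Gaussian mixture than under one component, which is the same mechanism by which non-normal ability can masquerade as extra latent classes. All numbers are invented.

```python
# Intuition only (not a mixed Rasch model): one skewed population can look
# like a mixture, the mechanism behind over-extraction of latent classes.
import numpy as np
from scipy import stats
from sklearn.mixture import GaussianMixture

theta = stats.skewnorm.rvs(a=10, size=4000, random_state=42).reshape(-1, 1)

for k in (1, 2, 3):
    gm = GaussianMixture(n_components=k, random_state=0).fit(theta)
    print(f"{k} class(es): BIC = {gm.bic(theta):.1f}")
# With a sample this size, BIC usually prefers two or more components even
# though only one (skewed) population was generated.
```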
Linking Errors between Two Populations and Tests: A Case Study in International Surveys in Education
Hastedt, Dirk; Desa, Deana – Practical Assessment, Research & Evaluation, 2015
This simulation study was prompted by the current increased interest in linking national studies to international large-scale assessments (ILSAs) such as IEA's TIMSS, IEA's PIRLS, and OECD's PISA. Linkage in this scenario is achieved by including items from the international assessments in the national assessments on the premise that the average…
Descriptors: Case Studies, Simulation, International Programs, Testing Programs
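The abstract only sketches the linking design, so the example below shows one textbook option consistent with it: mean-mean linking of Rasch difficulties for the common (international) items. All parameter values are hypothetical.

```python
# A minimal mean-mean linking sketch under a Rasch parameterization, with
# made-up numbers; the study's own linking design is only summarized above.
import numpy as np

# Difficulty estimates for the common (international) items, once calibrated
# in the international study and once in the national study (hypothetical).
b_international = np.array([-0.8, -0.2, 0.1, 0.5, 1.1])
b_national      = np.array([-0.5,  0.1, 0.4, 0.9, 1.4])

# Mean-mean linking constant: the shift that maps the national scale onto the
# international scale.
shift = b_international.mean() - b_national.mean()

theta_national = np.array([-1.0, 0.0, 0.7])   # abilities on the national scale
theta_linked = theta_national + shift         # now on the international scale
print(f"linking constant: {shift:+.3f}")
print("linked abilities:", np.round(theta_linked, 3))
```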
Cheong, Yuk Fai; Kamata, Akihito – Applied Measurement in Education, 2013
In this article, we discuss and illustrate two centering and anchoring options available in differential item functioning (DIF) detection studies based on the hierarchical generalized linear and generalized linear mixed modeling frameworks. We compared and contrasted the assumptions of the two options, and examined the properties of their DIF…
Descriptors: Test Bias, Hierarchical Linear Modeling, Comparative Analysis, Test Items
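One common reading of the two identification options, shown as design-matrix construction: anchoring fixes a reference item's DIF effect to zero, while centering constrains the DIF effects to sum to zero across items via effects coding. The long-format data are invented, and the random person effects of the full hierarchical/mixed model are omitted here.

```python
# Sketch of anchoring vs. centering for a group-by-item DIF term;
# hypothetical data, fixed-effects part of the design only.
import numpy as np
import pandas as pd

rng = np.random.default_rng(3)
n_persons, n_items = 200, 4
long = pd.DataFrame({
    "person": np.repeat(np.arange(n_persons), n_items),
    "item":   np.tile(np.arange(n_items), n_persons),
    "group":  np.repeat(rng.integers(0, 2, n_persons), n_items),  # focal vs. reference
})

# Dummy-coded item indicators.
items = pd.get_dummies(long["item"], prefix="item").astype(float)

# Option A: anchoring. Drop item_0's group-by-item column, fixing its DIF to zero.
dif_anchor = items.mul(long["group"], axis=0).drop(columns="item_0")

# Option B: centering. Effects-code the items (last item coded -1 on every
# column), so the implied DIF effects sum to zero across items.
effects = items.iloc[:, :-1].copy()
effects[long["item"] == n_items - 1] = -1.0
dif_center = effects.mul(long["group"], axis=0)

print(dif_anchor.head(n_items))
print(dif_center.head(n_items))
```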
Jin, Ying; Kang, Minsoo – Large-scale Assessments in Education, 2016
Background: The current study used a simulation to compare four differential item functioning (DIF) methods and examine how well they account for dual dependency (i.e., person and item clustering effects) simultaneously, an issue that has not been sufficiently studied in the DIF literature. The four methods compared are logistic…
Descriptors: Comparative Analysis, Test Bias, Simulation, Regression (Statistics)
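The abstract cuts off after "logistic", presumably logistic regression, so the sketch below shows the standard logistic-regression DIF test for a single item as a point of reference. Note that, as written, it ignores exactly the person and item clustering ("dual dependency") this study is concerned with; the data are simulated without DIF and the matching score is a crude stand-in for a full test score.

```python
# Standard logistic-regression check for uniform DIF on one item (a baseline,
# not one of the clustering-aware methods compared in the study).
import numpy as np
import statsmodels.api as sm

rng = np.random.default_rng(7)
n = 2000
theta = rng.normal(0, 1, n)
group = rng.integers(0, 2, n)                       # 0 = reference, 1 = focal
p = 1 / (1 + np.exp(-(theta - 0.2)))                # Rasch-type item, no DIF
y = (rng.uniform(size=n) < p).astype(int)
total = y + rng.binomial(19, 1 / (1 + np.exp(-theta)))  # crude matching score

X0 = sm.add_constant(np.column_stack([total]))          # null: matching score only
X1 = sm.add_constant(np.column_stack([total, group]))   # adds a uniform-DIF term
fit0 = sm.Logit(y, X0).fit(disp=0)
fit1 = sm.Logit(y, X1).fit(disp=0)
lr = 2 * (fit1.llf - fit0.llf)   # 1-df likelihood-ratio test; chi2(1) cutoff ~3.84
print(f"LR statistic for uniform DIF: {lr:.2f}")
```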
Sachse, Karoline A.; Roppelt, Alexander; Haag, Nicole – Journal of Educational Measurement, 2016
Trend estimation in international comparative large-scale assessments relies on measurement invariance between countries. However, cross-national differential item functioning (DIF) has been repeatedly documented. We ran a simulation study using national item parameters, which required trends to be computed separately for each country, to compare…
Descriptors: Comparative Analysis, Measurement, Test Bias, Simulation
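A toy version of the issue being simulated, not the authors' design: if some items function differently in one country at the second cycle, the estimated trend depends on whether the common international item parameters or country-specific ("national") parameters are used for scoring. The Rasch model, the Bayes-modal scoring routine, and every parameter value below are assumptions made for the sketch.

```python
# Toy trend comparison under international vs. country-specific item parameters.
import numpy as np

rng = np.random.default_rng(11)
n_items, n_persons, true_trend = 25, 4000, 0.20
b_int = rng.normal(0, 1, n_items)        # international item difficulties
dif = np.zeros(n_items)
dif[:5] = 0.8                            # 5 items harder at cycle 2 in this country
b_nat2 = b_int + dif                     # country-specific parameters, cycle 2

def simulate(theta, b):
    p = 1 / (1 + np.exp(-(theta[:, None] - b)))
    return (rng.uniform(size=p.shape) < p).astype(int)

def score(x, b, iters=50):
    """Rasch Bayes-modal ability estimates (N(0,1) prior), Newton-Raphson."""
    theta = np.zeros(x.shape[0])
    for _ in range(iters):
        p = 1 / (1 + np.exp(-(theta[:, None] - b)))
        theta += (x.sum(1) - p.sum(1) - theta) / ((p * (1 - p)).sum(1) + 1)
    return theta

x1 = simulate(rng.normal(0.0, 1, n_persons), b_int)           # cycle 1: no DIF
x2 = simulate(rng.normal(true_trend, 1, n_persons), b_nat2)   # cycle 2: with DIF

trend_int = score(x2, b_int).mean() - score(x1, b_int).mean()
trend_nat = score(x2, b_nat2).mean() - score(x1, b_int).mean()
print(f"true trend {true_trend:+.2f} | international params {trend_int:+.3f} "
      f"| country-specific params {trend_nat:+.3f}")
```

Scoring the second cycle with its own country-specific difficulties recovers a trend close to the generating value, while ignoring the DIF pulls the estimate away from it, which is the kind of contrast the simulation study above examines far more carefully.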