Wang, Yan; Kim, Eunsook; Joo, Seang-Hwane; Chun, Seokjoon; Alamri, Abeer; Lee, Philseok; Stark, Stephen – Journal of Experimental Education, 2022
Multilevel latent class analysis (MLCA) has been increasingly used to investigate unobserved population heterogeneity while accounting for data dependency. Nonparametric MLCA has gained popularity because it can classify both individuals and clusters into latent classes. This study demonstrated the need to relax the…
Descriptors: Nonparametric Statistics, Hierarchical Linear Modeling, Monte Carlo Methods, Simulation
Giada Spaccapanico Proietti; Mariagiulia Matteucci; Stefania Mignani; Bernard P. Veldkamp – Journal of Educational and Behavioral Statistics, 2024
Classical automated test assembly (ATA) methods assume fixed and known coefficients for the constraints and the objective function. This assumption does not hold for estimates of item response theory parameters, which are crucial elements in classical test assembly models. To account for uncertainty in ATA, we propose a chance-constrained…
Descriptors: Automation, Computer Assisted Testing, Ambiguity (Context), Item Response Theory
Chengyu Cui; Chun Wang; Gongjun Xu – Grantee Submission, 2024
Multidimensional item response theory (MIRT) models have generated increasing interest in the psychometrics literature. Efficient approaches for estimating MIRT models with dichotomous responses have been developed, but constructing an equally efficient and robust algorithm for polytomous models has received limited attention. To address this gap,…
Descriptors: Item Response Theory, Accuracy, Simulation, Psychometrics
Monroe, Scott – Journal of Educational and Behavioral Statistics, 2021
This research proposes a new statistic for testing latent variable distribution fit for unidimensional item response theory (IRT) models. If the typical assumption of normality is violated, then item parameter estimates will be biased, and dependent quantities such as IRT score estimates will be adversely affected. The proposed statistic compares…
Descriptors: Item Response Theory, Simulation, Scores, Comparative Analysis
Camilli, Gregory; Fox, Jean-Paul – Journal of Educational and Behavioral Statistics, 2015
An aggregation strategy is proposed to potentially address practical limitations related to computing resources for two-level multidimensional item response theory (MIRT) models with large data sets. The aggregate model is derived by integration of the normal ogive model, and an adaptation of the stochastic approximation expectation maximization…
Descriptors: Factor Analysis, Item Response Theory, Grade 4, Simulation
Sen, Sedat – International Journal of Testing, 2018
Recent research has shown that over-extraction of latent classes can be observed in the Bayesian estimation of the mixed Rasch model when the distribution of ability is non-normal. This study examined the effect of non-normal ability distributions on the number of latent classes in the mixed Rasch model when estimated with maximum likelihood…
Descriptors: Item Response Theory, Comparative Analysis, Computation, Maximum Likelihood Statistics
Linking Errors between Two Populations and Tests: A Case Study in International Surveys in Education
Hastedt, Dirk; Desa, Deana – Practical Assessment, Research & Evaluation, 2015
This simulation study was prompted by the current increased interest in linking national studies to international large-scale assessments (ILSAs) such as IEA's TIMSS, IEA's PIRLS, and OECD's PISA. Linkage in this scenario is achieved by including items from the international assessments in the national assessments on the premise that the average…
Descriptors: Case Studies, Simulation, International Programs, Testing Programs
Sachse, Karoline A.; Roppelt, Alexander; Haag, Nicole – Journal of Educational Measurement, 2016
Trend estimation in international comparative large-scale assessments relies on measurement invariance between countries. However, cross-national differential item functioning (DIF) has been repeatedly documented. We ran a simulation study using national item parameters, which required trends to be computed separately for each country, to compare…
Descriptors: Comparative Analysis, Measurement, Test Bias, Simulation
Öztürk-Gübes, Nese; Kelecioglu, Hülya – Educational Sciences: Theory and Practice, 2016
The purpose of this study was to examine the impact of dimensionality, common-item set format, and different scale linking methods on preserving equity property with mixed-format test equating. Item response theory (IRT) true-score equating (TSE) and IRT observed-score equating (OSE) methods were used under common-item nonequivalent groups design.…
Descriptors: Test Format, Item Response Theory, True Scores, Equated Scores
Svetina, Dubravka; Rutkowski, Leslie – Large-scale Assessments in Education, 2014
Background: When studying student performance across different countries or cultures, an important aspect for comparisons is that of score comparability. In other words, it is imperative that the latent variable (i.e., construct of interest) is understood and measured equivalently across all participating groups or countries, if our inferences…
Descriptors: Test Items, Item Response Theory, Item Analysis, Regression (Statistics)
Lu, Yi – ProQuest LLC, 2012
Cross-national comparisons of responses to survey items are often affected by response style, particularly extreme response style (ERS). ERS varies across cultures, and has the potential to bias inferences in cross-national comparisons. For example, in both PISA and TIMSS assessments, it has been documented that when examined within countries,…
Descriptors: Item Response Theory, Attitude Measures, Response Style (Tests), Cultural Differences