Wang, Yan; Kim, Eunsook; Joo, Seang-Hwane; Chun, Seokjoon; Alamri, Abeer; Lee, Philseok; Stark, Stephen – Journal of Experimental Education, 2022
Multilevel latent class analysis (MLCA) has been increasingly used to investigate unobserved population heterogeneity while taking data dependency into account. Nonparametric MLCA has gained popularity because it can classify both individuals and clusters into latent classes. This study demonstrated the need to relax the…
Descriptors: Nonparametric Statistics, Hierarchical Linear Modeling, Monte Carlo Methods, Simulation
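The latent class machinery behind this entry can be illustrated at a single level. Below is a minimal sketch, assuming binary items and a plain (non-multilevel) latent class model fit by EM; `fit_lca` and the simulated two-class data are illustrative assumptions, not the authors' MLCA implementation.

```python
import numpy as np

def fit_lca(Y, n_classes=2, n_iter=100, seed=0):
    """EM for a simple latent class model with binary items.

    Y: (n_persons, n_items) 0/1 array. Returns class proportions pi,
    item response probabilities p, and the log-likelihood trace.
    """
    rng = np.random.default_rng(seed)
    n, J = Y.shape
    pi = np.full(n_classes, 1.0 / n_classes)          # class proportions
    p = rng.uniform(0.25, 0.75, size=(n_classes, J))  # P(item = 1 | class)
    ll_trace = []
    for _ in range(n_iter):
        # E-step: posterior class membership for each person
        log_lik = (Y[:, None, :] * np.log(p) +
                   (1 - Y[:, None, :]) * np.log(1 - p)).sum(axis=2)
        log_joint = np.log(pi) + log_lik              # (n, n_classes)
        log_norm = np.logaddexp.reduce(log_joint, axis=1)
        resp = np.exp(log_joint - log_norm[:, None])
        ll_trace.append(log_norm.sum())
        # M-step: update proportions and item probabilities
        pi = resp.mean(axis=0)
        p = (resp.T @ Y) / resp.sum(axis=0)[:, None]
        p = np.clip(p, 1e-6, 1 - 1e-6)
    return pi, p, ll_trace

# Simulate two well-separated classes and fit the model
rng = np.random.default_rng(1)
true_p = np.array([[0.9] * 6, [0.1] * 6])
z = rng.integers(0, 2, size=500)
Y = (rng.random((500, 6)) < true_p[z]).astype(int)
pi, p, ll = fit_lca(Y)
```

The nonparametric MLCA the abstract describes adds a second classification step, assigning clusters (e.g., schools) as well as individuals to latent classes; this sketch only shows the individual-level EM core.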
Giada Spaccapanico Proietti; Mariagiulia Matteucci; Stefania Mignani; Bernard P. Veldkamp – Journal of Educational and Behavioral Statistics, 2024
Classical automated test assembly (ATA) methods assume fixed and known coefficients for the constraints and the objective function. This assumption does not hold for estimates of item response theory parameters, which are crucial elements in classical test assembly models. To account for uncertainty in ATA, we propose a chance-constrained…
Descriptors: Automation, Computer Assisted Testing, Ambiguity (Context), Item Response Theory
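The chance-constrained idea, treating item parameters as uncertain and optimizing a lower quantile of the objective rather than a point estimate, can be sketched by brute force on a tiny pool. All parameter values and names below are hypothetical; a real ATA system would use mixed-integer programming rather than enumeration.

```python
import numpy as np
from itertools import combinations

rng = np.random.default_rng(0)
pool, length, draws = 12, 5, 200

# Hypothetical 2PL item parameter estimates with standard errors
a_hat = rng.uniform(0.8, 1.8, pool); a_se = 0.15
b_hat = rng.normal(0.0, 1.0, pool);  b_se = 0.20

# Sample plausible parameter values to represent estimation uncertainty
a_draws = rng.normal(a_hat, a_se, (draws, pool))
b_draws = rng.normal(b_hat, b_se, (draws, pool))

def info(theta, a, b):
    """2PL Fisher information at ability theta, per item."""
    prob = 1.0 / (1.0 + np.exp(-a * (theta - b)))
    return a**2 * prob * (1 - prob)

item_info = info(0.0, a_draws, b_draws)   # (draws, pool)

best, best_val = None, -np.inf
for subset in combinations(range(pool), length):
    # Chance-constrained criterion: maximize the 5th percentile of
    # test information over the parameter draws, not its point estimate
    val = np.percentile(item_info[:, list(subset)].sum(axis=1), 5)
    if val > best_val:
        best, best_val = subset, val
```

Maximizing a low quantile guards against assembling a form that only looks informative under optimistic parameter estimates.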
Chengyu Cui; Chun Wang; Gongjun Xu – Grantee Submission, 2024
Multidimensional item response theory (MIRT) models have generated increasing interest in the psychometrics literature. Efficient approaches for estimating MIRT models with dichotomous responses have been developed, but constructing an equally efficient and robust algorithm for polytomous models has received limited attention. To address this gap,…
Descriptors: Item Response Theory, Accuracy, Simulation, Psychometrics
Monroe, Scott – Journal of Educational and Behavioral Statistics, 2021
This research proposes a new statistic for testing latent variable distribution fit for unidimensional item response theory (IRT) models. If the typical assumption of normality is violated, then item parameter estimates will be biased, and dependent quantities such as IRT score estimates will be adversely affected. The proposed statistic compares…
Descriptors: Item Response Theory, Simulation, Scores, Comparative Analysis
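The normality assumption at issue can be made concrete with a small simulation: generate 2PL responses under a normal and a skewed latent distribution with matched mean and variance. `p_2pl` and all parameter values are illustrative assumptions, not the proposed fit statistic itself.

```python
import numpy as np

def p_2pl(theta, a, b):
    """2PL item response function: P(correct | theta)."""
    return 1.0 / (1.0 + np.exp(-a * (theta - b)))

rng = np.random.default_rng(0)
n, J = 2000, 10
a = rng.uniform(0.8, 2.0, J)        # discriminations
b = rng.normal(0.0, 1.0, J)         # difficulties

# Normal vs. standardized chi-square latent distributions:
# same mean (0) and variance (1), but the second is right-skewed
theta_norm = rng.normal(0.0, 1.0, n)
theta_skew = (rng.chisquare(4, n) - 4) / np.sqrt(8)

Y_norm = (rng.random((n, J)) < p_2pl(theta_norm[:, None], a, b)).astype(int)
Y_skew = (rng.random((n, J)) < p_2pl(theta_skew[:, None], a, b)).astype(int)

# Observed sum-score distributions, which a latent-distribution fit
# statistic would compare against the model-implied distribution
s_norm = Y_norm.sum(axis=1)
s_skew = Y_skew.sum(axis=1)
```

Fitting a standard IRT model that assumes normal theta to `Y_skew` is the misspecification scenario in which the abstract warns of biased item parameters and score estimates.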
Grund, Simon; Lüdtke, Oliver; Robitzsch, Alexander – Journal of Educational and Behavioral Statistics, 2021
Large-scale assessments (LSAs) use Mislevy's "plausible value" (PV) approach to relate student proficiency to noncognitive variables administered in a background questionnaire. This method requires background variables to be completely observed, a requirement that is seldom fulfilled. In this article, we evaluate and compare the…
Descriptors: Data Analysis, Error of Measurement, Research Problems, Statistical Inference
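Once plausible values (or imputations) are in hand, per-PV analysis results are combined with Rubin's rules: average the point estimates, and add the between-imputation variance to the average within-imputation variance. A minimal sketch with made-up estimates (the numbers are purely illustrative):

```python
import numpy as np

def pool_rubin(estimates, variances):
    """Combine point estimates and sampling variances across
    plausible values / imputations using Rubin's rules."""
    m = len(estimates)
    qbar = np.mean(estimates)             # pooled point estimate
    ubar = np.mean(variances)             # within-imputation variance
    b = np.var(estimates, ddof=1)         # between-imputation variance
    total_var = ubar + (1 + 1 / m) * b    # total variance
    return qbar, total_var

# e.g., a regression slope estimated once per plausible value
est = [0.52, 0.48, 0.55, 0.50, 0.51]
var = [0.010, 0.011, 0.009, 0.010, 0.012]
qbar, tvar = pool_rubin(est, var)
```

The requirement the abstract highlights is that this machinery assumes the background variables entering the PV model are fully observed, which motivates the imputation methods being compared.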
Cheng, Ying; Shao, Can; Lathrop, Quinn N. – Educational and Psychological Measurement, 2016
Due to its flexibility, the multiple-indicator, multiple-causes (MIMIC) model has become an increasingly popular method for the detection of differential item functioning (DIF). In this article, we propose the mediated MIMIC model method to uncover the underlying mechanism of DIF. This method extends the usual MIMIC model by including one variable…
Descriptors: Test Bias, Models, Simulation, Sample Size
Camilli, Gregory; Fox, Jean-Paul – Journal of Educational and Behavioral Statistics, 2015
An aggregation strategy is proposed to potentially address practical limitations related to computing resources for two-level multidimensional item response theory (MIRT) models with large data sets. The aggregate model is derived by integration of the normal ogive model, and an adaptation of the stochastic approximation expectation maximization…
Descriptors: Factor Analysis, Item Response Theory, Grade 4, Simulation
Sen, Sedat – International Journal of Testing, 2018
Recent research has shown that over-extraction of latent classes can be observed in the Bayesian estimation of the mixed Rasch model when the distribution of ability is non-normal. This study examined the effect of non-normal ability distributions on the number of latent classes in the mixed Rasch model when estimated with maximum likelihood…
Descriptors: Item Response Theory, Comparative Analysis, Computation, Maximum Likelihood Statistics
Linking Errors between Two Populations and Tests: A Case Study in International Surveys in Education
Hastedt, Dirk; Desa, Deana – Practical Assessment, Research & Evaluation, 2015
This simulation study was prompted by the current increased interest in linking national studies to international large-scale assessments (ILSAs) such as IEA's TIMSS, IEA's PIRLS, and OECD's PISA. Linkage in this scenario is achieved by including items from the international assessments in the national assessments on the premise that the average…
Descriptors: Case Studies, Simulation, International Programs, Testing Programs
Cheong, Yuk Fai; Kamata, Akihito – Applied Measurement in Education, 2013
In this article, we discuss and illustrate two centering and anchoring options available in differential item functioning (DIF) detection studies based on the hierarchical generalized linear and generalized linear mixed modeling frameworks. We compared and contrasted the assumptions of the two options, and examined the properties of their DIF…
Descriptors: Test Bias, Hierarchical Linear Modeling, Comparative Analysis, Test Items
Jin, Ying; Kang, Minsoo – Large-scale Assessments in Education, 2016
Background: The current study compared four differential item functioning (DIF) methods in a simulation study to examine how well they account for dual dependency (i.e., person and item clustering effects) simultaneously, an issue not sufficiently studied in the current DIF literature. The four methods compared are logistic…
Descriptors: Comparative Analysis, Test Bias, Simulation, Regression (Statistics)
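Logistic regression DIF, one of the method families named here, tests whether group membership still predicts an item response after conditioning on the matching variable (typically the total score), via a likelihood-ratio test of nested models. A self-contained sketch with simulated uniform DIF; the data, effect sizes, and helper names are hypothetical, and this single-level version ignores the clustering effects the study is about.

```python
import numpy as np
from scipy.optimize import minimize
from scipy.stats import chi2

def neg_loglik(beta, X, y):
    """Negative log-likelihood of a logistic regression."""
    eta = X @ beta
    return np.sum(np.logaddexp(0, eta) - y * eta)

def fit(X, y):
    res = minimize(neg_loglik, np.zeros(X.shape[1]), args=(X, y),
                   method="BFGS")
    return res.x, -res.fun

rng = np.random.default_rng(0)
n = 1000
group = rng.integers(0, 2, n)        # reference (0) vs. focal (1) group
score = rng.normal(0.0, 1.0, n)      # matching variable (e.g., total score)

# Simulate uniform DIF: the item is harder for the focal group
eta = 1.2 * score - 0.6 * group
y = (rng.random(n) < 1 / (1 + np.exp(-eta))).astype(int)

X0 = np.column_stack([np.ones(n), score])          # no-DIF model
X1 = np.column_stack([np.ones(n), score, group])   # uniform-DIF model
_, ll0 = fit(X0, y)
_, ll1 = fit(X1, y)
lr = 2 * (ll1 - ll0)                 # likelihood-ratio statistic, df = 1
p_value = chi2.sf(lr, df=1)
```

The multilevel variants compared in the study extend this by adding random effects for person and item clusters, so that the DIF test is not inflated by the dual dependency.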
Si, Yajuan; Reiter, Jerome P. – Journal of Educational and Behavioral Statistics, 2013
In many surveys, the data comprise a large number of categorical variables that suffer from item nonresponse. Standard methods for multiple imputation, like log-linear models or sequential regression imputation, can fail to capture complex dependencies and can be difficult to implement effectively in high dimensions. We present a fully Bayesian,…
Descriptors: Nonparametric Statistics, Bayesian Statistics, Measurement, Evaluation Methods
Pokropek, Artur – Sociological Methods & Research, 2015
This article combines statistical and applied research perspectives to show problems that can arise when measurement error is ignored in multilevel compositional effects analysis. The article focuses on data where the independent variables are constructed measures. Simulation studies are conducted evaluating methods that could overcome the…
Descriptors: Error of Measurement, Hierarchical Linear Modeling, Simulation, Evaluation Methods
Sachse, Karoline A.; Roppelt, Alexander; Haag, Nicole – Journal of Educational Measurement, 2016
Trend estimation in international comparative large-scale assessments relies on measurement invariance between countries. However, cross-national differential item functioning (DIF) has been repeatedly documented. We ran a simulation study using national item parameters, which required trends to be computed separately for each country, to compare…
Descriptors: Comparative Analysis, Measurement, Test Bias, Simulation
Öztürk-Gübes, Nese; Kelecioglu, Hülya – Educational Sciences: Theory and Practice, 2016
The purpose of this study was to examine the impact of dimensionality, common-item set format, and different scale linking methods on preserving equity property with mixed-format test equating. Item response theory (IRT) true-score equating (TSE) and IRT observed-score equating (OSE) methods were used under common-item nonequivalent groups design.…
Descriptors: Test Format, Item Response Theory, True Scores, Equated Scores
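IRT true-score equating, one of the two methods compared in this entry, maps a score on form X to ability via the test characteristic curve (TCC), then evaluates form Y's TCC at that ability. A minimal 2PL sketch with two hypothetical five-item forms (all parameter values invented for illustration):

```python
import numpy as np
from scipy.optimize import brentq

def tcc(theta, a, b):
    """Test characteristic curve: expected sum score at ability theta."""
    return float(np.sum(1.0 / (1.0 + np.exp(-a * (theta - b)))))

def true_score_equate(score_x, ax, bx, ay, by):
    """Map a true score on form X to the equivalent score on form Y:
    solve TCC_X(theta) = score_x for theta, then evaluate TCC_Y."""
    theta = brentq(lambda t: tcc(t, ax, bx) - score_x, -8, 8)
    return tcc(theta, ay, by)

# Two hypothetical 5-item 2PL forms; form Y is slightly harder
ax = np.array([1.0, 1.2, 0.8, 1.5, 1.1])
bx = np.array([-1.0, -0.5, 0.0, 0.5, 1.0])
ay = np.array([1.1, 0.9, 1.3, 1.0, 1.2])
by = bx + 0.3

y_equiv = true_score_equate(2.5, ax, bx, ay, by)
```

Observed-score equating, the other method in the comparison, instead equates the model-implied distributions of observed scores; the study's question is how each interacts with dimensionality and linking method in mixed-format tests.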