Publication Date
- In 2025: 0
- Since 2024: 0
- Since 2021 (last 5 years): 5
- Since 2016 (last 10 years): 10
Descriptor
- Error Patterns: 10
- Monte Carlo Methods: 10
- Simulation: 5
- Models: 4
- Item Response Theory: 3
- Regression (Statistics): 3
- Sample Size: 3
- Sampling: 3
- Statistical Inference: 3
- Test Items: 3
- Computation: 2
Source
- Journal of Experimental…: 3
- Educational and Psychological…: 2
- Journal of Educational…: 2
- Annenberg Institute for…: 1
- International Journal of…: 1
- National Center for Education…: 1
Author
- Basman, Munevver: 1
- Choi, Seung W.: 1
- Deke, John: 1
- Gallitto, Elena: 1
- Han, Suhwa: 1
- Heyvaert, Mieke: 1
- Huang, Francis L.: 1
- James S. Kim: 1
- Joo, Seang-Hwane: 1
- Joshua B. Gilbert: 1
- Kautz, Tim: 1
Publication Type
- Journal Articles: 8
- Reports - Research: 8
- Reports - Evaluative: 2
- Numerical/Quantitative Data: 1
Education Level
- Early Childhood Education: 1
- Elementary Education: 1
- Grade 3: 1
- Primary Education: 1
Huang, Francis L. – Journal of Experimental Education, 2022
Experiments in psychology or education often use logistic regression models (LRMs) when analyzing binary outcomes. However, a challenge with LRMs is that results are generally difficult to understand. We present alternatives to LRMs in the analysis of experiments and discuss the linear probability model, the log-binomial model, and the modified…
Descriptors: Regression (Statistics), Monte Carlo Methods, Probability, Error Patterns
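The linear probability model this entry points to can be illustrated with a minimal simulation (hypothetical data and effect sizes, not drawn from the article): an OLS fit to a binary outcome estimates the risk difference directly, which is the interpretability advantage over logistic coefficients.

```python
import numpy as np

rng = np.random.default_rng(0)
n = 100_000

# Hypothetical experiment: treatment raises P(Y=1) from 0.30 to 0.50.
t = rng.integers(0, 2, n)                       # 0/1 treatment indicator
y = (rng.random(n) < 0.30 + 0.20 * t).astype(float)

# Linear probability model: OLS of the binary outcome on treatment.
# The slope is the risk difference, in directly interpretable probability units.
X = np.column_stack([np.ones(n), t])
intercept, slope = np.linalg.lstsq(X, y, rcond=None)[0]

print(f"estimated risk difference: {slope:.3f}")   # close to 0.20
```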
Joo, Seang-Hwane; Lee, Philseok – Journal of Educational Measurement, 2022
This study proposes a new Bayesian differential item functioning (DIF) detection method using posterior predictive model checking (PPMC). Item fit measures including infit, outfit, observed score distribution (OSD), and Q1 were considered as discrepancy statistics for the PPMC DIF methods. The performance of the PPMC DIF method was…
Descriptors: Test Items, Bayesian Statistics, Monte Carlo Methods, Prediction
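The PPMC logic behind this entry can be sketched outside the DIF context with a toy Beta-Binomial example (hypothetical data; the article's discrepancy statistics are item-fit measures such as infit and outfit, not the simple count used here): draw parameters from the posterior, simulate replicated data, and compare a discrepancy statistic between replications and the observed data.

```python
import numpy as np

rng = np.random.default_rng(1)

# Observed item responses for one group (1 = correct); hypothetical data.
y_obs = rng.random(200) < 0.6
k, n = int(y_obs.sum()), y_obs.size

# Posterior for the success probability under a Beta(1, 1) prior.
post_a, post_b = 1 + k, 1 + n - k

# PPMC: simulate replicated datasets from posterior draws and locate the
# observed discrepancy (here, the success count) in that reference distribution.
draws = 5_000
theta = rng.beta(post_a, post_b, draws)
k_rep = rng.binomial(n, theta)
ppp = np.mean(k_rep >= k)       # posterior predictive p-value

print(f"posterior predictive p-value: {ppp:.2f}")
```

With a well-specified model the posterior predictive p-value should sit near 0.5; extreme values flag misfit.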
Lee, Sooyong; Han, Suhwa; Choi, Seung W. – Educational and Psychological Measurement, 2022
Response data containing an excessive number of zeros are referred to as zero-inflated data. When differential item functioning (DIF) detection is of interest, zero-inflation can attenuate DIF effects in the total sample and lead to underdetection of DIF items. The current study presents a DIF detection procedure for response data with excess…
Descriptors: Test Bias, Monte Carlo Methods, Simulation, Models
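The zero-inflation described here can be made concrete with a small simulation (hypothetical mixing proportion and rate, not from the study): a mixture of structural zeros and an ordinary count distribution produces far more zeros than a single-component model would predict.

```python
import numpy as np

rng = np.random.default_rng(2)
n = 50_000

# Hypothetical zero-inflated counts: with probability pi an observation is a
# "structural" zero; otherwise it is drawn from a Poisson distribution.
pi, lam = 0.3, 2.0
structural_zero = rng.random(n) < pi
counts = np.where(structural_zero, 0, rng.poisson(lam, n))

observed_zero_rate = np.mean(counts == 0)
poisson_zero_rate = np.exp(-counts.mean())   # zeros a plain Poisson fit implies

print(f"observed zeros:        {observed_zero_rate:.3f}")
print(f"Poisson-implied zeros: {poisson_zero_rate:.3f}")
```

The gap between the two rates is the excess-zero signal that, per the abstract, can attenuate DIF effects when ignored.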
Basman, Munevver – International Journal of Assessment Tools in Education, 2023
Ensuring the validity of a test requires checking that all items yield similar results across different groups of individuals. However, differential item functioning (DIF) occurs when individuals with equal ability levels from different groups perform differently on the same test item. Based on Item Response Theory and Classic Test…
Descriptors: Test Bias, Test Items, Test Validity, Item Response Theory
Joshua B. Gilbert; James S. Kim; Luke W. Miratrix – Annenberg Institute for School Reform at Brown University, 2022
Analyses that reveal how treatment effects vary allow researchers, practitioners, and policymakers to better understand the efficacy of educational interventions. In practice, however, standard statistical methods for addressing Heterogeneous Treatment Effects (HTE) fail to address the HTE that may exist within outcome measures. In this study, we…
Descriptors: Item Response Theory, Models, Formative Evaluation, Statistical Inference
Sinharay, Sandip – Journal of Educational Measurement, 2016
De la Torre and Deng suggested a resampling-based approach for person-fit assessment (PFA). The approach involves the use of the [math equation unavailable] statistic, a corrected expected a posteriori estimate of the examinee ability, and the Monte Carlo (MC) resampling method. The Type I error rate of the approach was closer to the nominal level…
Descriptors: Sampling, Research Methodology, Error Patterns, Monte Carlo Methods
Heyvaert, Mieke; Moeyaert, Mariola; Verkempynck, Paul; Van den Noortgate, Wim; Vervloet, Marlies; Ugille, Maaike; Onghena, Patrick – Journal of Experimental Education, 2017
This article reports on a Monte Carlo simulation study, evaluating two approaches for testing the intervention effect in replicated randomized AB designs: two-level hierarchical linear modeling (HLM) and using the additive method to combine randomization test "p" values (RTcombiP). Four factors were manipulated: mean intervention effect,…
Descriptors: Monte Carlo Methods, Simulation, Intervention, Replication (Evaluation)
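The randomization-test side of this comparison can be sketched for a single case (hypothetical AB data; the additive method for combining p-values across replications is not shown): under the null, every admissible intervention start point is equally likely, so the observed phase-mean difference is referred to the distribution over all admissible start points.

```python
import numpy as np

rng = np.random.default_rng(3)

# Hypothetical single-case AB data: 20 sessions, intervention begins at session 10.
scores = np.concatenate([rng.normal(5, 1, 10), rng.normal(8, 1, 10)])
start = 10

def ab_stat(y, s):
    """Mean difference between intervention (B) and baseline (A) phases."""
    return y[s:].mean() - y[:s].mean()

# Randomization test: recompute the statistic for every start point the
# design would have allowed, and locate the observed value among them.
observed = ab_stat(scores, start)
admissible = range(5, 16)
null_stats = [ab_stat(scores, s) for s in admissible]
p = np.mean([abs(t) >= abs(observed) for t in null_stats])

print(f"randomization p-value: {p:.3f}")
```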
Leth-Steensen, Craig; Gallitto, Elena – Educational and Psychological Measurement, 2016
A large number of approaches have been proposed for estimating and testing the significance of indirect effects in mediation models. In this study, four sets of Monte Carlo simulations involving full latent variable structural equation models were run in order to contrast the effectiveness of the currently popular bias-corrected bootstrapping…
Descriptors: Mediation Theory, Structural Equation Models, Monte Carlo Methods, Simulation
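The bootstrapped indirect effect at the center of this study can be sketched with observed-variable regressions rather than the full latent-variable models the authors simulate (hypothetical data and path values; percentile rather than bias-corrected intervals, for brevity):

```python
import numpy as np

rng = np.random.default_rng(4)
n = 500

# Hypothetical mediation data: X -> M -> Y with paths a = 0.5, b = 0.4.
x = rng.normal(size=n)
m = 0.5 * x + rng.normal(size=n)
y = 0.4 * m + rng.normal(size=n)

def indirect(x, m, y):
    """Product-of-coefficients estimate a*b from two simple OLS regressions."""
    a = np.polyfit(x, m, 1)[0]
    b = np.polyfit(m, y, 1)[0]
    return a * b

# Percentile bootstrap: resample cases, re-estimate a*b, take the 2.5/97.5 tails.
boots = []
for _ in range(2000):
    idx = rng.integers(0, n, n)
    boots.append(indirect(x[idx], m[idx], y[idx]))
lo, hi = np.percentile(boots, [2.5, 97.5])

point = indirect(x, m, y)
print(f"indirect effect: {point:.3f}, 95% CI [{lo:.3f}, {hi:.3f}]")
```

A confidence interval excluding zero is the usual criterion for a significant indirect effect.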
Deke, John; Wei, Thomas; Kautz, Tim – National Center for Education Evaluation and Regional Assistance, 2017
Evaluators of education interventions are increasingly designing studies to detect impacts much smaller than the 0.20 standard deviations that Cohen (1988) characterized as "small." While the need to detect smaller impacts is based on compelling arguments that such impacts are substantively meaningful, the drive to detect smaller impacts…
Descriptors: Intervention, Educational Research, Research Problems, Statistical Bias
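The design tension this report describes follows from the minimum detectable effect size (MDES) formula; a minimal sketch for an individually randomized two-arm trial with no covariate adjustment (function name and defaults are illustrative, not from the report):

```python
from math import sqrt
from statistics import NormalDist

def mdes(n_total, p_treat=0.5, alpha=0.05, power=0.80):
    """Minimum detectable effect size, in standard-deviation units, for an
    individually randomized two-arm trial without covariate adjustment."""
    z = NormalDist().inv_cdf
    multiplier = z(1 - alpha / 2) + z(power)   # ~2.80 for alpha=.05, power=.80
    return multiplier * sqrt(1 / (p_treat * (1 - p_treat) * n_total))

# Halving the detectable impact requires roughly quadrupling the sample.
print(f"N=800:  MDES = {mdes(800):.2f}")    # about 0.20 SD
print(f"N=3200: MDES = {mdes(3200):.2f}")   # about 0.10 SD
```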
Schoeneberger, Jason A. – Journal of Experimental Education, 2016
The design of research studies utilizing binary multilevel models must necessarily incorporate knowledge of multiple factors, including estimation method, variance component size, or number of predictors, in addition to sample sizes. This Monte Carlo study examined the performance of random effect binary outcome multilevel models under varying…
Descriptors: Sample Size, Models, Computation, Predictor Variables
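The random-effect binary multilevel models examined here can be sketched with a quick data-generating simulation (hypothetical cluster counts and variance component; estimation itself is omitted), including the latent-scale intraclass correlation that such Monte Carlo designs typically manipulate:

```python
import numpy as np

rng = np.random.default_rng(5)

# Hypothetical random-intercept logistic model: students nested in schools.
n_schools, n_per = 200, 30
tau2 = 0.5                                     # school-level intercept variance
u = rng.normal(0, np.sqrt(tau2), n_schools)    # school random effects
eta = np.repeat(u, n_per) - 0.5                # linear predictor, fixed intercept -0.5
y = rng.random(n_schools * n_per) < 1 / (1 + np.exp(-eta))

# On the latent-logistic scale the intraclass correlation is
# ICC = tau2 / (tau2 + pi^2 / 3), since the level-1 residual of the
# standard logistic distribution has variance pi^2 / 3.
icc = tau2 / (tau2 + np.pi**2 / 3)
print(f"latent-scale ICC: {icc:.3f}")
```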