Publication Date
In 2025: 0
Since 2024: 0
Since 2021 (last 5 years): 5
Since 2016 (last 10 years): 10
Since 2006 (last 20 years): 30
Descriptor
Error Patterns: 30
Monte Carlo Methods: 30
Simulation: 19
Evaluation Methods: 14
Computation: 9
Regression (Statistics): 8
Sample Size: 8
Models: 7
Statistical Analysis: 7
Correlation: 5
Effect Size: 5
Author
Harring, Jeffrey R.: 2
Onghena, Patrick: 2
Pituch, Keenan A.: 2
Weiss, Brandi A.: 2
Ato, Manuel: 1
Basman, Munevver: 1
Bishara, Anthony J.: 1
Bolt, Daniel M.: 1
Bulte, Isis: 1
Choi, Seung W.: 1
Deke, John: 1
Publication Type
Journal Articles: 26
Reports - Research: 20
Reports - Evaluative: 6
Dissertations/Theses -…: 2
Reports - Descriptive: 2
Numerical/Quantitative Data: 1
Education Level
Elementary Education: 2
Early Childhood Education: 1
Grade 3: 1
Grade 4: 1
Higher Education: 1
Primary Education: 1
Huang, Francis L. – Journal of Experimental Education, 2022
Experiments in psychology or education often use logistic regression models (LRMs) when analyzing binary outcomes. However, a challenge with LRMs is that results are generally difficult to understand. We present alternatives to LRMs in the analysis of experiments and discuss the linear probability model, the log-binomial model, and the modified…
Descriptors: Regression (Statistics), Monte Carlo Methods, Probability, Error Patterns
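The linear probability model this abstract names can be illustrated with a minimal sketch (the simulated data and the effect sizes are illustrative assumptions, not taken from the study): with a single binary treatment indicator, the LPM slope is simply the difference in outcome proportions between groups, a risk difference that reads directly in probability units rather than as a log-odds ratio.

```python
import random

random.seed(42)

# Hypothetical simulated experiment with a binary treatment and a binary
# outcome. Assumed true success probabilities: control 0.30, treated 0.45.
n = 10_000
treat = [i % 2 for i in range(n)]
y = [1 if random.random() < (0.45 if t else 0.30) else 0 for t in treat]

# With one binary regressor, the linear probability model's slope reduces
# to a difference in group means: an estimated risk difference.
n1 = sum(treat)
p1 = sum(yi for yi, t in zip(y, treat) if t) / n1
p0 = sum(yi for yi, t in zip(y, treat) if not t) / (n - n1)
risk_difference = p1 - p0
print(risk_difference)  # should land near the assumed true value of 0.15
```

The same quantity from a logistic regression would be a log-odds ratio that must be back-transformed before it speaks in probabilities, which is the interpretability gap the abstract raises.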
Joo, Seang-Hwane; Lee, Philseok – Journal of Educational Measurement, 2022
This study proposes a new Bayesian differential item functioning (DIF) detection method using posterior predictive model checking (PPMC). Item fit measures including infit, outfit, observed score distribution (OSD), and Q1 were considered as discrepancy statistics for the PPMC DIF methods. The performance of the PPMC DIF method was…
Descriptors: Test Items, Bayesian Statistics, Monte Carlo Methods, Prediction
Lee, Sooyong; Han, Suhwa; Choi, Seung W. – Educational and Psychological Measurement, 2022
Response data containing an excessive number of zeros are referred to as zero-inflated data. When differential item functioning (DIF) detection is of interest, zero-inflation can attenuate DIF effects in the total sample and lead to underdetection of DIF items. The current study presents a DIF detection procedure for response data with excess…
Descriptors: Test Bias, Monte Carlo Methods, Simulation, Models
Basman, Munevver – International Journal of Assessment Tools in Education, 2023
Ensuring the validity of a test requires checking that all items yield similar results across different groups of individuals. However, differential item functioning (DIF) occurs when individuals with equal ability levels from different groups obtain different results on the same test item. Based on Item Response Theory and Classic Test…
Descriptors: Test Bias, Test Items, Test Validity, Item Response Theory
Joshua B. Gilbert; James S. Kim; Luke W. Miratrix – Annenberg Institute for School Reform at Brown University, 2022
Analyses that reveal how treatment effects vary allow researchers, practitioners, and policymakers to better understand the efficacy of educational interventions. In practice, however, standard statistical methods for addressing Heterogeneous Treatment Effects (HTE) fail to address the HTE that may exist within outcome measures. In this study, we…
Descriptors: Item Response Theory, Models, Formative Evaluation, Statistical Inference
Sinharay, Sandip – Journal of Educational Measurement, 2016
De la Torre and Deng suggested a resampling-based approach for person-fit assessment (PFA). The approach involves the use of the [math equation unavailable] statistic, a corrected expected a posteriori estimate of the examinee ability, and the Monte Carlo (MC) resampling method. The Type I error rate of the approach was closer to the nominal level…
Descriptors: Sampling, Research Methodology, Error Patterns, Monte Carlo Methods
Heyvaert, Mieke; Moeyaert, Mariola; Verkempynck, Paul; Van den Noortgate, Wim; Vervloet, Marlies; Ugille, Maaike; Onghena, Patrick – Journal of Experimental Education, 2017
This article reports on a Monte Carlo simulation study, evaluating two approaches for testing the intervention effect in replicated randomized AB designs: two-level hierarchical linear modeling (HLM) and using the additive method to combine randomization test "p" values (RTcombiP). Four factors were manipulated: mean intervention effect,…
Descriptors: Monte Carlo Methods, Simulation, Intervention, Replication (Evaluation)
Leth-Steensen, Craig; Gallitto, Elena – Educational and Psychological Measurement, 2016
A large number of approaches have been proposed for estimating and testing the significance of indirect effects in mediation models. In this study, four sets of Monte Carlo simulations involving full latent variable structural equation models were run in order to contrast the effectiveness of the currently popular bias-corrected bootstrapping…
Descriptors: Mediation Theory, Structural Equation Models, Monte Carlo Methods, Simulation
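As a rough illustration of the bootstrapping idea this abstract refers to (a plain percentile bootstrap rather than the bias-corrected variant, on simulated observed variables rather than a full latent-variable model; all path values are assumptions), the indirect effect in a simple X → M → Y mediation model is the product of the a-path and b-path slopes, and its sampling uncertainty can be bootstrapped by resampling cases:

```python
import random

random.seed(1)

# Hypothetical mediation data with assumed paths a = 0.5, b = 0.7,
# so the true indirect effect a*b = 0.35.
n = 500
X = [random.gauss(0, 1) for _ in range(n)]
M = [0.5 * x + random.gauss(0, 1) for x in X]
Y = [0.7 * m + 0.2 * x + random.gauss(0, 1) for m, x in zip(M, X)]

def mean(v):
    return sum(v) / len(v)

def cov(u, v):
    mu, mv = mean(u), mean(v)
    return sum((a - mu) * (b - mv) for a, b in zip(u, v)) / len(u)

def indirect(x, m, y):
    # a-path: slope of M on X; b-path: partial slope of Y on M controlling X.
    a = cov(x, m) / cov(x, x)
    det = cov(m, m) * cov(x, x) - cov(x, m) ** 2
    b = (cov(m, y) * cov(x, x) - cov(x, y) * cov(x, m)) / det
    return a * b

ab_hat = indirect(X, M, Y)

# Percentile bootstrap: resample cases with replacement, recompute a*b.
boot = []
for _ in range(1000):
    idx = [random.randrange(n) for _ in range(n)]
    boot.append(indirect([X[i] for i in idx],
                         [M[i] for i in idx],
                         [Y[i] for i in idx]))
boot.sort()
ci = (boot[24], boot[974])  # 95% percentile interval
```

The bias-corrected variant the abstract mentions additionally shifts the percentile cutoffs by the bootstrap distribution's median bias; the resampling machinery is the same.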
Harring, Jeffrey R.; Weiss, Brandi A.; Li, Ming – Educational and Psychological Measurement, 2015
Several studies have stressed the importance of simultaneously estimating interaction and quadratic effects in multiple regression analyses, even if theory only suggests an interaction effect should be present. Specifically, past studies suggested that failing to simultaneously include quadratic effects when testing for interaction effects could…
Descriptors: Structural Equation Models, Statistical Analysis, Monte Carlo Methods, Computation
Deke, John; Wei, Thomas; Kautz, Tim – National Center for Education Evaluation and Regional Assistance, 2017
Evaluators of education interventions are increasingly designing studies to detect impacts much smaller than the 0.20 standard deviations that Cohen (1988) characterized as "small." While the need to detect smaller impacts is based on compelling arguments that such impacts are substantively meaningful, the drive to detect smaller impacts…
Descriptors: Intervention, Educational Research, Research Problems, Statistical Bias
Schoeneberger, Jason A. – Journal of Experimental Education, 2016
The design of research studies utilizing binary multilevel models must necessarily incorporate knowledge of multiple factors, including estimation method, variance component size, or number of predictors, in addition to sample sizes. This Monte Carlo study examined the performance of random effect binary outcome multilevel models under varying…
Descriptors: Sample Size, Models, Computation, Predictor Variables
Bishara, Anthony J.; Hittner, James B. – Educational and Psychological Measurement, 2015
It is more common for educational and psychological data to be nonnormal than to be approximately normal. This tendency may lead to bias and error in point estimates of the Pearson correlation coefficient. In a series of Monte Carlo simulations, the Pearson correlation was examined under conditions of normal and nonnormal data, and it was compared…
Descriptors: Research Methodology, Monte Carlo Methods, Correlation, Simulation
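A minimal Monte Carlo sketch of the phenomenon this abstract studies (sample sizes, replication counts, and the lognormal transform are illustrative assumptions): when the same underlying correlation is pushed through heavily skewed margins, the sample Pearson correlation is systematically attenuated relative to the normal-data case.

```python
import random, math

random.seed(7)

def pearson(x, y):
    mx, my = sum(x) / len(x), sum(y) / len(y)
    sxy = sum((a - mx) * (b - my) for a, b in zip(x, y))
    sxx = sum((a - mx) ** 2 for a in x)
    syy = sum((b - my) ** 2 for b in y)
    return sxy / math.sqrt(sxx * syy)

def simulate(transform, rho=0.5, n=300, reps=200):
    # Average the sample Pearson r over Monte Carlo replications.
    rs = []
    for _ in range(reps):
        x = [random.gauss(0, 1) for _ in range(n)]
        z = [rho * xi + math.sqrt(1 - rho ** 2) * random.gauss(0, 1)
             for xi in x]
        rs.append(pearson([transform(v) for v in x],
                          [transform(v) for v in z]))
    return sum(rs) / reps

r_normal = simulate(lambda v: v)  # bivariate normal, underlying rho = 0.5
r_skewed = simulate(math.exp)     # lognormal margins, same underlying rho
```

For lognormal margins the population Pearson correlation is (e^ρ − 1)/(e − 1) ≈ 0.38 when ρ = 0.5, so the gap between `r_normal` and `r_skewed` here is real attenuation, not sampling noise.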
Williams, Ryan T. – ProQuest LLC, 2012
Combining multiple regression estimates with meta-analysis has continued to be a difficult task. A variety of methods have been proposed and used to combine multiple regression slope estimates with meta-analysis; however, most of these methods have serious methodological and practical limitations. The purpose of this study was to explore the use…
Descriptors: Multiple Regression Analysis, Meta Analysis, Evaluation Methods, Computation
Kim, Eun Sook; Kwok, Oi-man; Yoon, Myeongsun – Structural Equation Modeling: A Multidisciplinary Journal, 2012
Testing factorial invariance has recently gained more attention in different social science disciplines. Nevertheless, when examining factorial invariance, it is generally assumed that the observations are independent of each other, which might not be always true. In this study, we examined the impact of testing factorial invariance in multilevel…
Descriptors: Monte Carlo Methods, Testing, Social Science Research, Factor Structure
Tay, Louis; Drasgow, Fritz – Educational and Psychological Measurement, 2012
Two Monte Carlo simulation studies investigated the effectiveness of the mean adjusted X²/df statistic proposed by Drasgow and colleagues and, because of problems with the method, a new approach for assessing the goodness of fit of an item response theory model was developed. It has been previously recommended that mean adjusted…
Descriptors: Test Length, Monte Carlo Methods, Goodness of Fit, Item Response Theory