Publication Date
In 2025 | 2
Since 2024 | 3
Since 2021 (last 5 years) | 3
Since 2016 (last 10 years) | 5
Since 2006 (last 20 years) | 9
Descriptor
Research Design | 18
Simulation | 18
Monte Carlo Methods | 9
Educational Research | 8
Statistical Analysis | 7
Sample Size | 6
Correlation | 5
Analysis of Covariance | 4
Effect Size | 4
Intervention | 4
Power (Statistics) | 4
Source
Journal of Experimental Education | 18
Author
Onghena, Patrick | 3
Beretvas, S. Natasha | 2
Moeyaert, Mariola | 2
Ugille, Maaike | 2
Van den Noortgate, Wim | 2
Baldwin, Lee | 1
Bulte, Isis | 1
Ferron, John | 1
Ferron, John M. | 1
Kelcey, Ben | 1
Klaassen, Fayette | 1
Publication Type
Journal Articles | 17
Reports - Research | 10
Reports - Evaluative | 6
Reports - Descriptive | 1
Audience
Researchers | 1
Milica Miocevic; Fayette Klaassen; Mariola Moeyaert; Gemma G. M. Geuke – Journal of Experimental Education, 2025
Mediation analysis in Single Case Experimental Designs (SCEDs) evaluates intervention mechanisms for individuals. Despite recent methodological developments, no clear guidelines exist for maximizing power to detect the indirect effect in SCEDs. This study compares frequentist and Bayesian methods, determining (1) minimum required sample size to…
Descriptors: Research Design, Mediation Theory, Statistical Analysis, Simulation
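A minimal sketch of the kind of Monte Carlo power analysis this entry describes, reduced to a simple between-case mediation model tested with Sobel's z; the path values, sample size, and replication count are illustrative, not taken from the article:

```python
import numpy as np
from scipy import stats

def _slope(x, y):
    # OLS slope and its standard error for a simple regression of y on x.
    X = np.column_stack([np.ones_like(x), x])
    beta, *_ = np.linalg.lstsq(X, y, rcond=None)
    resid = y - X @ beta
    s2 = resid @ resid / (len(y) - 2)
    return beta[1], np.sqrt(s2 * np.linalg.inv(X.T @ X)[1, 1])

def indirect_power(n, a=0.4, b=0.4, reps=2000, alpha=0.05, seed=1):
    # Share of replications in which Sobel's z rejects H0: a*b = 0.
    rng = np.random.default_rng(seed)
    crit = stats.norm.ppf(1 - alpha / 2)
    hits = 0
    for _ in range(reps):
        x = rng.normal(size=n)
        m = a * x + rng.normal(size=n)       # path a
        y = b * m + rng.normal(size=n)       # path b (no direct effect here)
        a_hat, sa = _slope(x, m)
        b_hat, sb = _slope(m, y)
        z = a_hat * b_hat / np.sqrt(a_hat**2 * sb**2 + b_hat**2 * sa**2)
        hits += abs(z) > crit
    return hits / reps

print(indirect_power(30))   # raise n until the target power is reached
```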
Huibin Zhang; Zuchao Shen; Walter L. Leite – Journal of Experimental Education, 2025
Cluster-randomized trials have been widely used to evaluate the treatment effects of interventions on student outcomes. When interventions are implemented by teachers, researchers need to account for the nested structure in schools (i.e., students are nested within teachers nested within schools). Schools usually have a very limited number of…
Descriptors: Sample Size, Multivariate Analysis, Randomized Controlled Trials, Correlation
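For context, a back-of-the-envelope planner built on the standard three-level design effect (students within teachers within schools, with assignment at the school level); the intraclass correlations, cell sizes, and function name are invented defaults, not values from the article:

```python
import math
from scipy.stats import norm

def schools_per_arm(delta, rho_school, rho_teacher, teachers=3, students=20,
                    alpha=0.05, power=0.80):
    # Three-level design effect for assignment at the school level:
    #   Deff = 1 + (nJ - 1) * rho_school + (n - 1) * rho_teacher,
    # with J teachers per school and n students per teacher.
    n_school = students * teachers
    deff = 1 + (n_school - 1) * rho_school + (students - 1) * rho_teacher
    z = norm.ppf(1 - alpha / 2) + norm.ppf(power)
    n_srs = 2 * (z / delta) ** 2    # students per arm if independently sampled
    return math.ceil(n_srs * deff / n_school)

print(schools_per_arm(delta=0.25, rho_school=0.15, rho_teacher=0.10))
```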
Kyle Cox; Ben Kelcey; Hannah Luce – Journal of Experimental Education, 2024
Comprehensive evaluation of treatment effects is aided by consideration of moderated effects. In educational research, the combination of natural hierarchical structures and prevalence of group-administered or shared facilitator treatments often produces three-level partially nested data structures. Literature details planning strategies for a…
Descriptors: Randomized Controlled Trials, Monte Carlo Methods, Hierarchical Linear Modeling, Educational Research
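A rough sketch of how partially nested data of this kind can be generated and fit, reduced to two levels for brevity: the random effect enters only through the treatment dummy, so cluster variance exists in the treated arm alone. All sizes and variance components are invented:

```python
import numpy as np
import pandas as pd
import statsmodels.api as sm

rng = np.random.default_rng(7)

# Invented sizes: 20 facilitator-led treatment groups of 5; 100 lone controls.
J, n, n_ctrl = 20, 5, 100
tau, sigma, delta = 0.3, 1.0, 0.5   # group SD, residual SD, treatment effect

g = np.repeat(np.arange(J), n)                       # group ids, treated arm
y_t = delta + rng.normal(0, tau, J)[g] + rng.normal(0, sigma, J * n)
y_c = rng.normal(0, sigma, n_ctrl)                   # controls: no clustering

df = pd.DataFrame({
    "y": np.concatenate([y_t, y_c]),
    "treat": np.r_[np.ones(J * n), np.zeros(n_ctrl)],
    "grp": np.r_[g, J + np.arange(n_ctrl)],          # singleton control "groups"
})

# Random effect through the treatment dummy only: the cluster variance is
# estimated from the treated arm alone (a partially nested mixed model).
m = sm.MixedLM(df["y"], sm.add_constant(df["treat"]), groups=df["grp"],
               exog_re=df[["treat"]])
print(m.fit().summary())
```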
Finch, W. Holmes; Finch, Maria Hernández – Journal of Experimental Education, 2018
Single subject (SS) designs are popular in educational and psychological research. There exist several statistical techniques designed to analyze such data and to address the question of whether an intervention has the desired impact. Recently, researchers have suggested that generalized additive models (GAMs) might be useful for modeling…
Descriptors: Educational Research, Longitudinal Studies, Simulation, Models
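To make the modeling idea concrete, a hedged sketch of a GAM for one simulated subject using statsmodels' GLMGam: a smooth function of time plus a parametric phase (intervention) shift. The series, knot settings, and effect size are illustrative, not the article's:

```python
import numpy as np
from statsmodels.gam.api import GLMGam, BSplines

rng = np.random.default_rng(3)

# Invented single-subject series: 40 sessions, intervention from session 20 on.
t = np.arange(40.0)
phase = (t >= 20).astype(float)
y = 0.05 * t + 1.0 * phase + np.sin(t / 6) + rng.normal(0, 0.5, 40)

# Smooth time trend (B-splines) plus a parametric intervention shift.
smoother = BSplines(t[:, None], df=[6], degree=[3])
exog = np.column_stack([np.ones(40), phase])
res = GLMGam(y, exog=exog, smoother=smoother).fit()
print(res.params[:2])   # intercept and estimated intervention shift
```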
Heyvaert, Mieke; Moeyaert, Mariola; Verkempynck, Paul; Van den Noortgate, Wim; Vervloet, Marlies; Ugille, Maaike; Onghena, Patrick – Journal of Experimental Education, 2017
This article reports on a Monte Carlo simulation study, evaluating two approaches for testing the intervention effect in replicated randomized AB designs: two-level hierarchical linear modeling (HLM) and using the additive method to combine randomization test "p" values (RTcombiP). Four factors were manipulated: mean intervention effect,…
Descriptors: Monte Carlo Methods, Simulation, Intervention, Replication (Evaluation)
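The combining step behind RTcombiP can be sketched directly: under the null hypothesis, randomization-test p values are (approximately) uniform, so their sum follows the Irwin-Hall distribution. A minimal implementation of Edgington's additive method, with invented p values:

```python
from math import comb, factorial, floor

def additive_combined_p(pvalues):
    # Irwin-Hall CDF at the observed sum of p values:
    # P(S <= s) = (1/k!) * sum_{j=0..floor(s)} (-1)^j * C(k,j) * (s-j)^k
    k, s = len(pvalues), sum(pvalues)
    return sum((-1) ** j * comb(k, j) * (s - j) ** k
               for j in range(floor(s) + 1)) / factorial(k)

print(additive_combined_p([0.04, 0.10, 0.07]))   # one p per replicated AB case
```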
Ugille, Maaike; Moeyaert, Mariola; Beretvas, S. Natasha; Ferron, John M.; Van den Noortgate, Wim – Journal of Experimental Education, 2014
A multilevel meta-analysis can combine the results of several single-subject experimental design studies. However, the estimated effects are biased if the effect sizes are standardized and the number of measurement occasions is small. In this study, the authors investigated 4 approaches to correct for this bias. First, the standardized effect…
Descriptors: Effect Size, Statistical Bias, Sample Size, Regression (Statistics)
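For intuition, the best-known correction of this kind is Hedges' small-sample factor for standardized mean differences; the article studies analogous corrections for single-subject effect sizes, where few measurement occasions mean small degrees of freedom:

```python
def hedges_correction(d, df):
    # g = d * (1 - 3 / (4*df - 1)): shrinks d toward zero, and the shrinkage
    # is large exactly when df (driven by the number of occasions) is small.
    return d * (1 - 3 / (4 * df - 1))

print(hedges_correction(0.8, df=8))   # few occasions -> noticeable correction
```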
Manolov, Rumen; Solanas, Antonio; Bulte, Isis; Onghena, Patrick – Journal of Experimental Education, 2010
This study deals with the statistical properties of a randomization test applied to an ABAB design in cases where the desirable random assignment of the points of change in phase is not possible. To obtain information about each possible data division, the authors carried out a conditional Monte Carlo simulation with 100,000 samples for each…
Descriptors: Monte Carlo Methods, Effect Size, Simulation, Evaluation Methods
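A stripped-down version of the underlying test, here for a single AB phase change rather than the full ABAB design and without the conditional step: the statistic is recomputed at every admissible change point, and the p value is the share of assignments at least as extreme as the observed one. Data values are invented:

```python
import numpy as np

def ab_randomization_test(y, observed_start, min_phase=3):
    # p value: share of admissible intervention points whose mean(B) - mean(A)
    # is at least as extreme as the statistic at the observed point.
    starts = range(min_phase, len(y) - min_phase + 1)
    obs = np.mean(y[observed_start:]) - np.mean(y[:observed_start])
    return np.mean([abs(np.mean(y[s:]) - np.mean(y[:s])) >= abs(obs)
                    for s in starts])

rng = np.random.default_rng(0)
y = np.r_[rng.normal(0, 1, 10), rng.normal(1.5, 1, 10)]  # shift after point 10
print(ab_randomization_test(y, observed_start=10))
```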
Luh, Wei-Ming; Guo, Jiin-Huarng – Journal of Experimental Education, 2009
Sample size determination is an important issue in planning research, but size limitations have seldom been discussed in the literature. How to allocate participants across treatment groups to achieve the desired power when one group's size is fixed is thus a practical issue that still needs to be addressed. The authors focused…
Descriptors: Sample Size, Research Methodology, Evaluation Methods, Simulation
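Under a normal approximation to the two-sample t test, the allocation question has a closed-form answer: fixing one group size pins down how large the other must be to hit the target noncentrality. A sketch, with illustrative effect-size and power targets:

```python
from scipy.stats import norm

def n2_for_power(n1, delta, alpha=0.05, power=0.80):
    # Power is reached when 1/n1 + 1/n2 <= (delta / (z_{1-a/2} + z_power))^2;
    # solve for n2, returning None when the target is unreachable at this n1.
    z = norm.ppf(1 - alpha / 2) + norm.ppf(power)
    slack = (delta / z) ** 2 - 1 / n1
    return None if slack <= 0 else int(1 / slack) + 1

print(n2_for_power(n1=30, delta=0.6))   # fixed group of 30, d = 0.6 -> n2 = 80
```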

Rheinheimer, David C.; Penfield, Douglas A. – Journal of Experimental Education, 2001
Studied, through Monte Carlo simulation, the conditions for which analysis of covariance (ANCOVA) does not maintain adequate Type I error rates and power and evaluated some alternative tests. Discusses differences in ANCOVA robustness for balanced and unbalanced designs. (SLD)
Descriptors: Analysis of Covariance, Monte Carlo Methods, Power (Statistics), Research Design
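The kind of robustness check the study performs can be sketched in a few lines: simulate a null treatment effect under an unbalanced design with heterogeneous error variances and count rejections. Pairing the larger variance with the smaller group typically inflates the empirical Type I error. All parameter values are invented:

```python
import numpy as np
import pandas as pd
import statsmodels.formula.api as smf
from statsmodels.stats.anova import anova_lm

rng = np.random.default_rng(11)

def ancova_type1(n=(10, 40), sd=(3.0, 1.0), reps=2000, alpha=0.05):
    # No true group effect: every rejection of the ANCOVA group test is a
    # Type I error. The smaller group gets the larger error variance.
    hits = 0
    for _ in range(reps):
        g = np.repeat([0, 1], n)
        x = rng.normal(size=g.size)
        y = 0.5 * x + rng.normal(0, np.where(g == 0, sd[0], sd[1]))
        df = pd.DataFrame({"y": y, "x": x, "g": g})
        tab = anova_lm(smf.ols("y ~ C(g) + x", data=df).fit(), typ=2)
        hits += tab.loc["C(g)", "PR(>F)"] < alpha
    return hits / reps

print(ancova_type1())   # typically well above the nominal 0.05 here
```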

Klockars, Alan J.; Beretvas, S. Natasha – Journal of Experimental Education, 2001
Compared the Type I error rate and the power to detect differences in slopes and additive treatment effects of analysis of covariance (ANCOVA) and randomized block designs through a Monte Carlo simulation. Results show that the more powerful option in almost all simulations for tests of both slope and means was ANCOVA. (SLD)
Descriptors: Analysis of Covariance, Monte Carlo Methods, Power (Statistics), Research Design
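A toy version of the comparison, with blocking approximated post hoc by quartiles of the covariate rather than formed before assignment as in a true randomized block design; ANCOVA, which uses the covariate fully, usually comes out ahead, consistent with the result summarized above. Parameter values are invented:

```python
import numpy as np
import pandas as pd
import statsmodels.formula.api as smf
from statsmodels.stats.anova import anova_lm

rng = np.random.default_rng(5)

def empirical_power(model, n=40, rho=0.6, delta=0.5, reps=1000, alpha=0.05):
    hits = 0
    for _ in range(reps):
        x = rng.normal(size=n)                          # covariate
        g = rng.permutation(np.repeat([0, 1], n // 2))  # random assignment
        y = delta * g + rho * x + rng.normal(0, np.sqrt(1 - rho**2), n)
        df = pd.DataFrame({"y": y, "x": x, "g": g,
                           "blk": pd.qcut(x, 4, labels=False)})
        tab = anova_lm(smf.ols(model, data=df).fit(), typ=2)
        hits += tab.loc["C(g)", "PR(>F)"] < alpha
    return hits / reps

print(empirical_power("y ~ C(g) + x"))        # ANCOVA: full covariate
print(empirical_power("y ~ C(g) + C(blk)"))   # blocks: covariate quartiles
```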

Hsu, Tse-Chi; Sebatane, E. Molapi – Journal of Experimental Education, 1979
A Monte Carlo technique was used to investigate the effect of the differences in covariate means among treatment groups on the significance level and the power of the F-test of the analysis of covariance. (Author/GDC)
Descriptors: Analysis of Covariance, Correlation, Research Design, Research Problems

Sawilowsky, Shlomo; And Others – Journal of Experimental Education, 1994
A Monte Carlo study considers the use of meta analysis with the Solomon four-group design. Experiment-wise Type I error properties and the relative power properties of Stouffer's Z in the Solomon four-group design are explored. Obstacles to conducting meta analysis in the Solomon design are discussed. (SLD)
Descriptors: Meta Analysis, Monte Carlo Methods, Power (Statistics), Research Design
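Stouffer's Z itself is compact enough to show; the p values below are invented stand-ins for the treatment contrasts available in a Solomon four-group study:

```python
import numpy as np
from scipy.stats import norm

def stouffer(pvalues):
    # Z = sum(Phi^{-1}(1 - p_i)) / sqrt(k), with its one-tailed p value.
    z = norm.ppf(1 - np.asarray(pvalues)).sum() / np.sqrt(len(pvalues))
    return z, norm.sf(z)

print(stouffer([0.03, 0.06]))   # combine one-tailed p values across tests
```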
Wang, Zhongmiao; Thompson, Bruce – Journal of Experimental Education, 2007
In this study the authors investigated the use of 5 (i.e., Claudy, Ezekiel, Olkin-Pratt, Pratt, and Smith) R² correction formulas with the Pearson r². The authors estimated adjustment bias and precision under 6 × 3 × 6 conditions (i.e., population ρ values of 0.0, 0.1, 0.3, 0.5, 0.7, and 0.9; population shapes normal, skewness…
Descriptors: Effect Size, Correlation, Mathematical Formulas, Monte Carlo Methods
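Of the five formulas, Ezekiel's is the familiar "adjusted R²"; a sketch is below (the Claudy, Olkin-Pratt, Pratt, and Smith corrections are alternative shrinkage factors of broadly the same form, omitted here):

```python
def ezekiel_adjusted_r2(r2, n, p):
    # 1 - (1 - R^2)(n - 1)/(n - p - 1), for n cases and p predictors.
    return 1 - (1 - r2) * (n - 1) / (n - p - 1)

print(ezekiel_adjusted_r2(r2=0.30, n=30, p=3))   # shrinks noticeably at small n
```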

Marsh, Herbert W. – Journal of Experimental Education, 1998
Eight variations of a general matching design, matching program participants with a control group, were studied through simulation for their effectiveness in evaluating programs for the gifted and talented. A regression-discontinuity design provided the best approach, with unbiased estimates of program effects. (SLD)
Descriptors: Control Groups, Gifted, Matched Groups, Program Evaluation
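The regression-discontinuity logic is easy to demonstrate: when assignment is a deterministic function of the pretest, controlling for the (correctly specified) assignment variable recovers the program effect without bias. All values below are invented:

```python
import numpy as np
import statsmodels.api as sm

rng = np.random.default_rng(42)

# Assignment strictly by a pretest cutoff (e.g., entry to a gifted program).
n, cutoff, effect = 500, 1.0, 0.4
pre = rng.normal(size=n)
treat = (pre >= cutoff).astype(float)
post = 0.7 * pre + effect * treat + rng.normal(0, 0.5, n)

# Regress the outcome on the centered assignment variable plus a treatment
# dummy; with the functional form correct, the dummy recovers the effect.
X = sm.add_constant(np.column_stack([pre - cutoff, treat]))
print(sm.OLS(post, X).fit().params[2])   # close to the true 0.4
```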

Ferron, John; Onghena, Patrick – Journal of Experimental Education, 1996
Monte Carlo methods were used to estimate the power of randomization tests used with single-case designs involving random assignment of treatments to phases. Simulations of two treatments and six phases showed an adequate level of power when effect sizes were large, phase lengths exceeded five, and autocorrelation was not negative. (SLD)
Descriptors: Case Studies, Correlation, Educational Research, Effect Size
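A miniature of this power study, reduced from six phases to a single AB change point, with lag-1 autocorrelated errors; the effect size, series length, and autocorrelation are illustrative only:

```python
import numpy as np

def ab_p(y, start, min_phase=5):
    # Randomization-test p for one AB series (as in the sketch further above);
    # with n = 30 there are 21 admissible points, so the smallest p is 1/21.
    pts = range(min_phase, len(y) - min_phase + 1)
    obs = np.mean(y[start:]) - np.mean(y[:start])
    return np.mean([abs(np.mean(y[s:]) - np.mean(y[:s])) >= abs(obs)
                    for s in pts])

def mc_power(effect, phi, n=30, start=15, reps=1000, alpha=0.05, seed=2):
    # Generate AR(1) errors, add a phase shift, and count rejections.
    rng = np.random.default_rng(seed)
    hits = 0
    for _ in range(reps):
        e = rng.normal(size=n)
        for i in range(1, n):          # lag-1 autocorrelation phi
            e[i] += phi * e[i - 1]
        y = e + effect * (np.arange(n) >= start)
        hits += ab_p(y, start) <= alpha
    return hits / reps

print(mc_power(effect=2.0, phi=0.3))
```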