Peer reviewed
Shen, Zuchao; Kelcey, Benjamin – Journal of Experimental Education, 2022
Optimal design of multisite randomized trials leverages sampling costs to optimize sampling ratios and ultimately identify more efficient and powerful designs. Past implementations of the optimal design framework have assumed that costs of sampling units are equal across treatment conditions. In this study, we developed a more flexible optimal…
Descriptors: Randomized Controlled Trials, Sampling, Research Design, Statistical Analysis
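A minimal sketch of the cost-based allocation idea such designs build on, assuming the classic square-root rule for two conditions with equal outcome variances (the authors' framework is more flexible than this; the function name and example costs are illustrative):

```python
import math

def optimal_allocation(budget, cost_treat, cost_ctrl):
    """Split a fixed budget across two conditions to minimize the
    variance of the estimated mean difference.  Classic square-root
    rule (equal outcome variances assumed):
        n_treat / n_ctrl = sqrt(cost_ctrl / cost_treat)
    """
    ratio = math.sqrt(cost_ctrl / cost_treat)        # n_treat per n_ctrl
    n_ctrl = budget / (cost_ctrl + cost_treat * ratio)
    return round(ratio * n_ctrl), round(n_ctrl)      # (n_treat, n_ctrl)

# Treatment units cost four times as much, so the rule samples twice
# as many control units: (17, 33) for a budget of 10,000.
print(optimal_allocation(10_000, cost_treat=400, cost_ctrl=100))
```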
Peer reviewed
Manolov, Rumen; Solanas, Antonio; Sierra, Vicenta – Journal of Experimental Education, 2020
Changing criterion designs (CCD) are single-case experimental designs that entail a step-by-step approximation of the final level desired for a target behavior. Following a recent review on the desirable methodological features of CCDs, the current text focuses on an analytical challenge: the definition of an objective rule for assessing the…
Descriptors: Research Design, Research Methodology, Data Analysis, Experiments
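One way such a rule could be made objective (a hypothetical illustration only; the article's actual proposal is not reproduced here) is to declare a phase consistent with its criterion when enough of its points fall within a tolerance band around the criterion level:

```python
def phase_meets_criterion(observations, criterion, tol=0.10, min_prop=0.80):
    """Hypothetical decision rule for a changing criterion design:
    a phase corresponds to its criterion if at least `min_prop` of its
    observations lie within `tol` (proportional tolerance) of the
    criterion level.  All parameter values are illustrative."""
    within = [abs(y - criterion) <= tol * abs(criterion) for y in observations]
    return sum(within) / len(within) >= min_prop

# A phase whose criterion is 20 responses per session:
print(phase_meets_criterion([19, 21, 20, 22, 18], criterion=20))  # True
```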
Peer reviewed
Chan, Wendy; Hedges, Larry V.; Hedberg, E. C. – Journal of Experimental Education, 2022
Many experimental designs in educational and behavioral research involve at least one level of clustering. Clustering affects the precision of estimators, and its impact on statistics in cross-sectional studies is well known. Clustering also occurs in longitudinal designs, where students who are initially grouped may be regrouped in the following…
Descriptors: Educational Research, Multivariate Analysis, Longitudinal Studies, Effect Size
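The way clustering degrades precision is conventionally summarized by the design effect; a minimal sketch of the standard cross-sectional formula (the article's longitudinal regrouping results are not reproduced here):

```python
def design_effect(cluster_size, icc):
    """Design effect for a balanced cluster design:
    DEFF = 1 + (m - 1) * ICC, where m is the common cluster size and
    ICC is the intraclass correlation.  The effective sample size is
    n / DEFF."""
    return 1 + (cluster_size - 1) * icc

n, m, icc = 600, 25, 0.15
deff = design_effect(m, icc)
print(f"DEFF = {deff:.2f}, effective n = {n / deff:.0f}")  # DEFF = 4.60, n ~ 130
```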
Peer reviewed
Luh, Wei-Ming; Guo, Jiin-Huarng – Journal of Experimental Education, 2011
Sample size determination is an important issue in planning research. In the context of one-way fixed-effect analysis of variance, the conventional sample size formula cannot be applied when variances are heterogeneous. This study discusses the sample size requirement for the Welch test in the one-way fixed-effect analysis of variance with…
Descriptors: Sample Size, Monte Carlo Methods, Statistical Analysis, Heterogeneous Grouping
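For reference, the Welch statistic that such a sample size procedure targets; a sketch using the standard unequal-variance one-way ANOVA formulas:

```python
import numpy as np
from scipy import stats

def welch_anova(*groups):
    """Welch's one-way ANOVA for heterogeneous variances (standard formulas)."""
    k = len(groups)
    n = np.array([len(g) for g in groups], dtype=float)
    means = np.array([np.mean(g) for g in groups])
    w = n / np.array([np.var(g, ddof=1) for g in groups])  # precision weights
    grand = np.sum(w * means) / np.sum(w)
    num = np.sum(w * (means - grand) ** 2) / (k - 1)
    h = np.sum((1 - w / np.sum(w)) ** 2 / (n - 1))
    f = num / (1 + 2 * (k - 2) * h / (k ** 2 - 1))
    df2 = (k ** 2 - 1) / (3 * h)
    return f, df2, stats.f.sf(f, k - 1, df2)

rng = np.random.default_rng(0)
groups = [rng.normal(0, 1, 20), rng.normal(0.5, 2, 30), rng.normal(1, 3, 25)]
print(welch_anova(*groups))
```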
Peer reviewed
Klockars, Alan J.; Potter, Nina Salcedo; Beretvas, S. Natasha – Journal of Experimental Education, 1999
Compared the power of analysis of covariance (ANCOVA) and two types of randomized block designs as a function of the correlation between the concomitant variable and the outcome measure, the number of groups, the number of participants, and nominal power. Discusses advantages of ANCOVA. (Author/SLD)
Descriptors: Analysis of Covariance, Correlation, Research Design
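The ANCOVA side of such comparisons rests on a standard approximation: a covariate correlated ρ with the outcome shrinks error variance by the factor 1 − ρ², at the cost of one error degree of freedom. A rough power sketch under that approximation (parameters are illustrative, not the article's conditions):

```python
import math
from scipy import stats

def ancova_power(n_per_group, delta, rho, sigma=1.0, alpha=0.05):
    """Approximate power for a two-group ANCOVA: error variance is
    sigma^2 * (1 - rho^2); one df is spent on the covariate."""
    se = math.sqrt(2 * sigma ** 2 * (1 - rho ** 2) / n_per_group)
    df = 2 * n_per_group - 2 - (1 if rho else 0)
    t_crit = stats.t.ppf(1 - alpha / 2, df)
    return stats.nct.sf(t_crit, df, delta / se)     # noncentral t tail

for rho in (0.0, 0.3, 0.5, 0.7):
    print(rho, round(ancova_power(30, delta=0.5, rho=rho), 3))
```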
Peer reviewed
Wang, Zhongmiao; Thompson, Bruce – Journal of Experimental Education, 2007
In this study the authors investigated the use of 5 (i.e., Claudy, Ezekiel, Olkin-Pratt, Pratt, and Smith) R² correction formulas with the Pearson r². The authors estimated adjustment bias and precision under 6 x 3 x 6 conditions (i.e., population ρ values of 0.0, 0.1, 0.3, 0.5, 0.7, and 0.9; population shapes normal, skewness…
Descriptors: Effect Size, Correlation, Mathematical Formulas, Monte Carlo Methods
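Of the five corrections, Ezekiel's is the familiar "adjusted R²"; a sketch of it alongside the Smith formula as it is usually cited (the Claudy, Olkin-Pratt, and Pratt formulas are omitted, and the Smith version here should be treated as an assumption):

```python
def ezekiel_r2(r2, n, p):
    """Ezekiel correction -- the textbook adjusted R^2
    for n observations and p predictors."""
    return 1 - (1 - r2) * (n - 1) / (n - p - 1)

def smith_r2(r2, n, p):
    """Smith correction as commonly cited; treat as an assumption."""
    return 1 - (n / (n - p)) * (1 - r2)

# Pearson r^2 corresponds to p = 1 predictor:
print(ezekiel_r2(0.30, n=50, p=1), smith_r2(0.30, n=50, p=1))
```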
Peer reviewed
Levy, Kenneth J. – Journal of Experimental Education, 1978
The purpose of this paper is to demonstrate how many more subjects are required to achieve equal power when testing certain hypotheses concerning proportions if the randomized response technique is employed for estimating a population proportion instead of the conventional technique. (Author)
Descriptors: Experimental Groups, Hypothesis Testing, Research Design, Response Style (Tests)
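The extra subjects are needed because the randomization injects noise into the estimator. A sketch using Warner's (1965) model, whose variance formula is standard (the article's specific hypotheses are not reproduced):

```python
def warner_inflation(pi, p):
    """Variance of Warner's randomized response estimator relative to a
    direct question at the same n:
        Var_RR = pi(1-pi)/n + p(1-p) / (n (2p-1)^2)
    Equal precision (hence roughly equal power) requires n to grow by
    about this factor.  pi = true proportion; p = probability the
    sensitive statement, rather than its negation, is presented."""
    direct = pi * (1 - pi)
    return (direct + p * (1 - p) / (2 * p - 1) ** 2) / direct

for p in (0.7, 0.8, 0.9):
    print(p, round(warner_inflation(pi=0.2, p=p), 1))   # ~9.2, 3.8, 1.9
```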
Peer reviewed
Rheinheimer, David C.; Penfield, Douglas A. – Journal of Experimental Education, 2001
Studied, through Monte Carlo simulation, the conditions for which analysis of covariance (ANCOVA) does not maintain adequate Type I error rates and power and evaluated some alternative tests. Discusses differences in ANCOVA robustness for balanced and unbalanced designs. (SLD)
Descriptors: Analysis of Covariance, Monte Carlo Methods, Power (Statistics), Research Design
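A minimal Monte Carlo sketch of this kind of Type I error check, assuming statsmodels for the ANCOVA fit (the conditions below are generic illustrations, not those of the article):

```python
import numpy as np
import pandas as pd
import statsmodels.formula.api as smf

rng = np.random.default_rng(42)
n1, n2, sd1, sd2 = 15, 45, 1.0, 3.0      # unbalanced groups, unequal variances
reps, alpha, rejections = 2000, 0.05, 0

for _ in range(reps):
    x = rng.normal(size=n1 + n2)                              # covariate
    g = np.repeat([0, 1], [n1, n2])
    y = 0.5 * x + rng.normal(0, np.where(g == 0, sd1, sd2))   # no group effect
    data = pd.DataFrame({"y": y, "x": x, "g": g})
    fit = smf.ols("y ~ x + C(g)", data=data).fit()
    rejections += fit.pvalues["C(g)[T.1]"] < alpha

print(f"Empirical Type I error: {rejections / reps:.3f} (nominal {alpha})")
```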
Peer reviewed
Klockars, Alan J.; Beretvas, S. Natasha – Journal of Experimental Education, 2001
Compared the Type I error rate and the power to detect differences in slopes and additive treatment effects of analysis of covariance (ANCOVA) and randomized block designs through a Monte Carlo simulation. Results show that the more powerful option in almost all simulations for tests of both slope and means was ANCOVA. (SLD)
Descriptors: Analysis of Covariance, Monte Carlo Methods, Power (Statistics), Research Design
Peer reviewed
Sawilowsky, Shlomo; And Others – Journal of Experimental Education, 1994
A Monte Carlo study considers the use of meta-analysis with the Solomon four-group design. Experiment-wise Type I error properties and the relative power properties of Stouffer's Z in the Solomon four-group design are explored. Obstacles to conducting meta-analysis in the Solomon design are discussed. (SLD)
Descriptors: Meta Analysis, Monte Carlo Methods, Power (Statistics), Research Design
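Stouffer's Z itself is simple to state (standard formula; which Solomon-design contrasts to combine is the substantive question, and the p-values below are made up):

```python
import numpy as np
from scipy import stats

def stouffer_z(p_values):
    """Combine one-tailed p-values: Z = sum(z_i) / sqrt(k),
    where z_i = Phi^{-1}(1 - p_i)."""
    z = stats.norm.isf(np.asarray(p_values))
    z_comb = z.sum() / np.sqrt(len(z))
    return z_comb, stats.norm.sf(z_comb)      # combined Z and its p-value

# e.g., the two treatment-vs-control contrasts of a Solomon four-group design:
print(stouffer_z([0.04, 0.08]))               # Z ~ 2.23, p ~ .013
```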
Peer reviewed
Nelson, Jack K.; Coorough, Calleen – Journal of Experimental Education, 1994
PhD and EdD dissertations were compared for research design, statistics, target populations, significance of results, age of subjects, and other characteristics. Analysis of 1,007 PhD and 960 EdD dissertations found that PhD dissertations used more multivariate statistics and had wider generalizability. EdD dissertations were more prevalent in educational…
Descriptors: Comparative Analysis, Content Analysis, Doctoral Degrees, Doctoral Dissertations
Peer reviewed
Allison, David B.; And Others – Journal of Experimental Education, 1992
Effects of response-guided experimentation in applied behavior analysis on Type I error rates are explored. Data from T. A. Matyas and K. M. Greenwood (1990) suggest that, when visual inspection is combined with response-guided experimentation, Type I error rates can be as high as 25%. (SLD)
Descriptors: Behavioral Science Research, Error of Measurement, Evaluation Methods, Experiments
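The inflation mechanism is essentially optional stopping; a generic sketch, not Matyas and Greenwood's visual-inspection procedure: under a true null, extend the intervention phase and retest until a "significant" difference appears or a cap is reached.

```python
import numpy as np
from scipy import stats

rng = np.random.default_rng(7)
reps, alpha, max_n, false_pos = 2000, 0.05, 30, 0

for _ in range(reps):
    base = rng.normal(size=8)                 # baseline phase, null data
    treat = list(rng.normal(size=5))          # intervention phase, null data
    while len(treat) < max_n:
        if stats.ttest_ind(base, treat).pvalue < alpha:
            false_pos += 1                    # stopped on a chance 'effect'
            break
        treat.append(rng.normal())            # data-dependent extension

print(f"Type I error with data-dependent stopping: {false_pos / reps:.2f}")
```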
Peer reviewed
Hopkins, Kenneth D.; Gullickson, Arlen R. – Journal of Experimental Education, 1992
A meta-analysis involving 62 studies compared response rates to mailed surveys with and without a monetary gratuity. The average response rate increased 19% when a gratuity was enclosed. Other findings substantiating that gratuities can increase the external validity of surveys are discussed. (SLD)
Descriptors: Mail Surveys, Meta Analysis, Questionnaires, Research Design
Peer reviewed
Marsh, Herbert W. – Journal of Experimental Education, 1998
Eight variations of a general matching design, matching program participants and a control group, were studied through simulation, for their effectiveness in evaluating programs for the gifted and talented. A regression-discontinuity design provided the best approach, with unbiased estimates of program effects. (SLD)
Descriptors: Control Groups, Gifted, Matched Groups, Program Evaluation
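A minimal sketch of the regression-discontinuity logic (a generic illustration, not Marsh's simulation): assignment follows a pretest cutoff exactly, and the program effect is the coefficient on the treatment indicator after controlling for the pretest.

```python
import numpy as np
import statsmodels.api as sm

rng = np.random.default_rng(1)
n, cutoff, true_effect = 500, 1.0, 0.4

pretest = rng.normal(size=n)
treated = (pretest >= cutoff).astype(float)   # e.g., entry into a gifted program
posttest = 0.8 * pretest + true_effect * treated + rng.normal(0, 0.5, n)

X = sm.add_constant(np.column_stack([pretest - cutoff, treated]))
fit = sm.OLS(posttest, X).fit()
print(f"Estimated program effect: {fit.params[2]:.2f} (true {true_effect})")
```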
Peer reviewed
Ferron, John; Onghena, Patrick – Journal of Experimental Education, 1996
Monte Carlo methods were used to estimate the power of randomization tests used with single-case designs involving random assignment of treatments to phases. Simulations of two treatments and six phases showed an adequate level of power when effect sizes were large, phase lengths exceeded five, and autocorrelation was not negative. (SLD)
Descriptors: Case Studies, Correlation, Educational Research, Effect Size
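A sketch of such a randomization test (standard logic; the phase data and assignment below are made up): with treatments randomly assigned to phases, the p-value is the share of possible assignments yielding a difference at least as extreme as the one observed.

```python
import itertools
import numpy as np

def randomization_test(phase_means, labels, n_treatment):
    """Exact randomization test when n_treatment of the phases were
    randomly assigned to treatment: count assignments whose treatment
    minus control mean difference meets or exceeds the observed one."""
    phase_means = np.asarray(phase_means, dtype=float)
    observed = phase_means[labels].mean() - phase_means[~labels].mean()
    hits = total = 0
    for combo in itertools.combinations(range(len(phase_means)), n_treatment):
        mask = np.zeros(len(phase_means), dtype=bool)
        mask[list(combo)] = True
        hits += (phase_means[mask].mean() - phase_means[~mask].mean()) >= observed
        total += 1
    return hits / total

# Six phases, three randomly assigned to treatment (as in the simulations):
means = [2.1, 2.4, 3.9, 2.0, 4.2, 4.0]
labels = np.array([False, False, True, False, True, True])
print(randomization_test(means, labels, n_treatment=3))   # 0.05 = 1/20
```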