Showing all 14 results
Peer reviewed
Manolov, Rumen; Solanas, Antonio; Sierra, Vicenta – Journal of Experimental Education, 2020
Changing criterion designs (CCDs) are single-case experimental designs that entail a step-by-step approximation of the final level desired for a target behavior. Following a recent review on the desirable methodological features of CCDs, the current text focuses on an analytical challenge: the definition of an objective rule for assessing the…
Descriptors: Research Design, Research Methodology, Data Analysis, Experiments
Peer reviewed
Bulus, Metin; Dong, Nianbo – Journal of Experimental Education, 2021
Sample size determination in multilevel randomized trials (MRTs) and multilevel regression discontinuity designs (MRDDs) can be complicated due to multilevel structure, monetary restrictions, differing marginal costs per treatment and control units, and range restrictions in sample size at one or more levels. These issues have sparked a set of…
Descriptors: Sampling, Research Methodology, Costs, Research Design
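One issue the abstract names, differing marginal costs per treatment and control unit, has a classical single-level counterpart: for a fixed budget and equal outcome variances, the variance of the treatment-control mean difference is minimized when the ratio of treatment to control units equals the square root of the inverse cost ratio. A minimal sketch with hypothetical costs and budget, not the article's multilevel MRT/MRDD formulas:

```python
from math import sqrt, floor

def optimal_allocation(budget, cost_treat, cost_control):
    """Split a fixed budget between treatment and control units so that
    Var(mean_t - mean_c) = s^2 * (1/n_t + 1/n_c) is minimized, assuming
    equal outcome variances. Optimum: n_t / n_c = sqrt(cost_control / cost_treat)."""
    ratio = sqrt(cost_control / cost_treat)         # n_t per control unit
    # Budget constraint: n_c * (cost_control + ratio * cost_treat) = budget
    n_control = budget / (cost_control + ratio * cost_treat)
    n_treat = ratio * n_control
    return floor(n_treat), floor(n_control)
```

With a budget of 10,000 and treatment units four times as costly as control units (100 vs. 25), this yields 66 treatment and 133 control units, a smaller variance than the 80-and-80 equal split the same budget buys.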
Peer reviewed
Spybrook, Jessaca – Journal of Experimental Education, 2014
The Institute of Education Sciences has funded more than 100 experiments to evaluate educational interventions in an effort to generate scientific evidence of program effectiveness on which to base education policy and practice. In general, these studies are designed with the goal of having adequate statistical power to detect the average…
Descriptors: Intervention, Educational Research, Research Methodology, Statistical Analysis
Peer reviewed
Luh, Wei-Ming; Guo, Jiin-Huarng – Journal of Experimental Education, 2009
Sample size determination is an important issue in planning research. However, limitations on sample size have seldom been discussed in the literature. Thus, how to allocate participants into different treatment groups to achieve the desired power is a practical issue that still needs to be addressed when one group size is fixed. The authors focused…
Descriptors: Sample Size, Research Methodology, Evaluation Methods, Simulation
Peer reviewed
Daniel, Larry G. – Journal of Experimental Education, 1997
Gives an overview of three of the myths that F. N. Kerlinger (1959, 1960) identified as pervading educational research. Explores the myths of methods, practicality, and statistics, and analyzes the degree to which they have been overcome or still exist. (SLD)
Descriptors: Educational Research, Mythology, Research Design, Research Methodology
Peer reviewed
Pohl, Norval Frederick – Journal of Experimental Education, 1974
The purpose of this study was to compare the relative classificatory ability of the Linear Discriminant Function (LDF) and the Bayesian Taxonomic Procedure (BTP) when these techniques are applied to multivariate normal and nonnormal data with differing degrees of overlap in the distributions of the predictor variables. (Editor)
Descriptors: Bayesian Statistics, Diagrams, Predictor Variables, Research Design
Peer reviewed
Sawilowsky, Shlomo; And Others – Journal of Experimental Education, 1994
A Monte Carlo study considers the use of meta-analysis with the Solomon four-group design. Experiment-wise Type I error properties and the relative power properties of Stouffer's Z in the Solomon four-group design are explored. Obstacles to conducting meta-analysis in the Solomon design are discussed. (SLD)
Descriptors: Meta Analysis, Monte Carlo Methods, Power (Statistics), Research Design
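Stouffer's Z, the combining statistic named above, converts each test's one-sided p-value to a standard normal deviate and divides their sum by the square root of the number of tests. A minimal sketch of the standard method (not code from the study):

```python
from math import sqrt
from statistics import NormalDist

def stouffers_z(p_values):
    """Combine one-sided p-values from k independent tests into a single Z."""
    nd = NormalDist()
    z_scores = [nd.inv_cdf(1 - p) for p in p_values]  # p -> normal deviate
    return sum(z_scores) / sqrt(len(z_scores))        # sum scaled by sqrt(k)

def combined_p(p_values):
    """Overall one-sided p-value for the combined evidence."""
    return 1 - NormalDist().cdf(stouffers_z(p_values))
```

Combining two results that each fall short of significance alone, say p = .04 and p = .06, yields a combined p of roughly .01, illustrating the pooling of evidence across the design's two treatment-versus-control contrasts.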
Peer reviewed
Allison, David B.; And Others – Journal of Experimental Education, 1992
Effects of response-guided experimentation in applied behavior analysis on Type I error rates are explored. Data from T. A. Matyas and K. M. Greenwood (1990) suggest that, when visual inspection is combined with response-guided experimentation, Type I error rates can be as high as 25%. (SLD)
Descriptors: Behavioral Science Research, Error of Measurement, Evaluation Methods, Experiments
Peer reviewed
Hopkins, Kenneth D.; Gullickson, Arlen R. – Journal of Experimental Education, 1992
A meta-analysis involving 62 studies compared the response rate to mailed surveys with and without a monetary gratuity. The average response rate increased 19% when a gratuity was enclosed. Other findings substantiating that gratuities can increase the external validity of surveys are discussed. (SLD)
Descriptors: Mail Surveys, Meta Analysis, Questionnaires, Research Design
Peer reviewed
Ferron, John; Onghena, Patrick – Journal of Experimental Education, 1996
Monte Carlo methods were used to estimate the power of randomization tests used with single-case designs involving random assignment of treatments to phases. Simulations of two treatments and six phases showed an adequate level of power when effect sizes were large, phase lengths exceeded five, and autocorrelation was not negative. (SLD)
Descriptors: Case Studies, Correlation, Educational Research, Effect Size
Peer reviewed
Shaver, James P. – Journal of Experimental Education, 1993
Reviews the role of statistical significance testing and argues that the dominance of such testing is dysfunctional because significance tests do not provide the information that many researchers assume they do. Possible reasons for the persistence of statistical significance testing are discussed briefly, and ways to moderate negative effects are…
Descriptors: Educational Practices, Educational Research, Elementary Secondary Education, Higher Education
Peer reviewed
Baldwin, Lee; And Others – Journal of Experimental Education, 1984
Within-class regression, a method developed in this paper, compares a large number of nonequivalent groups. This study indicated that within-class regression is a less biased method of data analysis and yields more accurate estimates of treatment effects than analysis of covariance. (PN)
Descriptors: Analysis of Covariance, Data Analysis, Educational Research, Evaluation Methods
Peer reviewed
Pohl, Norval F. – Journal of Experimental Education, 1982
The response-shift phenomenon is demonstrated in a typical classroom setting. Retrospective pre-ratings in self-report instruments are shown to yield more accurate estimates of pre-instruction knowledge than simple pre-ratings. (Author/CM)
Descriptors: Behavior Change, Classroom Environment, Higher Education, Knowledge Level
Peer reviewed
Borg, Walter R. – Journal of Experimental Education, 1987
This article describes some insights of the author based on 20 years' experience in using the educational research and development process. Problems and strategies related to planning, developing a prototype, and evaluating educational programs and instructional materials are discussed. (Author/JAZ)
Descriptors: Educational Objectives, Educational Planning, Elementary Secondary Education, Evaluation Methods