Publication Date
In 2025: 0
Since 2024: 1
Since 2021 (last 5 years): 2
Since 2016 (last 10 years): 4
Since 2006 (last 20 years): 11
Publication Type
Journal Articles: 11
Reports - Research: 7
Reports - Evaluative: 3
Information Analyses: 2
Opinion Papers: 2
Numerical/Quantitative Data: 1
Reports - Descriptive: 1
Reports - General: 1
Speeches/Meeting Papers: 1
Education Level
Early Childhood Education: 1
Elementary Education: 1
Elementary Secondary Education: 1
Grade 1: 1
Primary Education: 1
Audience
Researchers: 1
Location
Canada (Montreal): 1
Ethan R. Van Norman; David A. Klingbeil; Adelle K. Sturgell – Grantee Submission, 2024
Single-case experimental designs (SCEDs) have been used with increasing frequency to identify evidence-based interventions in education. The purpose of this study was to explore how several procedural characteristics, including within-phase variability (i.e., measurement error), number of baseline observations, and number of intervention…
Descriptors: Research Design, Case Studies, Effect Size, Error of Measurement
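The procedural characteristics named here are easy to make concrete. A minimal sketch (Python; the AB design, phase lengths, and effect values are illustrative assumptions, not the study's simulation conditions):

    # Standardized effect from one simulated AB single-case series.
    import numpy as np

    rng = np.random.default_rng(0)

    def ab_effect(n_base, n_intx, shift=1.0, error_sd=1.0):
        """Return (intervention mean - baseline mean) / crude pooled SD."""
        baseline = rng.normal(0.0, error_sd, n_base)
        intervention = rng.normal(shift, error_sd, n_intx)
        pooled_sd = np.sqrt((baseline.var(ddof=1) + intervention.var(ddof=1)) / 2)
        return (intervention.mean() - baseline.mean()) / pooled_sd

    # Fewer baseline observations -> noisier effect estimates.
    for n_base in (3, 5, 10):
        est = [ab_effect(n_base, 10) for _ in range(2000)]
        print(f"n_base={n_base:2d}  mean={np.mean(est):.2f}  SD={np.std(est):.2f}")

Short baselines typically show noticeably more spread, and some upward bias, in the estimated effect, which is the kind of procedural sensitivity the study examines.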
Jane E. Miller – Numeracy, 2023
Students often believe that statistical significance is the only determinant of whether a quantitative result is "important." In this paper, I review traditional null hypothesis statistical testing to identify what questions inferential statistics can and cannot answer, including statistical significance, effect size and direction,…
Descriptors: Statistical Significance, Holistic Approach, Statistical Inference, Effect Size
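Miller's distinction between significance and importance has a standard demonstration: with a large enough sample, a negligible effect clears the p < .05 bar. A hedged sketch (Python; sample size and true effect are illustrative):

    import numpy as np
    from scipy import stats

    rng = np.random.default_rng(1)
    n = 50_000                               # per group
    a = rng.normal(0.00, 1.0, n)
    b = rng.normal(0.03, 1.0, n)             # true effect: d = 0.03

    t, p = stats.ttest_ind(a, b)
    d = (b.mean() - a.mean()) / np.sqrt((a.var(ddof=1) + b.var(ddof=1)) / 2)
    print(f"p = {p:.4f}, Cohen's d = {d:.3f}")   # tiny d, p almost surely < .05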
VanHoudnos, Nathan M.; Greenhouse, Joel B. – Journal of Educational and Behavioral Statistics, 2016
When cluster randomized experiments are analyzed as if units were independent, test statistics for treatment effects can be anticonservative. Hedges proposed a correction for such tests by scaling them to control their Type I error rate. This article generalizes the Hedges correction from a posttest-only experimental design to more common designs…
Descriptors: Statistical Analysis, Randomized Controlled Trials, Error of Measurement, Scaling
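The core idea behind such scaling corrections can be sketched in a few lines, using the classic design effect 1 + (m - 1) * icc (a simplification; the article's generalization beyond posttest-only designs is more involved):

    import numpy as np

    def corrected_t(t_naive, cluster_size, icc):
        """Deflate a t statistic computed as if units were independent."""
        deff = 1.0 + (cluster_size - 1.0) * icc
        return t_naive / np.sqrt(deff)

    print(corrected_t(t_naive=2.5, cluster_size=20, icc=0.10))  # ~1.47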
Deke, John; Wei, Thomas; Kautz, Tim – National Center for Education Evaluation and Regional Assistance, 2017
Evaluators of education interventions are increasingly designing studies to detect impacts much smaller than the 0.20 standard deviations that Cohen (1988) characterized as "small." While the need to detect smaller impacts is based on compelling arguments that such impacts are substantively meaningful, the drive to detect smaller impacts…
Descriptors: Intervention, Educational Research, Research Problems, Statistical Bias
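The trade-off described here follows directly from the minimum-detectable-effect arithmetic. A back-of-envelope sketch (two-arm, equally split, individually randomized trial; assumptions mine, not the report's):

    from scipy.stats import norm

    def n_for_mde(mde, alpha=0.05, power=0.80):
        """Total N so the minimum detectable effect equals `mde` SDs."""
        m = norm.ppf(1 - alpha / 2) + norm.ppf(power)   # ~2.80
        return 4 * (m / mde) ** 2                        # SE = 2 / sqrt(N)

    for mde in (0.20, 0.10, 0.05):
        print(f"MDE = {mde:.2f} SD -> N ~ {n_for_mde(mde):,.0f}")

Halving the target impact roughly quadruples the required sample, which is why the drive toward smaller impacts strains study designs.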
Stapleton, Laura M.; Pituch, Keenan A.; Dion, Eric – Journal of Experimental Education, 2015
This article presents 3 standardized effect size measures to use when sharing results of an analysis of mediation of treatment effects for cluster-randomized trials. The authors discuss 3 examples of mediation analysis (upper-level mediation, cross-level mediation, and cross-level mediation with a contextual effect) with demonstration of the…
Descriptors: Effect Size, Measurement Techniques, Statistical Analysis, Research Design
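A toy version of the basic quantity involved, a single-level indirect effect standardized against the outcome SD (the article's multilevel variants are more elaborate; all values below are illustrative):

    import numpy as np

    rng = np.random.default_rng(2)
    n = 500
    t = rng.integers(0, 2, n).astype(float)       # treatment indicator
    m = 0.5 * t + rng.normal(size=n)              # mediator: a-path = 0.5
    y = 0.4 * m + 0.1 * t + rng.normal(size=n)    # outcome:  b-path = 0.4

    a = np.polyfit(t, m, 1)[0]                    # slope of m on t
    X = np.column_stack([np.ones(n), m, t])
    b = np.linalg.lstsq(X, y, rcond=None)[0][1]   # slope of y on m, given t

    print(f"indirect effect a*b = {a * b:.3f} "
          f"({a * b / y.std(ddof=1):.3f} in y-SD units)")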
Ugille, Maaike; Moeyaert, Mariola; Beretvas, S. Natasha; Ferron, John M.; Van den Noortgate, Wim – Journal of Experimental Education, 2014
A multilevel meta-analysis can combine the results of several single-subject experimental design studies. However, the estimated effects are biased if the effect sizes are standardized and the number of measurement occasions is small. In this study, the authors investigated 4 approaches to correct for this bias. First, the standardized effect…
Descriptors: Effect Size, Statistical Bias, Sample Size, Regression (Statistics)
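One familiar correction in this same spirit is Hedges' small-sample factor, shown below for orientation (the study itself investigates four other approaches for the single-subject context):

    def hedges_g(d, df):
        """Bias-corrected standardized mean difference: d * (1 - 3/(4*df - 1))."""
        return d * (1.0 - 3.0 / (4.0 * df - 1.0))

    for df in (5, 10, 30):
        print(f"df={df:2d}: d=0.80 -> g={hedges_g(0.80, df):.3f}")

The shrinkage is substantial with few measurement occasions and fades as df grows.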
Collins, Kathleen M. T.; Onwuegbuzie, Anthony J. – New Directions for Evaluation, 2013
The goal of this chapter is to recommend quality criteria to guide evaluators' selections of sampling designs when mixing approaches. First, we contextualize our discussion of quality criteria and sampling designs by discussing the concept of interpretive consistency and how it impacts sampling decisions. Embedded in this discussion are…
Descriptors: Sampling, Mixed Methods Research, Evaluators, Q Methodology
Schochet, Peter Z.; Chiang, Hanley S. – Journal of Educational and Behavioral Statistics, 2011
In randomized control trials (RCTs) in the education field, the complier average causal effect (CACE) parameter is often of policy interest, because it pertains to intervention effects for students who receive a meaningful dose of treatment services. This article uses a causal inference and instrumental variables framework to examine the…
Descriptors: Computation, Identification, Educational Research, Research Design
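The textbook form of the CACE is the Wald/instrumental-variables ratio: the intent-to-treat effect scaled by the difference in service-receipt rates. A minimal sketch (numbers illustrative, not from the article):

    def cace(itt_effect, receipt_treat, receipt_control):
        """Complier average causal effect via the Wald estimator."""
        return itt_effect / (receipt_treat - receipt_control)

    # ITT impact of 0.05 SD with 60% vs 10% service receipt:
    print(f"{cace(0.05, 0.60, 0.10):.2f} SD among compliers")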
Bonett, Douglas G. – Psychological Methods, 2009
L. Wilkinson and the Task Force on Statistical Inference (1999) recommended reporting confidence intervals for measures of effect sizes. If the sample size is too small, the confidence interval may be too wide to provide meaningful information. Recently, K. Kelley and J. R. Rausch (2006) used an iterative approach to computer-generate tables of…
Descriptors: Intervals, Sample Size, Effect Size, Statistical Inference
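The flavor of this accuracy-in-parameter-estimation planning, in its simplest closed form (one-sample mean with known SD; a simplification, not Bonett's formulas):

    from math import ceil
    from scipy.stats import norm

    def n_for_halfwidth(h, sd=1.0, conf=0.95):
        """Smallest n giving a CI half-width of h (in SD units)."""
        z = norm.ppf(1 - (1 - conf) / 2)
        return ceil((z * sd / h) ** 2)

    for h in (0.30, 0.20, 0.10):
        print(f"half-width {h:.2f} SD -> n >= {n_for_halfwidth(h)}")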
Serlin, Ronald C. – Psychological Methods, 2010
The sense that replicability is an important aspect of empirical science led Killeen (2005a) to define "p[subscript rep]," the probability that a replication will result in an outcome in the same direction as that found in a current experiment. Since then, several authors have praised and criticized "p[subscript rep]," culminating…
Descriptors: Epistemology, Effect Size, Replication (Evaluation), Measurement Techniques
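For orientation, Killeen's statistic is usually computed from a one-tailed p as p_rep = Phi(Phi^-1(1 - p) / sqrt(2)); a quick check (Python):

    from math import sqrt
    from scipy.stats import norm

    def p_rep(p_one_tailed):
        return norm.cdf(norm.ppf(1 - p_one_tailed) / sqrt(2))

    print(f"{p_rep(0.025):.3f}")   # ~0.917 for p = .025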
Byrd, Jimmy K. – Educational Administration Quarterly, 2007
Purpose: The purpose of this study was to review research published by Educational Administration Quarterly (EAQ) during the past 10 years to determine if confidence intervals and effect sizes were being reported as recommended by the American Psychological Association (APA) Publication Manual. Research Design: The author examined 49 volumes of…
Descriptors: Research Design, Intervals, Statistical Inference, Effect Size
Murray, Leigh W.; Dosser, David A., Jr. – Journal of Counseling Psychology, 1987
The use of measures of magnitude of effect has been advocated as a way to go beyond statistical tests of significance and to identify effects of a practical size. They have been used in meta-analysis to combine results of different studies. Describes problems associated with measures of magnitude of effect (particularly study size) and…
Descriptors: Effect Size, Meta Analysis, Research Design, Research Methodology
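The study-size dependence flagged here is easy to demonstrate with the usual conversion from F to eta-squared (illustration mine): the same test statistic implies very different magnitudes as error degrees of freedom grow.

    def eta_squared(F, df_effect, df_error):
        """Proportion of variance from a one-way F: F*df1 / (F*df1 + df2)."""
        return (F * df_effect) / (F * df_effect + df_error)

    for df_error in (20, 100, 500):
        print(f"F=4.0, df_error={df_error:3d} -> eta^2 = "
              f"{eta_squared(4.0, 1, df_error):.3f}")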
Knapp, Thomas R. – Mid-Western Educational Researcher, 1999
Presents an opinion on the appropriate use of significance tests, especially in the context of regression analysis, the most commonly encountered statistical technique in education and related disciplines. Briefly discusses the appropriate use of power analysis. Contains 47 references. (Author/SV)
Descriptors: Data Interpretation, Educational Research, Effect Size, Hypothesis Testing
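A hedged sketch of the power-analysis side Knapp discusses, for the overall regression F test via the noncentral F distribution (setup mine, not from the paper):

    from scipy.stats import f as f_dist, ncf

    def regression_power(r2, n, k, alpha=0.05):
        """Power of the overall F test with k predictors and n cases."""
        df1, df2 = k, n - k - 1
        lam = n * r2 / (1 - r2)                  # noncentrality parameter
        f_crit = f_dist.ppf(1 - alpha, df1, df2)
        return ncf.sf(f_crit, df1, df2, lam)

    print(f"{regression_power(r2=0.10, n=100, k=3):.3f}")   # roughly 0.8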
Thompson, Bruce – 1987
This paper evaluates the logic underlying various criticisms of statistical significance testing and makes specific recommendations for scientific and editorial practice that might better increase the knowledge base. Reliance on the traditional hypothesis testing model has led to a major bias against nonsignificant results and to misinterpretation…
Descriptors: Analysis of Variance, Data Interpretation, Editors, Effect Size