Barry, Adam E.; Szucs, Leigh E.; Reyes, Jovanni V.; Ji, Qian; Wilson, Kelly L.; Thompson, Bruce – Health Education & Behavior, 2016
Given the American Psychological Association's strong recommendation to always report effect sizes in research, scholars have a responsibility to provide complete information regarding their findings. The purposes of this study were to (a) determine the frequencies with which different effect sizes were reported in published, peer-reviewed…
Descriptors: Effect Size, Periodicals, Professional Associations, Journal Articles
Zientek, Linda Reichwein; Yetkiner, Z. Ebrar; Thompson, Bruce – Journal of Educational Research, 2010
The authors report the contextualization of effect sizes within mathematics anxiety research, and more specifically within research using the Mathematics Anxiety Rating Scale (MARS) and the MARS for Adolescents (MARS-A). The effect sizes from 45 studies were characterized by graphing confidence intervals (CIs) across studies involving (a) adults…
Descriptors: Intervals, Rating Scales, Effect Size, Remedial Mathematics
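Zientek, Yetkiner, and Thompson characterize effect sizes by graphing confidence intervals across studies. A minimal sketch of the underlying computation, assuming the common large-sample normal approximation to the sampling variance of Cohen's d (the specific CI method used in the article is not stated in the snippet above):

```python
import math

def d_ci(d, n1, n2, z=1.96):
    """Approximate 95% CI for Cohen's d via a large-sample
    normal approximation to its sampling variance."""
    se = math.sqrt((n1 + n2) / (n1 * n2) + d ** 2 / (2 * (n1 + n2)))
    return d - z * se, d + z * se

# A moderate effect from two groups of 30: the interval is wide
# enough to include zero, which is what cross-study CI graphs
# are designed to make visible.
lo, hi = d_ci(0.5, 30, 30)
print(round(lo, 2), round(hi, 2))  # -0.01 1.01
```

Plotting such intervals side by side across the 45 MARS/MARS-A studies is what lets readers see whether effects replicate, independent of any single study's p value.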
Thompson, Bruce – Middle Grades Research Journal, 2009
The present article provides a primer on using effect sizes in research. A small heuristic data set is used in order to make the discussion concrete. Additionally, various admonitions for best practice in reporting and interpreting effect sizes are presented. Among these is the admonition to not use Cohen's benchmarks for "small," "medium," and…
Descriptors: Educational Research, Effect Size, Computation, Research Methodology
Harrison, Judith; Thompson, Bruce; Vannest, Kimberly J. – Review of Educational Research, 2009
This article reviews the literature on interventions targeting the academic performance of students with attention-deficit/hyperactivity disorder (ADHD) and does so within the context of the statistical significance testing controversy. Both the arguments for and against null hypothesis statistical significance tests are reviewed. Recent standards…
Descriptors: Educational Research, Academic Achievement, Statistical Significance, Effect Size
Zientek, Linda Reichwein; Thompson, Bruce – Educational Researcher, 2009
Correlation matrices and standard deviations are the building blocks of many of the commonly conducted analyses in published research, and AERA and APA reporting standards recommend their inclusion when reporting research results. The authors argue that the inclusion of correlation/covariance matrices, standard deviations, and means can enhance…
Descriptors: Effect Size, Correlation, Researchers, Multivariate Analysis
Thompson, Bruce – Psychology in the Schools, 2007
The present article provides a primer on (a) effect sizes, (b) confidence intervals, and (c) confidence intervals for effect sizes. Additionally, various admonitions for reformed statistical practice are presented. For example, a very important implication of the realization that there are dozens of effect size statistics is that "authors must…
Descriptors: Intervals, Effect Size, Statistical Analysis, Statistical Significance
Thompson, Bruce – Counseling and Values, 2006
Effect sizes (e.g., Cohen's d, Glass's Δ, η², adjusted R², ω²) quantify the extent to which sample results diverge from the expectations specified in the null hypothesis. The present article addresses five related questions. First, is the advocacy for reporting and interpreting effect sizes part of the…
Descriptors: Effect Size, Statistical Significance, Role, Counseling
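The abstract above defines effect sizes as quantifying divergence from the null expectation. As a concrete instance, here is a minimal sketch of Cohen's d (standardized mean difference with a pooled standard deviation); the sample data are illustrative only:

```python
import math

def cohens_d(sample_a, sample_b):
    """Cohen's d: difference in means divided by the pooled SD."""
    n_a, n_b = len(sample_a), len(sample_b)
    mean_a = sum(sample_a) / n_a
    mean_b = sum(sample_b) / n_b
    var_a = sum((x - mean_a) ** 2 for x in sample_a) / (n_a - 1)
    var_b = sum((x - mean_b) ** 2 for x in sample_b) / (n_b - 1)
    pooled_sd = math.sqrt(
        ((n_a - 1) * var_a + (n_b - 1) * var_b) / (n_a + n_b - 2)
    )
    return (mean_a - mean_b) / pooled_sd

# Two groups with equal spread whose means differ by 2 points
print(round(cohens_d([10, 12, 14, 16, 18], [8, 10, 12, 14, 16]), 2))  # 0.63
```

Under the null hypothesis of equal means, d would be expected to be near zero; its magnitude is the divergence the abstract describes.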
Thompson, Bruce – 1997
Given some consensus that statistical significance tests are broken, misused, or at least have somewhat limited utility, the focus of discussion within the field ought to move beyond additional bashing of statistical significance tests, and toward more constructive suggestions for improved practice. Five suggestions for improved practice are…
Descriptors: Effect Size, Research Methodology, Statistical Significance, Test Use
Kieffer, Kevin M.; Thompson, Bruce – 1999
As the 1994 publication manual of the American Psychological Association emphasized, "p" values are affected by sample size. As a result, it can be helpful to interpret the results of statistical significance tests in a sample size context by conducting so-called "what if" analyses. However, these methods can be inaccurate…
Descriptors: Educational Research, Sample Size, Statistical Significance, Test Interpretation
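The "what if" analyses Kieffer and Thompson describe ask how the same result would have fared at other sample sizes. A minimal sketch of the idea, assuming an independent-samples t test with equal groups and a large-df normal approximation to the t distribution (the article's exact procedure is not given in the snippet above):

```python
import math

def what_if_p(d, n_per_group):
    """Approximate two-tailed p for an independent-samples t test,
    holding the effect size (Cohen's d) fixed while varying the
    per-group n. Uses a normal approximation to the t distribution."""
    t = d * math.sqrt(n_per_group / 2)       # t = d * sqrt(n/2) for equal groups
    return math.erfc(abs(t) / math.sqrt(2))  # two-tailed normal tail area

# The same small effect (d = 0.2) moves from "nonsignificant"
# to "significant" purely as n grows.
for n in (25, 100, 400):
    print(n, round(what_if_p(0.2, n), 4))
```

This is precisely why the abstract treats p values as sample-size-contextual: the effect never changes, only the n does.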

Snyder, Patricia A.; Thompson, Bruce – School Psychology Quarterly, 1998
Reviews some of the criticisms of contemporary practice regarding the use of statistical tests. Presents a brief overview of effect indices. Reviews related practices within seven volumes of "School Psychology Quarterly." Results show that contemporary authors continue to use and interpret statistical significance tests inappropriately. Explores…
Descriptors: Language, Scholarly Journals, School Psychology, Statistical Analysis
Thompson, Bruce; Kieffer, Kevin M. – Research in the Schools, 2000
Proposes and illustrates a new method by which "what if" analyses can be conducted using estimated true population effects. Use of these "what if" methods may prevent authors with large sample sizes from overinterpreting their small effects once they see that the small effects would no longer have been statistically significant with only a…
Descriptors: Effect Size, Research Reports, Sample Size, Statistical Significance
Thompson, Bruce – 1987
The use of planned, or "a priori," and unplanned, or "post hoc," comparisons to isolate differences among means in analysis of variance research is discussed. Planned comparisons typically involve weighting data by sets of "contrasts." Planned comparisons offer more power against Type II errors. In addition, they force…
Descriptors: Analysis of Variance, Research Methodology, Statistical Analysis, Statistical Significance

Vacha-Haase, Tammi; Thompson, Bruce – Measurement and Evaluation in Counseling and Development, 1998
Responds to Biskin's comments (this issue) on the significance test controversy. Highlights areas of agreement (importance of replication evidence, importance of effect sizes) and disagreement (influence of sample size, evaluation of populations vs. samples, significance of Carver's article). Includes further recommendations for reporting research…
Descriptors: Data Interpretation, Hypothesis Testing, Psychological Studies, Sampling

Thompson, Bruce – Educational Researcher, 1996
Reviews practices regarding tests of statistical significance and policies of the American Educational Research Association (AERA). Decades of misuse of statistical significance testing are described, and revised editorial policies to improve practice are highlighted. Correct interpretation of statistical tests, interpretation of effect sizes, and…
Descriptors: Editing, Educational Research, Effect Size, Statistical Significance
Thompson, Bruce – 1990
The use of multiple comparisons in analysis of variance (ANOVA) is discussed. It is argued that experimentwise Type I error rate inflation can be serious and that its influences are often unnoticed in ANOVA applications. Both classical balanced omnibus and orthogonal planned contrast tests inflate experimentwise error to an identifiable maximum.…
Descriptors: Analysis of Variance, Comparative Analysis, Error of Measurement, Hypothesis Testing