Kim, Yukyoum; Lee, J. Lucy – Measurement in Physical Education and Exercise Science, 2019
The purposes of this manuscript are to identify common statistical mistakes in sport management, and to provide scholars with suggestions on how to develop and improve the quality of quantitative research. We have reviewed articles published from 2001 to 2017 in the "Journal of Sport Management," "Sport Management Review,"…
Descriptors: Athletics, Research, Research Problems, Statistical Analysis

Gorard, Stephen; Gorard, Jonathan – International Journal of Social Research Methodology, 2016
This brief paper introduces a new approach to assessing the trustworthiness of research comparisons when expressed numerically. The 'number needed to disturb' a research finding would be the number of counterfactual values that can be added to the smallest arm of any comparison before the difference or 'effect' size disappears, minus the number of…
Descriptors: Statistical Significance, Testing, Sampling, Attrition (Research Studies)

Wester, Kelly L.; Borders, L. DiAnne; Boul, Steven; Horton, Evette – Journal of Counseling & Development, 2013
The purpose of this study was to examine the quality of quantitative articles published in the "Journal of Counseling & Development." Quality concerns arose in regard to omissions of psychometric information of instruments, effect sizes, and statistical power. Type I and II errors were found. Strengths included stated research…
Descriptors: Periodicals, Journal Articles, Counseling, Research

Keaton, Shaughan A.; Bodie, Graham D. – International Journal of Listening, 2013
This article investigates the quality of social scientific listening research that reports numerical data to substantiate claims appearing in the "International Journal of Listening" between 1987 and 2011. Of the 225 published articles, 100 included one or more studies reporting numerical data. We frame our results in terms of eight…
Descriptors: Periodicals, Journal Articles, Listening, Social Science Research

Hinkle, Dennis E.; Oliver, J. Dale – Educational and Psychological Measurement, 1983
In this paper, tables of appropriate sample sizes are presented and discussed, emphasizing that the determination of effect size must precede the determination of sample size. (Author/PN)
Descriptors: Effect Size, Research Methodology, Research Needs, Research Problems

Hedges, Larry V. – Journal of Educational Statistics, 1984
If the quantitative result of a study is observed only when the mean difference is statistically significant, the observed mean difference, variance, and effect size are biased estimators of corresponding population parameters. The exact distribution of sample effect size and the maximum likelihood estimator of effect size are derived. (Author/BW)
Descriptors: Effect Size, Estimation (Mathematics), Maximum Likelihood Statistics, Meta Analysis

Orwin, Robert G. – Journal of Educational Statistics, 1983
Rosenthal's (1979) concept of fail-safe N has thus far been applied to probability levels exclusively. This note introduces a fail-safe N for effect size. (Author)
Descriptors: Effect Size, Meta Analysis, Research Design, Research Problems

Carver, Ronald P. – Journal of Experimental Education, 1993
Four things are recommended to minimize the influence or importance of statistical significance testing. Researchers should say "statistically significant" rather than merely "significant" and could interpret results before reporting p-values. Effect sizes should be reported with measures of sampling error, and replication can be built into the design. (SLD)
Descriptors: Educational Researchers, Effect Size, Error of Measurement, Research Methodology

Reynolds, Sharon; Day, Jim – 1984
Monte Carlo studies explored the sampling characteristics of Cohen's d and three approximations to Cohen's d when used as average effect size measures in meta-analysis. Reviews of 10, 100, and 500 studies (M) were simulated, with degrees of freedom (df) varied in seven steps from 8 to 58. In a two independent groups design, samples were obtained…
Descriptors: Computer Simulation, Effect Size, Estimation (Mathematics), Meta Analysis

Thompson, Bruce – 1992
Three criticisms of overreliance on results from statistical significance tests are noted. It is suggested that: (1) statistical significance tests are often tautological; (2) some uses can involve comparisons that are not completely sensible; and (3) using statistical significance tests to evaluate both methodological assumptions (e.g., the…
Descriptors: Effect Size, Estimation (Mathematics), Evaluation Methods, Regression (Statistics)

Adair, John G.; And Others – 1987
A meta-analysis was conducted on 44 educational studies that used either a (labelled) Hawthorne control group, a manipulation of Hawthorne effects, or a group designed to control for the Hawthorne effect. The sample included published journal articles, ERIC documents or unpublished papers, and dissertations. The studies were coded on 20 variables,…
Descriptors: Control Groups, Educational Research, Effect Size, Experimental Groups

Deal, James E.; Anderson, Edward R. – Journal of Marriage and the Family, 1995
Presentation of quantitative research on the family often suffers from a tendency to interpret findings on a statistical rather than substantive basis. The article advocates the use of data analysis that lends itself to an intuitive understanding of the nature of the findings, the strength of the association, and the import of the result. (JPS)
Descriptors: Data Analysis, Effect Size, Evaluation Methods, Goodness of Fit

Brewer, James K.; Sindelar, Paul T. – Journal of Special Education, 1988
From a priori and post hoc data collection perspectives, this paper describes the interrelations among (1) power, alpha, effect size, and sample size for hypothesis testing; and (2) precision, confidence, and sample size for interval estimation. Implications for special education researchers working with convenient samples of fixed size are…
Descriptors: Data Collection, Disabilities, Educational Research, Effect Size

Bryant, Fred B. – 1984
Because research synthesis enables one to determine either the overall effectiveness of a particular treatment or the relative effectiveness of different types of treatments, it is becoming increasingly popular as a tool in program evaluation. Numerous methodological problems arise, however, when research synthesis is applied to studies conducted…
Descriptors: Educational Research, Effect Size, Evaluation Methods, Intervention

Thompson, Bruce – 1994
Too few researchers understand what statistical significance testing does and does not do, and consequently their results are misinterpreted. This Digest explains the concept of statistical significance testing and discusses the meaning of probabilities, the concept of statistical significance, arguments against significance testing,…
Descriptors: Data Analysis, Data Interpretation, Decision Making, Effect Size