Descriptor
Monte Carlo Methods (10)
Sample Size (5)
Simulation (5)
Comparative Analysis (4)
Effect Size (4)
Research Methodology (4)
Analysis of Variance (3)
Analysis of Covariance (2)
Computer Simulation (2)
Correlation (2)
Probability (2)
Author
McLean, James E. (10)
Barnette, J. Jackson (8)
Wu, Yi-Cheng (2)
Publication Type
Speeches/Meeting Papers (10)
Reports - Evaluative (6)
Reports - Research (4)
Numerical/Quantitative Data (1)
Barnette, J. Jackson; McLean, James E. – 2000
Eta-squared (ES) is often used as a measure of the strength of association of an effect and is commonly associated with effect size. It can also be interpreted as the proportion of total variance accounted for by an independent variable. It is simple to compute and interpret. However, it has one critical weakness cited by several authors (C. Huberty, 1994;…
Descriptors: Effect Size, Monte Carlo Methods, Sampling, Statistical Bias
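As a sketch of the statistic this abstract describes (not the authors' code), eta-squared is the between-groups sum of squares divided by the total sum of squares:

```python
import numpy as np

def eta_squared(*groups):
    """Proportion of total variance accounted for by group membership:
    eta^2 = SS_between / SS_total.
    Note: eta^2 is known to be positively biased in small samples, which
    is likely the kind of weakness the abstract alludes to."""
    all_data = np.concatenate(groups)
    grand_mean = all_data.mean()
    ss_total = ((all_data - grand_mean) ** 2).sum()
    ss_between = sum(len(g) * (np.mean(g) - grand_mean) ** 2 for g in groups)
    return ss_between / ss_total
```

With identical group means the statistic is 0; when all variance lies between groups it is 1, which matches the "proportion of total variance" interpretation.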
Barnette, J. Jackson; McLean, James E. – 1999
The purpose of this study was to determine: (1) the extent to which effect sizes vary by chance; (2) the proportion of standardized effect sizes that achieve or exceed commonly used criteria for small, medium, and large effect sizes; (3) whether standardized effect sizes are random or systematic across numbers of groups and sample sizes; and (4)…
Descriptors: Criteria, Effect Size, Monte Carlo Methods, Prediction
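The kind of simulation this abstract describes can be sketched as follows; the two-group design, sample size, and use of Cohen's d with his conventional 0.2/0.5/0.8 criteria are illustrative assumptions, not the paper's exact setup:

```python
import numpy as np

rng = np.random.default_rng(0)

def chance_effect_sizes(n_per_group=10, n_sims=2000):
    """Simulate standardized effect sizes (Cohen's d) for two groups drawn
    from the SAME normal population, so any nonzero d arises by chance."""
    d_values = []
    for _ in range(n_sims):
        a = rng.standard_normal(n_per_group)
        b = rng.standard_normal(n_per_group)
        pooled_sd = np.sqrt((a.var(ddof=1) + b.var(ddof=1)) / 2)
        d_values.append(abs(a.mean() - b.mean()) / pooled_sd)
    return np.array(d_values)

d = chance_effect_sizes()
# Proportion reaching Cohen's conventional criteria by chance alone
small, medium, large = (d >= 0.2).mean(), (d >= 0.5).mean(), (d >= 0.8).mean()
```

Even under a true null, a nontrivial fraction of simulated effect sizes exceeds the "small" and "medium" criteria at this sample size, which is the point the study investigates.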
Barnette, J. Jackson; McLean, James E. – 1999
Four of the most commonly used multiple comparison procedures were compared, for pairwise comparisons, on their control of per-experiment and experimentwise Type I errors when conducted as protected or unprotected tests. The methods are: (1) Dunn-Bonferroni; (2) Dunn-Sidak; (3) Holm's sequentially rejective; and (4) Tukey's honestly…
Descriptors: Comparative Analysis, Monte Carlo Methods, Research Methodology, Selection
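The procedures named here differ mainly in how they adjust the per-comparison alpha. Three of the four can be sketched directly (Tukey's HSD requires the studentized range distribution and is omitted from this sketch):

```python
import numpy as np

def bonferroni_alpha(alpha, k):
    """Dunn-Bonferroni: each of k comparisons is tested at alpha / k."""
    return alpha / k

def sidak_alpha(alpha, k):
    """Dunn-Sidak: 1 - (1 - alpha)^(1/k); slightly less conservative
    than Bonferroni for independent comparisons."""
    return 1 - (1 - alpha) ** (1 / k)

def holm_reject(p_values, alpha):
    """Holm's sequentially rejective procedure: compare the i-th smallest
    p-value to alpha / (k - i), stopping at the first non-rejection."""
    k = len(p_values)
    order = np.argsort(p_values)
    reject = np.zeros(k, dtype=bool)
    for i, idx in enumerate(order):
        if p_values[idx] <= alpha / (k - i):
            reject[idx] = True
        else:
            break  # all remaining (larger) p-values also fail
    return reject
```

Holm's step-down procedure is uniformly more powerful than Dunn-Bonferroni while controlling the same familywise error rate, which is one reason such comparisons are of interest.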
Barnette, J. Jackson; McLean, James E. – 1998
Conventional wisdom suggests the omnibus F-test needs to be significant before conducting post-hoc pairwise multiple comparisons. However, there is little empirical evidence supporting this practice. Protected tests are conducted only after a significant omnibus F-test while unprotected tests are conducted without regard to the significance of the…
Descriptors: Comparative Analysis, Monte Carlo Methods, Research Methodology, Sample Size
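The protected/unprotected distinction the abstract draws can be illustrated with a small Monte Carlo under a true null hypothesis; the number of groups, sample size, and simulation count below are illustrative, not the study's design:

```python
import numpy as np
from scipy import stats

rng = np.random.default_rng(1)

def pairwise_error_rates(k=4, n=15, alpha=0.05, n_sims=1000):
    """Monte Carlo estimate of experimentwise Type I error for unprotected
    pairwise t tests vs. tests 'protected' by a significant omnibus F,
    with all k groups drawn from the same normal population."""
    unprotected = protected = 0
    for _ in range(n_sims):
        groups = [rng.standard_normal(n) for _ in range(k)]
        any_pair = any(
            stats.ttest_ind(groups[i], groups[j]).pvalue < alpha
            for i in range(k) for j in range(i + 1, k)
        )
        unprotected += any_pair
        f_p = stats.f_oneway(*groups).pvalue
        protected += any_pair and (f_p < alpha)  # gate on significant F
    return unprotected / n_sims, protected / n_sims

u, p = pairwise_error_rates()
```

By construction the protected rate can never exceed the unprotected one; the empirical question the paper addresses is how much protection the omnibus F actually buys.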
Barnette, J. Jackson; McLean, James E. – 2000
The level of standardized effect sizes obtained by chance and the use of significance tests to guard against spuriously high standardized effect sizes were studied. The concept of the "protected effect size" is also introduced. Monte Carlo methods were used to generate data for the study using random normal deviates as the basis for sample means…
Descriptors: Effect Size, Monte Carlo Methods, Simulation, Statistical Significance
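A minimal sketch of what a "protected effect size" could look like for two groups, assuming the protection is an alpha-level t test; the paper's exact formulation may differ:

```python
import numpy as np
from scipy import stats

def protected_effect_size(a, b, alpha=0.05):
    """Report a standardized effect size (Cohen's d) only when an
    alpha-level t test is significant, guarding against spuriously
    high chance effect sizes. Illustrative interpretation of the
    'protected effect size' idea, not the authors' definition."""
    if stats.ttest_ind(a, b).pvalue >= alpha:
        return None  # effect size not 'protected'; do not report
    pooled_sd = np.sqrt((a.var(ddof=1) + b.var(ddof=1)) / 2)
    return (a.mean() - b.mean()) / pooled_sd
```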
Barnette, J. Jackson; McLean, James E. – 1997
J. Barnette and J. McLean (1996) proposed a method of controlling Type I error in pairwise multiple comparisons after a significant omnibus F test. This procedure, called Alpha-Max, is based on a sequential cumulative probability accounting procedure in line with Bonferroni inequality. A missing element in the discussion of Alpha-Max was the…
Descriptors: Analysis of Variance, Comparative Analysis, Monte Carlo Methods, Probability
Barnette, J. Jackson; McLean, James E. – 2000
The probabilities of attaining varying magnitudes of standardized effect sizes by chance and when protected by a 0.05 level statistical test were studied. Monte Carlo procedures were used to generate standardized effect sizes in a one-way analysis of variance situation with 2 through 5, 6, 8, and 10 groups with selected sample sizes from 5 to 500.…
Descriptors: Computer Simulation, Effect Size, Monte Carlo Methods, Probability
Barnette, J. Jackson; McLean, James E. – 1998
Tukey's Honestly Significant Difference (HSD) procedure (J. Tukey, 1953) is probably the most recommended and used procedure for controlling Type I error rate when making multiple pairwise comparisons as follow-ups to a significant omnibus F test. This study compared observed Type I errors with nominal alphas of 0.01, 0.05, and 0.10 compared for…
Descriptors: Comparative Analysis, Error of Measurement, Monte Carlo Methods, Research Methodology
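Tukey's HSD declares a pair of means different when their difference exceeds a critical value based on the studentized range distribution. A sketch of that critical value, assuming balanced groups:

```python
import numpy as np
from scipy import stats

def tukey_hsd_critical(k, n_per_group, ms_within, alpha=0.05):
    """Tukey HSD critical difference for balanced one-way designs:
    q_{alpha}(k, df_error) * sqrt(MS_within / n), where q is a
    studentized range quantile and df_error = k * (n - 1)."""
    df_error = k * (n_per_group - 1)
    q = stats.studentized_range.ppf(1 - alpha, k, df_error)
    return q * np.sqrt(ms_within / n_per_group)
```

Comparing observed rejection rates from simulated null data against this threshold, at nominal alphas such as 0.01, 0.05, and 0.10, is the kind of check the abstract describes.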
Wu, Yi-Cheng; McLean, James E. – 1994
The most widely used procedures to harness the power of a concomitant (nuisance) variable are block designs and analysis of covariance (ANCOVA). This study attempted to provide a scientific foundation for deciding whether to block or covary, and how many blocks to use if blocking is selected. Monte Carlo generated data were…
Descriptors: Analysis of Covariance, Analysis of Variance, Correlation, Decision Making
Wu, Yi-Cheng; McLean, James E. – 1993
By employing a concomitant variable, researchers can reduce error, increase precision, and maximize the power of an experimental design. Blocking and analysis of covariance (ANCOVA) are most often used to harness the power of a concomitant variable. Whether to block or covary, and how many blocks to use if a block design is chosen…
Descriptors: Analysis of Covariance, Analysis of Variance, Computer Simulation, Correlation
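The blocking-versus-covariance trade-off these two abstracts examine can be illustrated with a toy simulation; the correlation, number of blocks, and sample size below are illustrative assumptions:

```python
import numpy as np

rng = np.random.default_rng(2)

def error_variance_reduction(n=200, rho=0.6):
    """Toy comparison of two ways to exploit a concomitant variable x
    correlated (rho) with the outcome y: a covariance adjustment removes
    the linear component, while blocking on quartiles of x removes
    between-block differences."""
    x = rng.standard_normal(n)
    y = rho * x + np.sqrt(1 - rho**2) * rng.standard_normal(n)
    raw_var = y.var(ddof=1)
    # ANCOVA-style: residual variance after regressing y on x
    slope = np.cov(x, y)[0, 1] / x.var(ddof=1)
    ancova_var = (y - slope * x).var(ddof=1)
    # Blocking: four quartile blocks of x, pooled within-block variance
    blocks = np.digitize(x, np.quantile(x, [0.25, 0.5, 0.75]))
    block_var = np.mean([y[blocks == b].var(ddof=1) for b in range(4)])
    return raw_var, ancova_var, block_var
```

Both adjustments shrink the error variance relative to ignoring x; which wins in practice depends on the correlation and the number of blocks, which is exactly the decision these studies investigate.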