Publication Date
In 2025: 0
Since 2024: 0
Since 2021 (last 5 years): 0
Since 2016 (last 10 years): 3
Since 2006 (last 20 years): 5
Descriptor
Hypothesis Testing: 43
Research Design: 43
Sampling: 43
Statistical Analysis: 17
Analysis of Variance: 12
Statistical Significance: 12
Research Methodology: 10
Power (Statistics): 9
Error of Measurement: 8
Reliability: 8
Research Problems: 7
Publication Type
Reports - Research: 24
Journal Articles: 17
Speeches/Meeting Papers: 9
Reports - Evaluative: 4
Reports - Descriptive: 2
Books: 1
Guides - General: 1
Guides - Non-Classroom: 1
Opinion Papers: 1
Reports - General: 1
Education Level
Elementary Secondary Education: 1
Audience
Researchers: 6
Practitioners: 1
Location
Australia: 1
Netherlands: 1
Assessments and Surveys
National Assessment of…: 1
Hedges, Larry V.; Schauer, Jacob M. – Journal of Educational and Behavioral Statistics, 2019
The problem of assessing whether experimental results can be replicated is becoming increasingly important in many areas of science. It is often assumed that assessing replication is straightforward: All one needs to do is repeat the study and see whether the results of the original and replication studies agree. This article shows that the…
Descriptors: Replication (Evaluation), Research Design, Research Methodology, Program Evaluation

Hedges, Larry V.; Schauer, Jacob M. – Grantee Submission, 2019
The problem of assessing whether experimental results can be replicated is becoming increasingly important in many areas of science. It is often assumed that assessing replication is straightforward: All one needs to do is repeat the study and see whether the results of the original and replication studies agree. This article shows that the…
Descriptors: Replication (Evaluation), Research Design, Research Methodology, Program Evaluation

Meltzoff, Julian; Cooper, Harris – APA Books, 2017
Could the research you read be fundamentally flawed? Could critical defects in methodology slip by you undetected? To become informed consumers of research, students need to thoughtfully evaluate the research they read rather than accept it without question. This second edition of a classic text gives students the tools they need to apply critical…
Descriptors: Critical Thinking, Research Methodology, Evaluative Thinking, Critical Reading

Haegele, Justin A.; Hodge, Samuel R. – Physical Educator, 2015
Emerging professionals, particularly senior-level undergraduate and graduate students in kinesiology who have an interest in physical education for individuals with and without disabilities, should understand the basic assumptions of the quantitative research paradigm. Knowledge of basic assumptions is critical for conducting, analyzing, and…
Descriptors: Statistical Analysis, Educational Research, Physical Education, Adapted Physical Education

Rose, Roderick A.; Bowen, Gary L. – Social Work Research, 2009
In cluster-randomized trials (CRTs) of social work interventions, groups are assigned to treatment conditions. Conducting a power analysis ensures that enough groups are sampled for testing hypotheses. The power analysis method and inputs must be informed by the hypotheses, effects to be tested, and the data analysis plans. The authors present a…
Descriptors: Research Design, Intervention, Heuristics, Statistical Analysis
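For orientation only, a minimal sketch of the kind of calculation the abstract refers to, not the authors' own heuristics: approximate power for a two-arm cluster-randomized trial under a normal approximation, with the design effect capturing the intraclass correlation. The function name and all inputs below are hypothetical.

from scipy.stats import norm

def crt_power(delta, n_clusters, cluster_size, icc, alpha=0.05):
    """Approximate power to detect a standardized effect `delta` when
    `n_clusters` clusters (split evenly across two arms) of `cluster_size`
    members are sampled and outcomes have intraclass correlation `icc`."""
    design_effect = 1 + (cluster_size - 1) * icc  # variance inflation from clustering
    se = (4 * design_effect / (n_clusters * cluster_size)) ** 0.5  # SE of the standardized effect
    z_crit = norm.ppf(1 - alpha / 2)  # two-sided critical value
    return norm.cdf(delta / se - z_crit)

# e.g. 40 groups of 25 members, ICC = 0.10, effect size 0.25
print(round(crt_power(0.25, 40, 25, 0.10), 3))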

Pohl, Norval F.; Tsai, San-Yun W. – Educational and Psychological Measurement, 1978
The nature of the approximate chi-square test for hypotheses concerning multinomial probabilities is reviewed. Also, a BASIC computer program for calculating the sample size necessary to control for both Type I and Type II errors in chi-square tests for hypotheses concerning multinomial probabilities is described.
Descriptors: Computer Programs, Hypothesis Testing, Research Design, Sampling
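A hedged Python re-creation of the kind of computation the BASIC program performs, under the usual noncentral chi-square approximation (not necessarily the article's exact algorithm): find the smallest n at which a goodness-of-fit test of hypothesized multinomial probabilities reaches a target power. Probabilities below are illustrative.

from scipy.stats import chi2, ncx2

def multinomial_sample_size(p_null, p_alt, alpha=0.05, power=0.80):
    """Smallest n controlling Type I error at `alpha` and Type II error at
    1 - `power` for H0: probabilities = p_null against the alternative p_alt."""
    df = len(p_null) - 1
    effect = sum((a - o) ** 2 / o for o, a in zip(p_null, p_alt))  # noncentrality per observation
    crit = chi2.ppf(1 - alpha, df)  # critical value of the central chi-square
    n = df + 1
    while ncx2.sf(crit, df, n * effect) < power:  # power under the noncentral chi-square
        n += 1
    return n

print(multinomial_sample_size([0.25, 0.25, 0.25, 0.25], [0.40, 0.20, 0.20, 0.20]))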

Harris, P. – Psychometrika, 1984
A test for multisample sphericity based on the efficient scores criterion is obtained as an alternative to the likelihood ratio test developed by Mendoza. (Author)
Descriptors: Hypothesis Testing, Research Design, Research Problems, Sampling

Katz, Barry M. – Journal of Educational Statistics, 1978
This paper surveys available techniques and introduces an explicit statement of a new statistic to test for equality of correlated proportions in a polychotomous response design. A set of guidelines for the potential user of the techniques is provided. (Author/CTM)
Descriptors: Classification, Hypothesis Testing, Research Design, Sampling

Friedman, Herbert – Educational and Psychological Measurement, 1982
A concise table is presented based on a general measure of magnitude of effect which allows direct determinations of statistical power over a practical range of values and alpha levels. The table also facilitates the setting of the research sample size needed to provide a given degree of power. (Author/CM)
Descriptors: Hypothesis Testing, Power (Statistics), Research Design, Sampling

Luftig, Jeffrey T.; Norton, Willis P. – Journal of Studies in Technical Careers, 1982
This article builds on an earlier discussion of the importance of the Type II error (beta) and power to the hypothesis testing process (CE 511 484), and illustrates the methods by which sample size calculations should be employed so as to improve the research process. (Author/CT)
Descriptors: Hypothesis Testing, Research Design, Research Methodology, Research Problems
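As a worked illustration of how Type II error (beta) and power enter a sample-size calculation, here is the classic normal-approximation formula for a two-group comparison of means; it is a generic sketch, not the authors' worked method.

from math import ceil
from scipy.stats import norm

def n_per_group(delta, alpha=0.05, beta=0.20):
    """Approximate n per group so a two-sided z-test of a standardized mean
    difference `delta` has Type I error `alpha` and Type II error `beta`."""
    z_alpha = norm.ppf(1 - alpha / 2)
    z_beta = norm.ppf(1 - beta)
    return ceil(2 * (z_alpha + z_beta) ** 2 / delta ** 2)

print(n_per_group(0.5))  # roughly 63 per group for a medium standardized effect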

Overall, John E.; Woodward, J. Arthur – Psychometrika, 1974
A procedure for testing heterogeneity of variance is developed which generalizes readily to complex, multi-factor experimental designs. Monte Carlo studies indicate that the Z-variance test statistic presented here yields results equivalent to other familiar tests for heterogeneity of variance in simple one-way designs where comparisons are…
Descriptors: Analysis of Variance, Hypothesis Testing, Research Design, Sampling

Meyer, Donald L. – American Educational Research Journal, 1974
See TM 501 202-3 and EJ 060 883 for related articles. (MLP)
Descriptors: Bayesian Statistics, Hypothesis Testing, Power (Statistics), Research Design

Lunneborg, Clifford E.; Tousignant, James P. – Multivariate Behavioral Research, 1985
This paper illustrates an application of Efron's bootstrap to the repeated measures design. While this approach does not require parametric assumptions, it does utilize distributional information in the sample. By appropriately resampling from study data, the bootstrap may determine accurate sampling distributions for estimators, effects, or…
Descriptors: Hypothesis Testing, Research Design, Research Methodology, Sampling
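A minimal sketch of the resampling idea described, with made-up data: resample subjects (not occasions) with replacement and build the sampling distribution of a repeated-measures mean difference empirically. Variable names, data, and the percentile interval are illustrative assumptions.

import numpy as np

rng = np.random.default_rng(0)
pre = np.array([12.0, 15.0, 11.0, 14.0, 16.0, 13.0, 12.0, 17.0])
post = np.array([14.0, 15.0, 13.0, 17.0, 18.0, 13.0, 15.0, 19.0])
diffs = post - pre  # within-subject change scores

boot = np.empty(5000)
for b in range(boot.size):
    idx = rng.integers(0, diffs.size, diffs.size)  # resample subjects with replacement
    boot[b] = diffs[idx].mean()

print(diffs.mean(), np.percentile(boot, [2.5, 97.5]))  # estimate and 95% percentile interval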

Levy, Kenneth J. – Journal of Experimental Education, 1978
The purpose of this paper is to demonstrate how many more subjects are required to achieve equal power when testing certain hypotheses concerning proportions if the randomized response technique is employed for estimating a population proportion instead of the conventional technique. (Author)
Descriptors: Experimental Groups, Hypothesis Testing, Research Design, Response Style (Tests)
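For intuition, assuming Warner's original randomized response model (the article may treat a different variant): the variance inflation relative to direct questioning shows why many more subjects are needed for equal power. The function and inputs are hypothetical.

def warner_inflation(pi, p):
    """Ratio of the randomized-response estimator's variance to that of a
    direct-question estimator of proportion `pi`, given design probability `p`."""
    direct = pi * (1 - pi)
    extra = p * (1 - p) / (2 * p - 1) ** 2  # added noise from the randomizing device
    return (direct + extra) / direct

# With pi = 0.20 and p = 0.70, roughly this many times more subjects are needed:
print(round(warner_inflation(0.20, 0.70), 1))
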
Kroeker, Leonard P. – 1974
The problem of blocking on a status variable was investigated. The one-way fixed-effects analysis of variance, analysis of covariance, and generalized randomized block designs each treat the blocking problem in a different way. In order to compare these designs, it is necessary to restrict attention to experimental situations in which observations…
Descriptors: Analysis of Covariance, Analysis of Variance, Hypothesis Testing, Research Design