Showing 1 to 15 of 25 results
Peer reviewed
Citkowicz, Martyna; Hedges, Larry V. – Society for Research on Educational Effectiveness, 2013
In some instances, intentionally or not, study designs are such that there is clustering in one group but not in the other. This paper describes methods for computing effect size estimates and their variances when there is clustering in only one group and the analysis has not taken that clustering into account. The authors provide the effect size…
Descriptors: Multivariate Analysis, Effect Size, Sampling, Sample Size
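The adjustment the entry above points at can be illustrated with a small sketch. This is not the authors' estimator, only a first-order, design-effect style approximation to the variance of a standardized mean difference when clustering is present in the treatment arm only; the common cluster size m and intraclass correlation rho are assumed illustrative inputs.

```python
def smd_variance_one_sided_clustering(d, n_t, n_c, m, rho):
    """Approximate Var(d) for a standardized mean difference d when only the
    treatment group is clustered (illustrative sketch, not the paper's formula).

    n_t : treated individuals, arranged in clusters of size m
    n_c : control individuals, sampled independently
    rho : intraclass correlation within treatment clusters
    """
    deff_t = 1.0 + (m - 1.0) * rho          # design effect applied to the treatment arm only
    return deff_t / n_t + 1.0 / n_c + d**2 / (2.0 * (n_t + n_c))

# Example: d = 0.3, 200 treated pupils in classes of 20, 200 independent controls, rho = 0.10
print(smd_variance_one_sided_clustering(0.3, 200, 200, 20, 0.10))
```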
Peer reviewed
Phillips, Gary W. – Applied Measurement in Education, 2015
This article proposes that sampling design effects have potentially huge unrecognized impacts on the results reported by large-scale district and state assessments in the United States. When design effects are unrecognized and unaccounted for, they lead to underestimating the sampling error in item and test statistics. Underestimating the sampling…
Descriptors: State Programs, Sampling, Research Design, Error of Measurement
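The mechanism Phillips describes is easy to see numerically: a Kish-style design effect inflates a naive simple-random-sampling standard error by the square root of DEFF. A minimal sketch, with the cluster size and intraclass correlation as assumed illustrative values:

```python
import math

def design_effect(cluster_size, icc):
    """Kish-style design effect for cluster sampling: DEFF = 1 + (m - 1) * rho."""
    return 1.0 + (cluster_size - 1.0) * icc

def clustered_se(srs_se, cluster_size, icc):
    """Inflate a simple-random-sampling standard error by sqrt(DEFF)."""
    return srs_se * math.sqrt(design_effect(cluster_size, icc))

# Example: naive SE of 0.5 scale-score points, 25 students per school, ICC = 0.20
print(clustered_se(0.5, 25, 0.20))   # about 1.2, so the naive SE understates the error badly
```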
Peer reviewed
Friedman, Herbert – Educational and Psychological Measurement, 1982
A concise table is presented based on a general measure of magnitude of effect which allows direct determinations of statistical power over a practical range of values and alpha levels. The table also facilitates the setting of the research sample size needed to provide a given degree of power. (Author/CM)
Descriptors: Hypothesis Testing, Power (Statistics), Research Design, Sampling
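Friedman's actual table is not reproduced in the abstract; the sketch below only shows the kind of relation it tabulates, computing approximate power (and the per-group n needed for a target power) for a two-sample test from a standardized effect size via a normal approximation.

```python
from math import sqrt
from statistics import NormalDist

def power_two_sample(d, n_per_group, alpha=0.05):
    """Approximate power of a two-sided two-sample test for a standardized
    mean difference d (normal approximation, not the paper's table values)."""
    z_crit = NormalDist().inv_cdf(1 - alpha / 2)
    ncp = d * sqrt(n_per_group / 2.0)        # approximate noncentrality of the test statistic
    return 1 - NormalDist().cdf(z_crit - ncp) + NormalDist().cdf(-z_crit - ncp)

def n_for_power(d, target_power=0.80, alpha=0.05):
    """Smallest per-group n whose approximate power reaches target_power."""
    n = 2
    while power_two_sample(d, n, alpha) < target_power:
        n += 1
    return n

print(power_two_sample(0.5, 64))   # about 0.80 for a medium effect
print(n_for_power(0.5))            # about 64 per group
```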
Peer reviewed
Overall, John E.; Woodward, J. Arthur – Psychometrika, 1974
A procedure for testing heterogeneity of variance is developed which generalizes readily to complex, multi-factor experimental designs. Monte Carlo studies indicate that the Z-variance test statistic presented here yields results equivalent to other familiar tests for heterogeneity of variance in simple one-way designs where comparisons are…
Descriptors: Analysis of Variance, Hypothesis Testing, Research Design, Sampling
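The Z-variance statistic itself is not given in the abstract. As a stand-in for the "familiar tests for heterogeneity of variance" the authors compare against, the sketch below runs two such tests (Bartlett's and Levene's) on a simulated one-way design; the data and group spreads are assumed for illustration.

```python
import numpy as np
from scipy import stats

# Three groups in a one-way design, the third with twice the spread of the others
rng = np.random.default_rng(0)
groups = [rng.normal(0, s, size=30) for s in (1.0, 1.0, 2.0)]

# Bartlett's test (chi-square based) and Levene's test (more robust to non-normality)
print(stats.bartlett(*groups))
print(stats.levene(*groups))
```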
Peer reviewed
Meyer, Donald L. – American Educational Research Journal, 1974
See TM 501 202-3 and EJ 060 883 for related articles. (MLP)
Descriptors: Bayesian Statistics, Hypothesis Testing, Power (Statistics), Research Design
Peer reviewed
Hsu, Tse-Chi; Sebatane, E. Molapi – Journal of Experimental Education, 1979
A Monte Carlo technique was used to investigate the effect of the differences in covariate means among treatment groups on the significance level and the power of the F-test of the analysis of covariance. (Author/GDC)
Descriptors: Analysis of Covariance, Correlation, Research Design, Research Problems
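The abstract describes a Monte Carlo study of how covariate mean differences between treatment groups affect the level and power of the ANCOVA F test. A minimal sketch of that kind of simulation (not the authors' design; the sample sizes, slope, shift, and critical value are assumed values):

```python
import numpy as np

rng = np.random.default_rng(42)

def ancova_f(y, x, g):
    """F statistic for the group effect in a one-covariate ANCOVA,
    computed as a full-versus-reduced regression comparison."""
    n = len(y)
    X_full = np.column_stack([np.ones(n), x, g])
    X_red = np.column_stack([np.ones(n), x])
    rss = lambda X: np.sum((y - X @ np.linalg.lstsq(X, y, rcond=None)[0]) ** 2)
    rss_full, rss_red = rss(X_full), rss(X_red)
    df1, df2 = 1, n - X_full.shape[1]
    return (rss_red - rss_full) / df1 / (rss_full / df2)

def mc_rejection_rate(cov_mean_shift, effect=0.0, n=30, reps=2000, f_crit=4.01):
    """Empirical rejection rate of the ANCOVA F test when the treatment group's
    covariate mean is shifted by cov_mean_shift.  f_crit is roughly the .95
    quantile of F(1, 57); it is assumed here rather than computed."""
    hits = 0
    for _ in range(reps):
        g = np.repeat([0.0, 1.0], n)
        x = rng.normal(cov_mean_shift * g, 1.0)            # covariate means differ by the shift
        y = 0.5 * x + effect * g + rng.normal(0.0, 1.0, 2 * n)
        hits += ancova_f(y, x, g) > f_crit
    return hits / reps

print(mc_rejection_rate(0.0))   # null true, no covariate shift: close to .05
print(mc_rejection_rate(1.0))   # null true, covariate means differ: inspect the empirical level
```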
Cook, Colleen – 2000
Against an historical backdrop, this paper summarizes four uses of intraclass correlation of importance to contemporary researchers in the behavioral sciences. First, it shows how the intraclass correlation coefficient can be used to adjust confidence intervals for statistical significance testing when data are intracorrelated and the independence…
Descriptors: Association (Psychology), Behavioral Sciences, Correlation, Interrater Reliability
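The first use Cook lists, adjusting inferences when data are intracorrelated, starts from an estimate of the intraclass correlation. A small sketch, assuming the one-way random-effects ANOVA estimator ICC(1) and simulated clustered data:

```python
import numpy as np

def icc1(groups):
    """ICC(1) from a one-way random-effects ANOVA:
    (MSB - MSW) / (MSB + (k0 - 1) * MSW), with k0 the average group size;
    groups is a list of 1-D arrays, one per cluster."""
    k = len(groups)
    sizes = np.array([len(g) for g in groups])
    n = sizes.sum()
    grand = np.concatenate(groups).mean()
    msb = sum(m * (g.mean() - grand) ** 2 for m, g in zip(sizes, groups)) / (k - 1)
    msw = sum(((g - g.mean()) ** 2).sum() for g in groups) / (n - k)
    k0 = (n - (sizes ** 2).sum() / n) / (k - 1)            # average group size correction
    return (msb - msw) / (msb + (k0 - 1) * msw)

rng = np.random.default_rng(1)
groups = [rng.normal(u, 1.0, 20) for u in rng.normal(0.0, 0.5, 10)]  # 10 clusters of 20
rho = icc1(groups)
print(rho, 1 + (20 - 1) * rho)   # the ICC and the design effect it implies for this clustering
```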
Peer reviewed
Suen, Hoi K. – Topics in Early Childhood Special Education, 1992
This commentary on EC 603 695 argues that significance testing is a necessary but insufficient condition for positivistic research, that judgment-based assessment and single-subject research are not substitutes for significance testing, and that sampling fluctuation should be considered as one of numerous epistemological concerns in any…
Descriptors: Evaluation Methods, Evaluative Thinking, Research Design, Research Methodology
Giroir, Mary M.; Davidson, Betty M. – 1989
Replication is important to viable scientific inquiry; results that will not replicate or generalize are of very limited value. Statistical significance enables the researcher to reject or not reject the null hypothesis according to the sample results obtained, but statistical significance does not indicate the probability that results will be…
Descriptors: Estimation (Mathematics), Generalizability Theory, Hypothesis Testing, Probability
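The point that a significant result carries no guarantee of replication can be illustrated with a small simulation: among studies that reach p < .05, the chance that an identical follow-up study also does is roughly the design's power, not 1 - p. The effect size, sample size, and alpha below are assumed illustrative values, not figures from the paper.

```python
import numpy as np
from scipy import stats

rng = np.random.default_rng(7)

def replication_rate(d=0.4, n=30, reps=2000, alpha=0.05):
    """Among simulated studies that reach p < alpha, how often does an
    identical follow-up study also reach p < alpha?"""
    first_sig, both_sig = 0, 0
    for _ in range(reps):
        p1 = stats.ttest_ind(rng.normal(d, 1, n), rng.normal(0, 1, n)).pvalue
        if p1 < alpha:
            first_sig += 1
            p2 = stats.ttest_ind(rng.normal(d, 1, n), rng.normal(0, 1, n)).pvalue
            both_sig += p2 < alpha
    return both_sig / first_sig

print(replication_rate())   # roughly the power of the design, well below 1 - p
```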
Hoedt, Kenneth C.; And Others – 1984
Using a Monte Carlo approach, comparison was made between traditional procedures and a multiple linear regression approach to test for differences between values of r sub 1 and r sub 2 when sample data were dependent and independent. For independent sample data, results from a z-test were compared to results from using multiple linear regression.…
Descriptors: Correlation, Hypothesis Testing, Monte Carlo Methods, Multiple Regression Analysis
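The z-test the abstract mentions for independent samples is usually the Fisher r-to-z comparison of two correlations; that reading is assumed in the sketch below (the multiple regression approach studied in the paper is not reproduced).

```python
from math import atanh, sqrt
from statistics import NormalDist

def fisher_z_test(r1, n1, r2, n2):
    """Two-sided z test of H0: rho1 = rho2 for two independent samples,
    using the Fisher r-to-z transformation."""
    z1, z2 = atanh(r1), atanh(r2)
    se = sqrt(1.0 / (n1 - 3) + 1.0 / (n2 - 3))
    z = (z1 - z2) / se
    p = 2 * (1 - NormalDist().cdf(abs(z)))
    return z, p

# Example: r1 = .60 from n1 = 100 versus r2 = .35 from n2 = 120
print(fisher_z_test(0.60, 100, 0.35, 120))
```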
Bennett, Richard P. – 1983
This study examines the relative effectiveness of two means of analyzing the pre-test/post-test control group experimental design. Samples were randomly drawn from a standard normal population and assigned to one of the four cells of the design. A set of experimental differences was induced in the post-test experimental cell. Each case was…
Descriptors: Analysis of Covariance, Comparative Analysis, Hypothesis Testing, Pretests Posttests
Peer reviewed
Da Prato, Robert A. – Topics in Early Childhood Special Education, 1992
This paper argues that judgment-based assessment of data from multiply replicated single-subject or small-N studies should replace normative-based (p < 0.05) assessment of large-N research in the clinical sciences, and asserts that inferential statistics should be abandoned as a method of evaluating clinical research data. (Author/JDD)
Descriptors: Evaluation Methods, Evaluative Thinking, Norms, Research Design
Peer reviewed
Eiting, Mindert H.; Mellenbergh, Gideon J. – Multivariate Behavioral Research, 1980
Using reasonable values for the parameters in both null and alternative hypotheses about covariance matrices, an optimal and feasible combination of number of subjects, significance level, and power of the test was determined for an empirical study of the measurement of musical ability. (Author/BW)
Descriptors: Education Majors, Foreign Countries, Higher Education, Hypothesis Testing
Peer reviewed
Harris, Richard J.; Quade, Dana – Journal of Educational Statistics, 1992
A method is proposed for calculating the sample size needed to achieve acceptable statistical power with a given test. The minimally important difference significant (MIDS) criterion for sample size is explained and supported with recommendations for determining sample size. The MIDS criterion is computationally simple and easy to explain. (SLD)
Descriptors: Equations (Mathematics), Estimation (Mathematics), Experimental Groups, Mathematical Models
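The abstract does not spell out the MIDS computation. Under one plausible reading, the criterion picks the sample size at which an observed difference equal to the minimally important difference would just reach significance; the sketch below implements that reading with a normal approximation and assumed illustrative numbers.

```python
from math import ceil
from statistics import NormalDist

def mids_n_per_group(mid, sd, alpha=0.05):
    """Per-group n at which an observed two-group mean difference equal to the
    minimally important difference (mid) would just reach two-sided significance.
    One plausible reading of the MIDS criterion, not a formula from the paper."""
    z = NormalDist().inv_cdf(1 - alpha / 2)
    return ceil(2 * (z * sd / mid) ** 2)

# Example: the smallest difference that matters is half a standard deviation
print(mids_n_per_group(mid=5.0, sd=10.0))   # about 31 per group
```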
Miller, John K.; Knapp, Thomas R.
The testing of research hypotheses is directly comparable to the dichotomous decision-making of medical diagnosis or jury trials--not-ill/ill or innocent/guilty decisions. There are costs to both kinds of error: type I errors of falsely rejecting a true null hypothesis, and type II errors of falsely rejecting a true alternative hypothesis. It is important…
Descriptors: Bayesian Statistics, Decision Making, Educational Research, Hypothesis Testing
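The cost framing in the entry above can be made concrete with a small expected-cost calculation over the two error types; all rates, priors, and costs below are assumed illustrative values, not figures from the paper.

```python
def expected_error_cost(alpha, beta, p_null_true, cost_type1, cost_type2):
    """Expected cost of a fixed decision rule: a type I error (rejecting a true
    null) occurs at rate alpha when the null is true, a type II error (missing
    a real effect) at rate beta when it is false."""
    return p_null_true * alpha * cost_type1 + (1 - p_null_true) * beta * cost_type2

# Example: alpha = .05, power = .80 (so beta = .20), null true half the time,
# and a missed effect judged four times as costly as a false alarm.
print(expected_error_cost(0.05, 0.20, 0.5, 1.0, 4.0))
```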