Publication Date
In 2025: 0
Since 2024: 0
Since 2021 (last 5 years): 0
Since 2016 (last 10 years): 0
Since 2006 (last 20 years): 6
Descriptor
Effect Size: 18
Evaluation Methods: 18
Sampling: 18
Research Methodology: 12
Computation: 6
Correlation: 5
Data Analysis: 5
Educational Research: 5
Statistical Analysis: 5
Program Effectiveness: 4
Program Evaluation: 4
Audience
Researchers: 1
Laws, Policies, & Programs
Comprehensive Employment and… | 1 |
Citkowicz, Martyna; Hedges, Larry V. – Society for Research on Educational Effectiveness, 2013
In some instances, intentionally or not, study designs are such that there is clustering in one group but not in the other. This paper describes methods for computing effect size estimates and their variances when there is clustering in only one group and the analysis has not taken that clustering into account. The authors provide the effect size…
Descriptors: Multivariate Analysis, Effect Size, Sampling, Sample Size
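The entry above concerns effect sizes when only one arm of a study is clustered. As a rough illustration of the general issue (not the authors' estimator), the sketch below inflates the clustered arm's variance contribution with a Kish-style design effect; the ICC rho and cluster size m are hypothetical inputs.

    # Illustrative sketch only: a naive design-effect correction when just the
    # treatment arm is clustered, not the estimator derived in the paper above.
    import numpy as np

    def smd_one_arm_clustered(y_t, y_c, m, rho):
        """Standardized mean difference with a design-effect inflation
        applied to the clustered (treatment) arm's variance term."""
        n_t, n_c = len(y_t), len(y_c)
        sp = np.sqrt(((n_t - 1) * np.var(y_t, ddof=1) + (n_c - 1) * np.var(y_c, ddof=1))
                     / (n_t + n_c - 2))
        d = (np.mean(y_t) - np.mean(y_c)) / sp
        deff = 1 + (m - 1) * rho                      # Kish design effect for the clustered arm
        var_d = deff / n_t + 1 / n_c + d**2 / (2 * (n_t + n_c))  # rough large-sample variance
        return d, var_d

    rng = np.random.default_rng(0)
    d, v = smd_one_arm_clustered(rng.normal(0.3, 1, 120), rng.normal(0, 1, 120), m=20, rho=0.10)
    print(f"d = {d:.3f}, SE = {v**0.5:.3f}")
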
Drummond, Gordon B.; Vowler, Sarah L. – Advances in Physiology Education, 2012
In this article, the authors talk about variation and how variation between measurements may be reduced if sampling is not random. They also talk about replication and its variants. A replicate is a repeated measurement from the same experimental unit. An experimental unit is the smallest part of an experiment or a study that can be subject to a…
Descriptors: Multivariate Analysis, Classroom Communication, Sampling, Physiology
Itang'ata, Mukaria J. J. – ProQuest LLC, 2013
Often researchers face situations where comparative studies between two or more programs are necessary to make causal inferences for informed policy decision-making. Experimental designs employing randomization provide the strongest evidence for causal inferences. However, many pragmatic and ethical challenges may preclude the use of randomized…
Descriptors: Comparative Analysis, Probability, Statistical Bias, Monte Carlo Methods
Zientek, Linda Reichwein; Ozel, Z. Ebrar Yetkiner; Ozel, Serkan; Allen, Jeff – Career and Technical Education Research, 2012
Confidence intervals (CIs) and effect sizes are essential to encourage meta-analytic thinking and to accumulate research findings. CIs provide a range of plausible values for population parameters with a degree of confidence that the parameter is in that particular interval. CIs also give information about how precise the estimates are. Comparison…
Descriptors: Vocational Education, Effect Size, Intervals, Self Esteem
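As a small illustration of the "range of plausible values" reading described above, here is a minimal sketch of a 95% confidence interval for a two-group mean difference, using simulated data and the classic pooled-variance interval.

    # Minimal sketch: pooled-variance 95% CI for a mean difference (simulated data).
    import numpy as np
    from scipy import stats

    rng = np.random.default_rng(1)
    a = rng.normal(52, 10, 40)   # hypothetical treatment scores
    b = rng.normal(48, 10, 40)   # hypothetical comparison scores

    diff = a.mean() - b.mean()
    df = len(a) + len(b) - 2
    sp2 = ((len(a) - 1) * a.var(ddof=1) + (len(b) - 1) * b.var(ddof=1)) / df  # pooled variance
    se = np.sqrt(sp2 * (1 / len(a) + 1 / len(b)))
    t_crit = stats.t.ppf(0.975, df)
    print(f"difference = {diff:.2f}, "
          f"95% CI = [{diff - t_crit * se:.2f}, {diff + t_crit * se:.2f}]")
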
Schochet, Peter Z.; Puma, Mike; Deke, John – National Center for Education Evaluation and Regional Assistance, 2014
This report summarizes the complex research literature on quantitative methods for assessing how impacts of educational interventions on instructional practices and student learning differ across students, educators, and schools. It also provides technical guidance about the use and interpretation of these methods. The research topics addressed…
Descriptors: Statistical Analysis, Evaluation Methods, Educational Research, Intervention
Kypri, Kypros – Substance Abuse, 2007
The research literature on screening and brief intervention (SBI) for unhealthy alcohol use is large and diverse. More than 50 clinical trials and 9 systematic reviews have been published on SBI in a range of healthcare settings, and via a variety of delivery approaches, in general practice, hospital wards, emergency departments, addiction…
Descriptors: Intervention, Drinking, Research Utilization, Screening Tests
Campo, Stephanie F. – 1988
Three procedures for evaluating the sampling specificity of results are reviewed. These procedures are Tukey's jackknife technique, Efron's bootstrap technique, and cross-validation methods. The jackknife technique uses different subsamples derived from the original total data set to provide empirical estimates of the generalizability of effect…
Descriptors: Comparative Analysis, Effect Size, Estimation (Mathematics), Evaluation Methods
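To make the bootstrap idea concrete, here is a minimal sketch (simulated data, Cohen's d as the effect size) of resampling each group with replacement and examining the spread of the resampled estimates as an empirical check on stability.

    # Minimal sketch of Efron's bootstrap applied to a standardized mean difference.
    import numpy as np

    def cohens_d(x, y):
        nx, ny = len(x), len(y)
        sp = np.sqrt(((nx - 1) * x.var(ddof=1) + (ny - 1) * y.var(ddof=1)) / (nx + ny - 2))
        return (x.mean() - y.mean()) / sp

    rng = np.random.default_rng(2)
    x = rng.normal(0.4, 1, 60)
    y = rng.normal(0.0, 1, 60)

    boot = np.array([
        cohens_d(rng.choice(x, len(x), replace=True),
                 rng.choice(y, len(y), replace=True))
        for _ in range(2000)
    ])
    lo, hi = np.percentile(boot, [2.5, 97.5])
    print(f"d = {cohens_d(x, y):.3f}; bootstrap 95% percentile interval = [{lo:.3f}, {hi:.3f}]")
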

Thompson, Bruce – Educational and Psychological Measurement, 1995
Use of the bootstrap method in a canonical correlation analysis to evaluate the replicability of a study's results is illustrated. More confidence may be vested in research results that replicate. (SLD)
Descriptors: Analysis of Covariance, Correlation, Effect Size, Evaluation Methods
Asraf, Ratnawati Mohd; Brewer, James K. – Australian Educational Researcher, 2004
This article addresses the importance of obtaining a sample of an adequate size for the purpose of testing hypotheses. The logic underlying the requirement for a minimum sample size for hypothesis testing is discussed, as well as the criteria for determining it. Implications for researchers working with convenience samples of a fixed size are also…
Descriptors: Hypothesis Testing, Sample Size, Sampling, Research Methodology
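The usual a priori calculation behind such minimum-sample-size requirements can be sketched as below, using the normal approximation for a two-sided, two-group comparison; alpha, power, and the target effect size are hypothetical planning values.

    # Minimal sketch of an a priori sample-size calculation (normal approximation).
    import math
    from scipy.stats import norm

    def n_per_group(d, alpha=0.05, power=0.80):
        """Approximate n per group to detect standardized difference d
        with a two-sided test at the given alpha and power."""
        z_a = norm.ppf(1 - alpha / 2)
        z_b = norm.ppf(power)
        return math.ceil(2 * (z_a + z_b) ** 2 / d ** 2)

    print(n_per_group(0.5))   # about 63 per group for a "medium" effect
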
Wang, Wen-Chung – Educational and Psychological Measurement, 2004
The Pearson correlation is used to depict effect sizes in the context of item response theory. A multidimensional Rasch model is used to directly estimate the correlation between latent traits. Monte Carlo simulations were conducted to investigate whether the population correlation could be accurately estimated and whether the bootstrap method…
Descriptors: Test Length, Sampling, Effect Size, Correlation
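For comparison with correlation-metric effect sizes like those above (this is not the article's multidimensional Rasch approach), the standard Fisher z interval for an observed Pearson correlation can be sketched as follows, with simulated data.

    # Minimal sketch: Fisher z confidence interval for a Pearson correlation.
    import numpy as np

    rng = np.random.default_rng(3)
    x = rng.normal(size=200)
    y = 0.5 * x + rng.normal(scale=np.sqrt(1 - 0.25), size=200)  # true r = 0.5

    r = np.corrcoef(x, y)[0, 1]
    z = np.arctanh(r)                     # Fisher z transform
    se = 1 / np.sqrt(len(x) - 3)
    lo, hi = np.tanh(z - 1.96 * se), np.tanh(z + 1.96 * se)
    print(f"r = {r:.3f}, 95% CI = [{lo:.3f}, {hi:.3f}]")
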
Thompson, Bruce – 1992
Three criticisms of overreliance on results from statistical significance tests are noted. It is suggested that: (1) statistical significance tests are often tautological; (2) some uses can involve comparisons that are not completely sensible; and (3) using statistical significance tests to evaluate both methodological assumptions (e.g., the…
Descriptors: Effect Size, Estimation (Mathematics), Evaluation Methods, Regression (Statistics)
Kelley, Ken – Educational and Psychological Measurement, 2005
The standardized group mean difference, Cohen's "d", is among the most commonly used and intuitively appealing effect sizes for group comparisons. However, reporting this point estimate alone does not reflect the extent to which sampling error may have led to an obtained value. A confidence interval expresses the uncertainty that exists between…
Descriptors: Intervals, Sampling, Integrity, Effect Size
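A minimal sketch of the point made above, reporting d together with an interval: exact intervals are typically built by inverting the noncentral t distribution, but the sketch below uses the common large-sample variance approximation, with simulated data.

    # Minimal sketch: Cohen's d with an approximate (large-sample) confidence interval.
    import numpy as np

    def d_with_ci(x, y, z=1.96):
        nx, ny = len(x), len(y)
        sp = np.sqrt(((nx - 1) * x.var(ddof=1) + (ny - 1) * y.var(ddof=1)) / (nx + ny - 2))
        d = (x.mean() - y.mean()) / sp
        var_d = (nx + ny) / (nx * ny) + d ** 2 / (2 * (nx + ny))  # common approximation
        se = np.sqrt(var_d)
        return d, d - z * se, d + z * se

    rng = np.random.default_rng(4)
    d, lo, hi = d_with_ci(rng.normal(0.5, 1, 50), rng.normal(0.0, 1, 50))
    print(f"d = {d:.3f}, approximate 95% CI = [{lo:.3f}, {hi:.3f}]")
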

Deal, James E.; Anderson, Edward R. – Journal of Marriage and the Family, 1995
Presentation of quantitative research on the family often suffers from a tendency to interpret findings on a statistical rather than substantive basis. Advocates the use of data analysis that lends itself to an intuitive understanding of the nature of the findings, the strength of the association, and the import of the result. (JPS)
Descriptors: Data Analysis, Effect Size, Evaluation Methods, Goodness of Fit
Bangert-Drowns, Robert L.; Rudner, Lawrence M. – 1991
Meta-analysis is a collection of systematic techniques for resolving apparent contradictions in research findings. Meta-analysts translate results from different studies to a common metric and statistically explore the relations between study characteristics and findings. Since G. Glass first used the term "meta-analysis" in 1976, it has…
Descriptors: Comparative Analysis, Data Collection, Definitions, Educational Research
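The core pooling step the abstract describes, once study results are on a common metric, can be sketched as below with hypothetical effect sizes and variances and a fixed-effect (inverse-variance) model.

    # Minimal sketch of fixed-effect meta-analytic pooling (hypothetical inputs).
    import numpy as np

    d = np.array([0.30, 0.12, 0.45, 0.25])      # hypothetical study effect sizes
    v = np.array([0.02, 0.015, 0.04, 0.01])     # their sampling variances

    w = 1 / v                                    # inverse-variance weights
    d_pooled = np.sum(w * d) / np.sum(w)
    se_pooled = np.sqrt(1 / np.sum(w))
    q = np.sum(w * (d - d_pooled) ** 2)          # Q statistic for heterogeneity
    print(f"pooled d = {d_pooled:.3f} (SE {se_pooled:.3f}), Q = {q:.2f}")
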

Bryant, Edward C.; Rupp, Kalman – Evaluation Review, 1987
Estimates of the Comprehensive Employment and Training Act's net impact on participant earnings, using Continuous Longitudinal Manpower Survey data, were compared to a similar sample from the Current Population Survey. The use of multivariate matching and weighting yielded acceptable results. (GDC)
Descriptors: Adult Education, Effect Size, Employment Programs, Evaluation Methods
Pages: 1 | 2