Publication Date
In 2025: 0
Since 2024: 0
Since 2021 (last 5 years): 3
Since 2016 (last 10 years): 7
Since 2006 (last 20 years): 17
Descriptor
Effect Size: 21
Sampling: 21
Computation: 7
Sample Size: 7
Educational Research: 6
Research Methodology: 6
Statistical Analysis: 6
Correlation: 5
Intervals: 5
Regression (Statistics): 5
Data Analysis: 4
Author
Allen, Jeff: 1
Banjanovic, Erin S.: 1
Bonnett, Douglas G.: 1
Boyd, Brian A.: 1
Braeken, Johan: 1
Brewer, James K.: 1
Brunner, Martin: 1
Calzada, Maria E.: 1
De Corte, Wilfried: 1
Deke, John: 1
Doorey, Nancy A.: 1
Publication Type
Reports - Descriptive: 21
Journal Articles: 16
Speeches/Meeting Papers: 2
Books: 1
Guides - Non-Classroom: 1
Information Analyses: 1
Opinion Papers: 1
Education Level
Elementary Secondary Education: 2
Adult Education: 1
High Schools: 1
Higher Education: 1
Secondary Education: 1
Audience
Researchers: 4
Practitioners: 1
Teachers: 1
Location
Louisiana: 1
Assessments and Surveys
Program for International…: 1
van Laar, Saskia; Braeken, Johan – Practical Assessment, Research & Evaluation, 2021
Despite the sensitivity of fit indices to various model and data characteristics in structural equation modeling, these fit indices are used in a rigid binary fashion as a mere rule of thumb threshold value in a search for model adequacy. Here, we address the behavior and interpretation of the popular Comparative Fit Index (CFI) by stressing that…
Descriptors: Goodness of Fit, Structural Equation Models, Sampling, Sample Size
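The CFI discussed in this entry has a standard closed form based on the chi-square statistics of the fitted model and of the baseline (null) model. A minimal sketch of that textbook definition (the function name and the floor-at-zero convention are illustrative, not taken from the paper):

```python
def cfi(chi2_model: float, df_model: float, chi2_null: float, df_null: float) -> float:
    """Comparative Fit Index from model and baseline chi-square statistics."""
    # Noncentrality estimates, floored at zero by convention
    d_model = max(chi2_model - df_model, 0.0)
    d_null = max(chi2_null - df_null, 0.0)
    if max(d_null, d_model) == 0.0:
        return 1.0  # both models fit at least as well as their df predict
    return 1.0 - d_model / max(d_null, d_model)

# A model chi-square close to its degrees of freedom yields CFI near 1
print(round(cfi(10.0, 5, 200.0, 10), 3))  # → 0.974
```

The point the abstract makes is visible here: CFI depends on the baseline chi-square as much as on the model's, so a fixed cutoff such as .95 means different things for different data.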
Brunner, Martin; Keller, Lena; Stallasch, Sophie E.; Kretschmann, Julia; Hasl, Andrea; Preckel, Franzis; Lüdtke, Oliver; Hedges, Larry V. – Research Synthesis Methods, 2023
Descriptive analyses of socially important or theoretically interesting phenomena and trends are a vital component of research in the behavioral, social, economic, and health sciences. Such analyses yield reliable results when using representative individual participant data (IPD) from studies with complex survey designs, including educational…
Descriptors: Meta Analysis, Surveys, Research Design, Educational Research
Toste, Jessica R.; Logan, Jessica A. R.; Shogren, Karrie A.; Boyd, Brian A. – Exceptional Children, 2023
Group design research studies can provide evidence to draw conclusions about "what works," "for whom," and "under what conditions" in special education. The quality indicators introduced by Gersten and colleagues (2005) have contributed to increased rigor in group design research, which has provided substantial…
Descriptors: Research Design, Educational Research, Special Education, Educational Indicators
Banjanovic, Erin S.; Osborne, Jason W. – Practical Assessment, Research & Evaluation, 2016
Confidence intervals for effect sizes (CIES) provide readers with an estimate of the strength of a reported statistic as well as the relative precision of the point estimate. These statistics offer more information and context than null hypothesis statistic testing. Although confidence intervals have been recommended by scholars for many years,…
Descriptors: Computation, Statistical Analysis, Effect Size, Sampling
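One common way to build a confidence interval for an effect size, and one of the approaches this literature discusses, is the percentile bootstrap. A self-contained sketch for a standardized mean difference (Cohen's d); the data and function names are illustrative, not from the article:

```python
import random
from statistics import mean, stdev

def cohens_d(a, b):
    """Standardized mean difference with a pooled standard deviation."""
    na, nb = len(a), len(b)
    pooled = (((na - 1) * stdev(a) ** 2 + (nb - 1) * stdev(b) ** 2) / (na + nb - 2)) ** 0.5
    return (mean(a) - mean(b)) / pooled

def percentile_boot_ci(a, b, reps=2000, alpha=0.05, seed=1):
    """Percentile bootstrap CI: resample each group with replacement, recompute d."""
    rng = random.Random(seed)
    ds = sorted(
        cohens_d([rng.choice(a) for _ in a], [rng.choice(b) for _ in b])
        for _ in range(reps)
    )
    return ds[int(reps * alpha / 2)], ds[int(reps * (1 - alpha / 2)) - 1]

group1 = [5.1, 4.8, 6.0, 5.5, 5.9, 4.7, 5.3, 5.8, 6.2, 5.0]
group2 = [4.2, 4.5, 3.9, 4.8, 4.1, 4.4, 5.0, 3.8, 4.6, 4.3]
lo, hi = percentile_boot_ci(group1, group2)
print(cohens_d(group1, group2), lo, hi)
```

Reporting the interval (lo, hi) alongside the point estimate conveys the precision that a bare d, or a bare p value, omits.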
Vaske, Jerry J. – Sagamore-Venture, 2019
Data collected from surveys can result in hundreds of variables and thousands of respondents. This implies that time and energy must be devoted to (a) carefully entering the data into a database, (b) running preliminary analyses to identify any problems (e.g., missing data, potential outliers), (c) checking the reliability and validity of the…
Descriptors: Surveys, Theories, Hypothesis Testing, Effect Size
Gorard, Stephen; Gorard, Jonathan – International Journal of Social Research Methodology, 2016
This brief paper introduces a new approach to assessing the trustworthiness of research comparisons when expressed numerically. The 'number needed to disturb' a research finding would be the number of counterfactual values that can be added to the smallest arm of any comparison before the difference or 'effect' size disappears, minus the number of…
Descriptors: Statistical Significance, Testing, Sampling, Attrition (Research Studies)
Pustejovsky, James Eric – AERA Online Paper Repository, 2017
Methods for meta-analyzing single-case designs (SCDs) are needed in order to inform evidence based practice in special education and to draw broader and more defensible generalizations in areas where SCDs comprise a large part of the research base. The most widely used outcomes in single-case research are measures of behavior collected using…
Descriptors: Effect Size, Research Design, Meta Analysis, Observation
Drummond, Gordon B.; Vowler, Sarah L. – Advances in Physiology Education, 2012
In this article, the authors talk about variation and how variation between measurements may be reduced if sampling is not random. They also talk about replication and its variants. A replicate is a repeated measurement from the same experimental unit. An experimental unit is the smallest part of an experiment or a study that can be subject to a…
Descriptors: Multivariate Analysis, Classroom Communication, Sampling, Physiology
Calzada, Maria E.; Gardner, Holly – Mathematics and Computer Education, 2011
The results of a simulation conducted by a research team involving undergraduate and high school students indicate that when data is symmetric the student's "t" confidence interval for a mean is superior to the studied non-parametric bootstrap confidence intervals. When data is skewed and for sample sizes n greater than or equal to 10,…
Descriptors: Intervals, Effect Size, Simulation, Undergraduate Students
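A coverage simulation of the kind the abstract describes can be run in a few lines: draw many samples from a known population and count how often the interval captures the true mean. A sketch for the Student's t interval with symmetric (normal) data; sample size, replication count, and seed are arbitrary choices, not the study's design:

```python
import random

def t_interval(x, tcrit):
    """Student's t interval for the mean: mean ± t * s / sqrt(n)."""
    n = len(x)
    m = sum(x) / n
    s = (sum((v - m) ** 2 for v in x) / (n - 1)) ** 0.5
    half = tcrit * s / n ** 0.5
    return m - half, m + half

def coverage(n=30, reps=2000, mu=0.0, sigma=1.0, tcrit=2.045, seed=7):
    """Fraction of simulated samples whose t interval contains the true mean."""
    rng = random.Random(seed)
    hits = 0
    for _ in range(reps):
        x = [rng.gauss(mu, sigma) for _ in range(n)]
        lo, hi = t_interval(x, tcrit)
        hits += lo <= mu <= hi
    return hits / reps

# tcrit = 2.045 is the 97.5th percentile of t with 29 df, hard-coded because
# the standard library has no t distribution
print(coverage())  # close to the nominal 0.95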
Zientek, Linda Reichwein; Ozel, Z. Ebrar Yetkiner; Ozel, Serkan; Allen, Jeff – Career and Technical Education Research, 2012
Confidence intervals (CIs) and effect sizes are essential to encourage meta-analytic thinking and to accumulate research findings. CIs provide a range of plausible values for population parameters with a degree of confidence that the parameter is in that particular interval. CIs also give information about how precise the estimates are. Comparison…
Descriptors: Vocational Education, Effect Size, Intervals, Self Esteem
Peugh, James L. – Journal of School Psychology, 2010
Collecting data from students within classrooms or schools, and collecting data from students on multiple occasions over time, are two common sampling methods used in educational research that often require multilevel modeling (MLM) data analysis techniques to avoid Type-1 errors. The purpose of this article is to clarify the seven major steps…
Descriptors: Educational Research, Research Methodology, Data Analysis, Academic Achievement
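An early step in multilevel analyses of clustered data (students within classrooms) is typically to quantify how much variation lies between clusters, via the intraclass correlation. A sketch of the one-way ANOVA estimator of ICC(1) for balanced clusters; this is a standard preliminary check, not necessarily one of the article's seven steps:

```python
def icc_anova(groups):
    """ICC(1) via the one-way ANOVA estimator for balanced clusters:
    (MSB - MSW) / (MSB + (k - 1) * MSW), where k is the cluster size."""
    g = len(groups)
    k = len(groups[0])
    n = g * k
    grand = sum(sum(grp) for grp in groups) / n
    ssb = k * sum((sum(grp) / k - grand) ** 2 for grp in groups)
    ssw = sum((v - sum(grp) / k) ** 2 for grp in groups for v in grp)
    msb = ssb / (g - 1)
    msw = ssw / (n - g)
    return (msb - msw) / (msb + (k - 1) * msw)

# Two classrooms of three students: scores cluster strongly within classroom,
# so most variance is between classrooms and the ICC is high
print(icc_anova([[1, 2, 3], [7, 8, 9]]))  # → 53/56 ≈ 0.946
```

A nontrivial ICC signals that treating the observations as independent would understate standard errors, which is exactly the Type-1 error risk the abstract warns about.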
Schochet, Peter Z.; Puma, Mike; Deke, John – National Center for Education Evaluation and Regional Assistance, 2014
This report summarizes the complex research literature on quantitative methods for assessing how impacts of educational interventions on instructional practices and student learning differ across students, educators, and schools. It also provides technical guidance about the use and interpretation of these methods. The research topics addressed…
Descriptors: Statistical Analysis, Evaluation Methods, Educational Research, Intervention
Steyn, H. S., Jr.; Ellis, S. M. – Multivariate Behavioral Research, 2009
When two or more univariate population means are compared, the proportion of variation in the dependent variable accounted for by population group membership is eta-squared. This effect size can be generalized by using multivariate measures of association, based on the multivariate analysis of variance (MANOVA) statistics, to establish whether…
Descriptors: Effect Size, Multivariate Analysis, Computation, Monte Carlo Methods
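The eta-squared the abstract defines is the between-groups sum of squares as a proportion of the total. A minimal sketch of the univariate, one-way case (the multivariate generalization via MANOVA that the article develops is not attempted here):

```python
def eta_squared(groups):
    """Proportion of total variation explained by group membership:
    eta^2 = SS_between / SS_total for a one-way design."""
    allv = [v for grp in groups for v in grp]
    grand = sum(allv) / len(allv)
    ss_total = sum((v - grand) ** 2 for v in allv)
    ss_between = sum(len(grp) * (sum(grp) / len(grp) - grand) ** 2 for grp in groups)
    return ss_between / ss_total

print(eta_squared([[1, 2, 3], [4, 5, 6]]))  # → 13.5 / 17.5 ≈ 0.771
```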
Doorey, Nancy A. – Council of Chief State School Officers, 2011
The work reported in this paper reflects a collaborative effort of many individuals representing multiple organizations. It began during a session at the October 2008 meeting of TILSA when a representative of a member state asked the group if any of their programs had experienced unexpected fluctuations in the annual state assessment scores, and…
Descriptors: Testing, Sampling, Expertise, Testing Programs
Bonnett, Douglas G. – Psychological Methods, 2008
Most psychology journals now require authors to report a sample value of effect size along with hypothesis testing results. The sample effect size value can be misleading because it contains sampling error. Authors often incorrectly interpret the sample effect size as if it were the population effect size. A simple solution to this problem is to…
Descriptors: Intervals, Hypothesis Testing, Effect Size, Sampling
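The "simple solution" the abstract alludes to is to report an interval estimate rather than the bare sample effect size. One standard construction, shown here for a correlation coefficient via Fisher's z transform (an illustrative example, not necessarily the method the article proposes):

```python
from math import atanh, tanh, sqrt

def fisher_z_ci(r, n, z_crit=1.96):
    """Approximate 95% CI for a population correlation via Fisher's z transform."""
    z = atanh(r)            # variance-stabilizing transform of r
    se = 1.0 / sqrt(n - 3)  # approximate standard error on the z scale
    return tanh(z - z_crit * se), tanh(z + z_crit * se)

# A sample r of .50 from n = 50 is consistent with a wide range of population values
lo, hi = fisher_z_ci(0.50, 50)
print(round(lo, 2), round(hi, 2))  # → 0.26 0.68
```

The width of the interval makes the sampling error explicit, which is precisely what interpreting the sample effect size as the population value ignores.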