Showing 1 to 15 of 31 results
Peer reviewed
Direct link
Laura Vernikoff; Emilie Mitescu Reagan – Review of Research in Education, 2024
Quantitative education research is often perceived to be "objective" or "neutral." However, quantitative research has been and continues to be used to perpetuate inequities; these inequities arise as both intended effects and unintended side effects of traditional quantitative research. In this review of the literature, we…
Descriptors: Educational Research, Educational Researchers, Research Methodology, Research Problems
Peer reviewed
PDF on ERIC: Download full text
Pogrow, Stanley – Educational Leadership and Administration: Teaching and Program Development, 2020
It is time to reform the quantitative methods courses in leadership programs -- typically, these are statistics courses with arcane statistics textbooks. There is growing evidence that these "rigorous" scientific methods actually mislead practice because the vast majority of practices found to be "effective" or…
Descriptors: Leadership Training, Educational Change, Statistics, Research Methodology
Peer reviewed
PDF on ERIC: Download full text
Jane E. Miller – Numeracy, 2023
Students often believe that statistical significance is the only determinant of whether a quantitative result is "important." In this paper, I review traditional null hypothesis statistical testing to identify what questions inferential statistics can and cannot answer, including statistical significance, effect size and direction,…
Descriptors: Statistical Significance, Holistic Approach, Statistical Inference, Effect Size
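A minimal sketch, assuming made-up simulated data rather than anything from Miller's article, of the distinction the abstract points to: with a large sample, a tiny difference can be "statistically significant" while the effect size shows it is trivially small.

```python
# Contrast a p value with an effect size on the same simulated data:
# the p value only addresses chance; Cohen's d conveys magnitude and direction.
import numpy as np
from scipy import stats

rng = np.random.default_rng(0)
group_a = rng.normal(loc=100.0, scale=15.0, size=5000)   # large samples make
group_b = rng.normal(loc=101.0, scale=15.0, size=5000)   # tiny effects "significant"

t_stat, p_value = stats.ttest_ind(group_a, group_b)

# Cohen's d with a pooled standard deviation
pooled_sd = np.sqrt((group_a.var(ddof=1) + group_b.var(ddof=1)) / 2)
cohens_d = (group_a.mean() - group_b.mean()) / pooled_sd

print(f"p = {p_value:.4f}")           # likely below .05 at n = 5000 per group
print(f"Cohen's d = {cohens_d:.3f}")  # yet the standardized difference is small
```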
Peer reviewed
Direct link
Slavin, Robert E.; Cheung, Alan C. K. – Journal of Education for Students Placed at Risk, 2017
Large-scale randomized studies provide the best means of evaluating practical, replicable approaches to improving educational outcomes. This article discusses the advantages, problems, and pitfalls of these evaluations, focusing on alternative methods of randomization, recruitment, ensuring high-quality implementation, dealing with attrition, and…
Descriptors: Randomized Controlled Trials, Evaluation Methods, Recruitment, Attrition (Research Studies)
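One alternative randomization scheme of the kind such evaluations weigh is matched-pair assignment. The sketch below is a hypothetical illustration with invented school identifiers and baseline scores, not a procedure taken from Slavin and Cheung's article.

```python
# Matched-pair randomization: schools are paired on a baseline measure,
# then one member of each pair is randomly assigned to treatment,
# which improves balance relative to simple randomization.
import random

random.seed(1)
# Made-up (school_id, baseline_score) pairs
schools = [(f"school_{i:02d}", random.gauss(50, 10)) for i in range(10)]

schools.sort(key=lambda s: s[1])                 # order by baseline score
pairs = [schools[i:i + 2] for i in range(0, len(schools), 2)]

assignment = {}
for pair in pairs:
    treated = random.choice(pair)                # flip a coin within each pair
    for school, _ in pair:
        assignment[school] = "treatment" if school == treated[0] else "control"

for school, arm in sorted(assignment.items()):
    print(school, arm)
```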
Peer reviewed
Direct link
Piccone, Jason E. – Journal of Correctional Education, 2015
The effective evaluation of correctional programs is critically important. However, research in corrections rarely allows for the randomization of offenders to conditions of the study. This limitation compromises internal validity, and thus, causal conclusions can rarely be drawn. Increasingly, researchers are employing propensity score matching…
Descriptors: Correctional Education, Program Evaluation, Probability, Scores
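A simplified propensity score matching sketch, with simulated covariates and a greedy nearest-neighbor match; the data, model, and matching rule are my assumptions, not details from Piccone's article.

```python
# Estimate each case's probability of receiving the program from covariates,
# then greedily match treated cases to the nearest untreated case on that score.
import numpy as np
from sklearn.linear_model import LogisticRegression

rng = np.random.default_rng(0)
n = 400
covariates = rng.normal(size=(n, 3))                       # made-up covariates
p_select = 1 / (1 + np.exp(-(covariates[:, 0] - 1.0)))     # selection depends on X
treated = rng.binomial(1, p_select)

propensity = LogisticRegression().fit(covariates, treated).predict_proba(covariates)[:, 1]

treated_idx = np.flatnonzero(treated == 1)
control_idx = list(np.flatnonzero(treated == 0))

matches = {}
for t in treated_idx:
    j = min(control_idx, key=lambda c: abs(propensity[t] - propensity[c]))
    matches[t] = j
    control_idx.remove(j)          # match without replacement

print(f"Formed {len(matches)} treated-control pairs on the propensity score")
```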
Peer reviewed
Direct link
LeMire, Steven D. – Journal of Statistics Education, 2010
This paper proposes an argument framework for the teaching of null hypothesis statistical testing and its application in support of research. Elements of the Toulmin (1958) model of argument are used to illustrate the use of p values and Type I and Type II error rates in support of claims about statistical parameters and subject matter research…
Descriptors: Hypothesis Testing, Relationship, Statistical Significance, Models
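A minimal simulation of the two error rates the argument framework appeals to; this is my illustration of standard definitions, not a reproduction of LeMire's Toulmin-based model.

```python
# Estimate the Type I error rate (null true) and the Type II error rate
# (a specific alternative true) by repeated t tests on simulated samples.
import numpy as np
from scipy import stats

rng = np.random.default_rng(0)
alpha, n, reps = 0.05, 30, 2000

def rejection_rate(true_diff):
    rejections = 0
    for _ in range(reps):
        a = rng.normal(0, 1, n)
        b = rng.normal(true_diff, 1, n)
        if stats.ttest_ind(a, b).pvalue < alpha:
            rejections += 1
    return rejections / reps

type_i = rejection_rate(0.0)      # null true: every rejection is a Type I error
power = rejection_rate(0.5)       # alternative true: power = 1 - Type II rate
print(f"Estimated Type I rate ~ {type_i:.3f}, Type II rate ~ {1 - power:.3f}")
```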
Peer reviewed
Direct link
Lyons, Paul R. – Journal of European Industrial Training, 2011
Purpose: This paper aims to complement an earlier article (2010) in "Journal of European Industrial Training" in which the description and theory bases of scenistic methods were presented. This paper also offers a description of scenistic methods and information on theory bases. However, the main thrust of this paper is to describe, give suggested…
Descriptors: Employees, Cooperative Learning, On the Job Training, Research Methodology
Peer reviewed
Direct link
Thompson, Bruce – Middle Grades Research Journal, 2009
The present article provides a primer on using effect sizes in research. A small heuristic data set is used in order to make the discussion concrete. Additionally, various admonitions for best practice in reporting and interpreting effect sizes are presented. Among these is the admonition to not use Cohen's benchmarks for "small," "medium," and…
Descriptors: Educational Research, Effect Size, Computation, Research Methodology
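In the same spirit as the article's small heuristic data set, here is a tiny invented example (the numbers are mine, not Thompson's) computing a variance-accounted-for effect size; the article's caution about Cohen's benchmarks applies to how such a value is then interpreted.

```python
# Eta squared for a one-way design: between-groups sum of squares
# as a proportion of the total sum of squares.
import numpy as np

groups = [np.array([3.0, 4.0, 5.0]),
          np.array([5.0, 6.0, 7.0]),
          np.array([7.0, 8.0, 9.0])]

all_scores = np.concatenate(groups)
grand_mean = all_scores.mean()

ss_between = sum(len(g) * (g.mean() - grand_mean) ** 2 for g in groups)
ss_total = ((all_scores - grand_mean) ** 2).sum()

eta_squared = ss_between / ss_total
print(f"eta^2 = {eta_squared:.3f}")   # proportion of variance accounted for
```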
Peer reviewed
Direct link
Zou, Guang Yong – Psychological Methods, 2007
Confidence intervals are widely accepted as a preferred way to present study results. They encompass significance tests and provide an estimate of the magnitude of the effect. However, comparisons of correlations still rely heavily on significance testing. The persistence of this practice is caused primarily by the lack of simple yet accurate…
Descriptors: Intervals, Effect Size, Research Methodology, Correlation
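As background to the abstract's point, a standard Fisher z interval for a single correlation is sketched below; Zou's article goes further and develops intervals for differences between correlations, which this sketch does not attempt to reproduce.

```python
# Approximate 95% confidence interval for a single Pearson r
# via the Fisher z transformation.
import math

def correlation_ci_95(r, n):
    z = 0.5 * math.log((1 + r) / (1 - r))    # Fisher z of r
    se = 1 / math.sqrt(n - 3)                # standard error of z
    lo, hi = z - 1.96 * se, z + 1.96 * se
    return math.tanh(lo), math.tanh(hi)      # back-transform to the r scale

print(correlation_ci_95(r=0.45, n=100))      # roughly (0.28, 0.59)
```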
Newman, Isadore; McNeil, Keith; Fraas, John W. – 2003
Over the last few years, there has been an evolution, although not a linear one, from an emphasis on statistical significance, to an emphasis on effect size, to an emphasis on both of these concepts, to what is believed to be a pragmatic emphasis on replicability. This paper presents two methods of estimating a study's replicability…
Descriptors: Effect Size, Research Methodology, Statistical Significance
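One common internal-replication check is sketched below as a generic illustration only; it is not necessarily one of the two estimation methods the paper presents.

```python
# Split the sample in half and see whether a correlation found in
# one half also appears in the other half.
import numpy as np

rng = np.random.default_rng(0)
x = rng.normal(size=200)
y = 0.4 * x + rng.normal(size=200)        # made-up data with a modest effect

half = len(x) // 2
r_first = np.corrcoef(x[:half], y[:half])[0, 1]
r_second = np.corrcoef(x[half:], y[half:])[0, 1]

print(f"r in first half = {r_first:.2f}, r in second half = {r_second:.2f}")
```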
Peer reviewed
Direct link
Strang, Kenneth David – Practical Assessment, Research & Evaluation, 2009
This paper discusses how a seldom-used statistical procedure, recursive regression (RR), can numerically and graphically illustrate data-driven nonlinear relationships and interaction of variables. This routine falls into the family of exploratory techniques, yet a few interesting features make it a valuable complement to factor analysis and…
Descriptors: Multicultural Education, Computer Software, Multiple Regression Analysis, Multidimensional Scaling
Onwuegbuzie, Anthony J.; Daniel, Larry G. – 2000
The purposes of this paper are to identify common errors made by researchers when dealing with reliability coefficients and to outline best practices for reporting and interpreting reliability coefficients. Common errors that researchers make are: (1) stating that the instruments are reliable; (2) incorrectly interpreting correlation coefficients;…
Descriptors: Correlation, Generalization, Reliability, Research Methodology
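A minimal Cronbach's alpha computation on simulated item scores, offered as my own sketch rather than the authors' code; note that the coefficient describes the reliability of these particular scores, not a fixed property of "the instrument," which is the first error the paper identifies.

```python
# Cronbach's alpha: k/(k-1) * (1 - sum of item variances / variance of totals).
import numpy as np

def cronbach_alpha(item_scores):
    """item_scores: respondents x items matrix of observed scores."""
    item_scores = np.asarray(item_scores, dtype=float)
    k = item_scores.shape[1]
    item_vars = item_scores.var(axis=0, ddof=1).sum()
    total_var = item_scores.sum(axis=1).var(ddof=1)
    return (k / (k - 1)) * (1 - item_vars / total_var)

rng = np.random.default_rng(0)
trait = rng.normal(size=200)                           # latent trait (made up)
items = np.column_stack([trait + rng.normal(scale=0.8, size=200) for _ in range(5)])
print(f"alpha = {cronbach_alpha(items):.2f}")
```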
Hetrick, Sam – 1999
Magnitude of effect (ME) statistics are an important alternative to statistical significance. Why methodologists encourage the use of ME indices as interpretation aids is explained, and different types of ME statistics are discussed. The basic concepts underlying effect size measures are reviewed, and how to compute them from published reports…
Descriptors: Computation, Effect Size, Meta Analysis, Research Methodology
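A hedged illustration, with invented numbers, of recovering magnitude-of-effect statistics from values typically reported in an article: an independent-samples t statistic and the two group sizes. The conversions are standard formulas, not Hetrick's specific worked examples.

```python
# Convert a reported t statistic into Cohen's d and an r-type effect size.
import math

def d_from_t(t, n1, n2):
    """Cohen's d for two independent groups, from t and the group sizes."""
    return t * math.sqrt(1 / n1 + 1 / n2)

def r_from_t(t, df):
    """Point-biserial r equivalent of the same t statistic."""
    return math.sqrt(t ** 2 / (t ** 2 + df))

t, n1, n2 = 2.4, 40, 45                    # hypothetical reported values
print(f"d = {d_from_t(t, n1, n2):.2f}, r = {r_from_t(t, n1 + n2 - 2):.2f}")
```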
Daniel, Larry G.; Onwuegbuzie, Anthony J. – 2000
This paper proposes a new typology for understanding common research errors that expands on the four types of error commonly discussed in the research literature. Examples are presented to illustrate Type I and Type II errors, errors related to the interpretation of statistically significant and nonsignificant results respectively, with attention…
Descriptors: Classification, Error Patterns, Research Methodology, Research Problems
McLean, James E.; Ernest, James M. – Research in the Schools, 1998
Although statistical significance testing as the sole basis for result interpretation is a flawed practice, significance tests can be useful as one of three criteria that must be demonstrated to establish a position empirically. Statistical significance testing provides evidence that an event did not happen by chance but gives no evidence of the…
Descriptors: Educational Research, Hypothesis Testing, Research Methodology, Statistical Significance
Pages: 1  |  2  |  3