Showing all 12 results
Peer reviewed
Deke, John; Finucane, Mariel; Thal, Daniel – National Center for Education Evaluation and Regional Assistance, 2022
BASIE is a framework for interpreting impact estimates from evaluations. It is an alternative to null hypothesis significance testing. This guide walks researchers through the key steps of applying BASIE, including selecting prior evidence, reporting impact estimates, interpreting impact estimates, and conducting sensitivity analyses. The guide…
Descriptors: Bayesian Statistics, Educational Research, Data Interpretation, Hypothesis Testing
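The abstract describes BASIE only at a high level. As one hedged illustration of the kind of calculation such a Bayesian reinterpretation of impact estimates involves, the sketch below combines a hypothetical prior with a hypothetical impact estimate via conjugate normal-normal updating; the numbers are invented and the sketch is not taken from the guide itself.

```python
# Minimal sketch of normal-normal Bayesian updating of an impact estimate:
# combine prior evidence with a new estimate and report the posterior
# probability that the true effect is positive. All values are hypothetical.
from math import sqrt
from statistics import NormalDist

prior_mean, prior_sd = 0.05, 0.10    # prior evidence on the effect size (hypothetical)
estimate, std_error = 0.12, 0.06     # impact estimate and its standard error (hypothetical)

# Precision-weighted combination (conjugate normal prior and likelihood).
prior_prec, data_prec = 1 / prior_sd**2, 1 / std_error**2
post_var = 1 / (prior_prec + data_prec)
post_mean = post_var * (prior_prec * prior_mean + data_prec * estimate)

prob_positive = 1 - NormalDist(post_mean, sqrt(post_var)).cdf(0.0)
print(f"posterior mean = {post_mean:.3f}, P(effect > 0) = {prob_positive:.2f}")
```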
Peer reviewed
Geldhof, G. John; Anthony, Katherine P.; Selig, James P.; Mendez-Luck, Carolyn A. – International Journal of Behavioral Development, 2018
The existence of several accessible sources has led to a proliferation of mediation models in the applied research literature. Most of these sources assume endogenous variables (e.g., M and Y) have normally distributed residuals, precluding models of binary and/or count data. Although a growing body of literature has expanded mediation models to…
Descriptors: Regression (Statistics), Statistical Analysis, Evaluation Methods, Correlation
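As a rough illustration of the situation the abstract describes (a mediation model whose outcome is binary rather than normally distributed), the sketch below estimates the two paths on simulated data: path a by linear regression and path b by logistic regression. Variable names and data are invented, and the latent-scale product a*b is only one of several ways such models are handled in the literature.

```python
# Hedged sketch of product-of-coefficients mediation with a binary outcome.
# Uses statsmodels; the data are simulated placeholders.
import numpy as np
import statsmodels.api as sm

rng = np.random.default_rng(0)
n = 500
x = rng.normal(size=n)                                       # predictor
m = 0.5 * x + rng.normal(size=n)                             # continuous mediator
y = rng.binomial(1, 1 / (1 + np.exp(-(0.7 * m + 0.2 * x))))  # binary outcome

# Path a: X -> M (linear regression).
a = sm.OLS(m, sm.add_constant(x)).fit().params[1]
# Path b: M -> Y given X (logistic regression, coefficient on the logit scale).
b = sm.Logit(y, sm.add_constant(np.column_stack([m, x]))).fit(disp=0).params[1]

# Indirect effect on the logit scale; its interpretation differs from the
# linear case, which is exactly the complication the article addresses.
print("indirect effect (a*b, logit scale):", a * b)
```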
Peer reviewed
An, Weihua; Winship, Christopher – Sociological Methods & Research, 2017
In this article, we review popular parametric models for analyzing panel data and introduce the latest advances in matching methods for panel data analysis. To the extent that the parametric models and the matching methods offer distinct advantages for drawing causal inference, we suggest using both to cross-validate the evidence. We demonstrate…
Descriptors: Causal Models, Statistical Inference, Interviews, Race
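As a hedged sketch of one simple member of the family of matching estimators the article reviews alongside parametric panel models, the code below performs 1:1 nearest-neighbor matching on simulated pretreatment covariates. It illustrates the general idea only, not the specific procedures the authors develop for panel data.

```python
# 1:1 nearest-neighbor matching on pretreatment covariates (simulated data).
import numpy as np

rng = np.random.default_rng(1)
n = 200
covariates = rng.normal(size=(n, 3))              # pretreatment characteristics
treated = rng.binomial(1, 0.4, size=n).astype(bool)
outcome = covariates @ np.array([0.5, -0.3, 0.2]) + 0.4 * treated + rng.normal(size=n)

treated_idx = np.where(treated)[0]
control_idx = np.where(~treated)[0]

# For each treated unit, find the closest control unit in covariate space
# (Euclidean distance), then average the matched outcome differences.
diffs = []
for i in treated_idx:
    dists = np.linalg.norm(covariates[control_idx] - covariates[i], axis=1)
    j = control_idx[np.argmin(dists)]
    diffs.append(outcome[i] - outcome[j])

print("matched estimate of the treatment effect:", np.mean(diffs))
```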
Peer reviewed
Bloom, Howard S.; Spybrook, Jessaca – Journal of Research on Educational Effectiveness, 2017
Multisite trials, which are being used with increasing frequency in education and evaluation research, provide an exciting opportunity for learning about how the effects of interventions or programs are distributed across sites. In particular, these studies can produce rigorous estimates of a cross-site mean effect of program assignment…
Descriptors: Program Effectiveness, Program Evaluation, Sample Size, Evaluation Research
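To make the notions of a cross-site mean effect and cross-site variation concrete, the sketch below aggregates hypothetical site-level impact estimates with a precision-weighted mean and a method-of-moments estimate of between-site variance. The numbers are invented and the estimator is a generic stand-in, not the authors' specific models.

```python
# Hedged sketch: summarize site-level impact estimates from a multisite trial.
import numpy as np

effects = np.array([0.10, 0.25, -0.05, 0.18, 0.07])  # site impact estimates (hypothetical)
ses = np.array([0.08, 0.10, 0.09, 0.07, 0.12])       # their standard errors (hypothetical)

w = 1 / ses**2
fixed_mean = np.sum(w * effects) / np.sum(w)          # precision-weighted cross-site mean

# DerSimonian-Laird-style between-site variance (truncated at zero).
q = np.sum(w * (effects - fixed_mean) ** 2)
df = len(effects) - 1
c = np.sum(w) - np.sum(w**2) / np.sum(w)
tau2 = max(0.0, (q - df) / c)

print(f"cross-site mean = {fixed_mean:.3f}, between-site SD = {tau2**0.5:.3f}")
```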
Peer reviewed
Zientek, Linda Reichwein; Ozel, Z. Ebrar Yetkiner; Ozel, Serkan; Allen, Jeff – Career and Technical Education Research, 2012
Confidence intervals (CIs) and effect sizes are essential to encourage meta-analytic thinking and to accumulate research findings. CIs provide a range of plausible values for population parameters with a degree of confidence that the parameter is in that particular interval. CIs also give information about how precise the estimates are. Comparison…
Descriptors: Vocational Education, Effect Size, Intervals, Self Esteem
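As a small worked illustration of the two quantities the abstract emphasizes, the sketch below computes Cohen's d and a 95% confidence interval for a mean difference from simulated group data; the groups and values are invented.

```python
# Effect size (Cohen's d) and 95% CI for a mean difference, simulated data.
import numpy as np
from scipy import stats

rng = np.random.default_rng(2)
group_a = rng.normal(0.4, 1.0, size=60)
group_b = rng.normal(0.0, 1.0, size=60)

diff = group_a.mean() - group_b.mean()
pooled_sd = np.sqrt(((len(group_a) - 1) * group_a.var(ddof=1) +
                     (len(group_b) - 1) * group_b.var(ddof=1)) /
                    (len(group_a) + len(group_b) - 2))
cohens_d = diff / pooled_sd

se_diff = pooled_sd * np.sqrt(1 / len(group_a) + 1 / len(group_b))
df = len(group_a) + len(group_b) - 2
t_crit = stats.t.ppf(0.975, df)
ci = (diff - t_crit * se_diff, diff + t_crit * se_diff)

print(f"d = {cohens_d:.2f}, 95% CI for the mean difference = ({ci[0]:.2f}, {ci[1]:.2f})")
```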
Peer reviewed
What Works Clearinghouse, 2014
This "What Works Clearinghouse Procedures and Standards Handbook (Version 3.0)" provides a detailed description of the standards and procedures of the What Works Clearinghouse (WWC). The remaining chapters of this Handbook are organized to take the reader through the basic steps that the WWC uses to develop a review protocol, identify…
Descriptors: Educational Research, Guides, Intervention, Classification
Peer reviewed
Byrd, Jimmy K. – Educational Administration Quarterly, 2007
Purpose: The purpose of this study was to review research published by Educational Administration Quarterly (EAQ) during the past 10 years to determine if confidence intervals and effect sizes were being reported as recommended by the American Psychological Association (APA) Publication Manual. Research Design: The author examined 49 volumes of…
Descriptors: Research Design, Intervals, Statistical Inference, Effect Size
Peer reviewed
Suen, Hoi K. – Topics in Early Childhood Special Education, 1992
This commentary on EC 603 695 argues that significance testing is a necessary but insufficient condition for positivistic research, that judgment-based assessment and single-subject research are not substitutes for significance testing, and that sampling fluctuation should be considered as one of numerous epistemological concerns in any…
Descriptors: Evaluation Methods, Evaluative Thinking, Research Design, Research Methodology
Peer reviewed
Da Prato, Robert A. – Topics in Early Childhood Special Education, 1992
This paper argues that judgment-based assessment of data from multiply replicated single-subject or small-N studies should replace normative-based (p < 0.05) assessment of large-N research in the clinical sciences, and asserts that inferential statistics should be abandoned as a method of evaluating clinical research data. (Author/JDD)
Descriptors: Evaluation Methods, Evaluative Thinking, Norms, Research Design
Peer reviewed
McCaffrey, Daniel F.; Ridgeway, Greg; Morral, Andrew R. – Psychological Methods, 2004
Causal effect modeling with naturalistic rather than experimental data is challenging. In observational studies participants in different treatment conditions may also differ on pretreatment characteristics that influence outcomes. Propensity score methods can theoretically eliminate these confounds for all observed covariates, but accurate…
Descriptors: Substance Abuse, Causal Models, Adolescents, Statistical Analysis
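As a hedged sketch of the general propensity-score idea described in the abstract, the code below estimates propensity scores with a plain logistic regression on simulated data and applies inverse-odds weighting to estimate the effect on the treated. The logistic estimator here is a simple stand-in rather than the specific approach the article develops.

```python
# Propensity-score weighting sketch: model treatment from pretreatment
# covariates, then weight controls by ps/(1-ps) to estimate the effect on
# the treated. Data are simulated placeholders.
import numpy as np
import statsmodels.api as sm

rng = np.random.default_rng(3)
n = 1000
covs = rng.normal(size=(n, 2))                          # pretreatment characteristics
p_treat = 1 / (1 + np.exp(-(0.8 * covs[:, 0] - 0.5 * covs[:, 1])))
treated = rng.binomial(1, p_treat)
outcome = covs[:, 0] + 0.3 * treated + rng.normal(size=n)

ps = sm.Logit(treated, sm.add_constant(covs)).fit(disp=0).predict()

# Weight controls to resemble the treated group (ATT weighting).
w = np.where(treated == 1, 1.0, ps / (1 - ps))
att = (outcome[treated == 1].mean()
       - np.average(outcome[treated == 0], weights=w[treated == 0]))
print("weighted estimate of the effect on the treated:", att)
```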
Blumberg, Carol Joyce – 1989
A subset of Statistical Process Control (SPC) methodology known as Control Charting is introduced. SPC methodology is a collection of graphical and inferential statistics techniques used to study the progress of phenomena over time. The types of control charts covered are the X-bar (mean), R (range), X (individual observations), MR (moving…
Descriptors: Charts, Data Analysis, Educational Research, Evaluation Methods
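As a small illustration of the control-charting idea the paper introduces, the sketch below computes X-bar and R chart limits for simulated subgroups of size 5 using the standard tabulated constants; the measurements are invented.

```python
# X-bar and R control-chart limits for subgroups of size 5 (simulated data).
import numpy as np

rng = np.random.default_rng(4)
subgroups = rng.normal(50, 2, size=(20, 5))          # 20 subgroups of 5 observations

xbar = subgroups.mean(axis=1)                        # subgroup means
r = subgroups.max(axis=1) - subgroups.min(axis=1)    # subgroup ranges
grand_mean, rbar = xbar.mean(), r.mean()

A2, D3, D4 = 0.577, 0.0, 2.114                       # tabulated constants for n = 5
xbar_limits = (grand_mean - A2 * rbar, grand_mean + A2 * rbar)
r_limits = (D3 * rbar, D4 * rbar)

print("X-bar chart limits:", xbar_limits)
print("R chart limits:", r_limits)
```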
Peer reviewed
Ottenbacher, Kenneth J. – Journal of Special Education, 1990
The agreement between visual analysis and the results of the split-middle method of trend estimation was examined using a set of 24 stimulus graphs and 30 raters. Results revealed poor agreement between the two methods, and low sensitivity, specificity, and predictive ability for visual analysis in relation to statistical inferences. (JDD)
Descriptors: Elementary Secondary Education, Estimation (Mathematics), Evaluation Methods, Graphs
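For readers unfamiliar with the split-middle method that the visual analyses were compared against, the sketch below computes a split-middle trend line from a hypothetical single-subject data series: split the series in half, take the median session and median value within each half, and draw the line through those two points.

```python
# Split-middle (celeration line) trend estimation on a hypothetical series.
import numpy as np

sessions = np.arange(1, 13)
values = np.array([3, 4, 4, 6, 5, 7, 8, 7, 9, 10, 9, 11], dtype=float)

half = len(sessions) // 2
x1, y1 = np.median(sessions[:half]), np.median(values[:half])
x2, y2 = np.median(sessions[half:]), np.median(values[half:])

slope = (y2 - y1) / (x2 - x1)                        # trend per session
intercept = y1 - slope * x1
print(f"split-middle trend: value ≈ {intercept:.2f} + {slope:.2f} * session")
```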