Deke, John; Finucane, Mariel; Thal, Daniel – National Center for Education Evaluation and Regional Assistance, 2022
BASIE (BAyeSian Interpretation of Estimates) is a framework for interpreting impact estimates from evaluations and an alternative to null hypothesis significance testing. This guide walks researchers through the key steps of applying BASIE: selecting prior evidence, reporting impact estimates, interpreting impact estimates, and conducting sensitivity analyses. The guide…
Descriptors: Bayesian Statistics, Educational Research, Data Interpretation, Hypothesis Testing
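The core updating step behind a framework like BASIE can be sketched with a normal-normal conjugate model. Every number below (prior, impact estimate, standard error) is a hypothetical illustration, not taken from the guide:

```python
from math import sqrt
from statistics import NormalDist

# Hypothetical prior on the true effect (in SD units), drawn from
# prior evidence such as earlier evaluations of similar interventions.
prior_mean, prior_sd = 0.05, 0.10
# Hypothetical impact estimate and standard error from the new study.
estimate, se = 0.15, 0.08

# Normal-normal conjugate update: a precision-weighted average.
prior_prec = 1 / prior_sd ** 2
data_prec = 1 / se ** 2
post_var = 1 / (prior_prec + data_prec)
post_mean = post_var * (prior_prec * prior_mean + data_prec * estimate)
post_sd = sqrt(post_var)

# Probability the true effect is positive, given prior and data.
p_positive = 1 - NormalDist(post_mean, post_sd).cdf(0)
print(round(post_mean, 3), round(post_sd, 3), round(p_positive, 2))
```

The posterior mean shrinks the new estimate toward the prior evidence, and the posterior probability of a positive effect, rather than a p-value, becomes the headline quantity.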
Swank, Jacqueline M.; Mullen, Patrick R. – Measurement and Evaluation in Counseling and Development, 2017
The article serves as a guide for researchers developing evidence of validity, specifically construct validity, using bivariate correlations. The authors outline the steps for calculating and interpreting bivariate correlations, provide an illustrative example, and discuss the implications.
Descriptors: Correlation, Construct Validity, Guidelines, Data Interpretation
Porter, Kristin E. – Journal of Research on Educational Effectiveness, 2018
Researchers are often interested in testing the effectiveness of an intervention on multiple outcomes, for multiple subgroups, at multiple points in time, or across multiple treatment groups. The resulting multiplicity of statistical hypothesis tests can lead to spurious findings of effects. Multiple testing procedures (MTPs) are statistical…
Descriptors: Statistical Analysis, Program Effectiveness, Intervention, Hypothesis Testing
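Two common MTPs can be sketched in a few lines. The p-values below are hypothetical and already sorted in ascending order, as the Benjamini-Hochberg step-up procedure requires:

```python
# Hypothetical p-values from testing one intervention on five outcomes.
pvals = [0.003, 0.012, 0.021, 0.040, 0.260]
alpha = 0.05
m = len(pvals)

# Bonferroni: reject when p <= alpha / m (controls familywise error rate).
bonferroni = [p <= alpha / m for p in pvals]

# Benjamini-Hochberg: find the largest k with p_(k) <= (k/m) * alpha and
# reject hypotheses 1..k (controls the false discovery rate).
k = max((i + 1 for i, p in enumerate(pvals) if p <= (i + 1) / m * alpha),
        default=0)
bh = [i < k for i in range(m)]

print(bonferroni)
print(bh)
```

Bonferroni is the more conservative choice; Benjamini-Hochberg typically rejects more hypotheses at the cost of controlling a less stringent error rate, as the contrast between the two result lists shows.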
Porter, Kristin E. – Grantee Submission, 2017
Researchers are often interested in testing the effectiveness of an intervention on multiple outcomes, for multiple subgroups, at multiple points in time, or across multiple treatment groups. The resulting multiplicity of statistical hypothesis tests can lead to spurious findings of effects. Multiple testing procedures (MTPs) are statistical…
Descriptors: Statistical Analysis, Program Effectiveness, Intervention, Hypothesis Testing
Porter, Kristin E. – MDRC, 2016
In education research and in many other fields, researchers are often interested in testing the effectiveness of an intervention on multiple outcomes, for multiple subgroups, at multiple points in time, or across multiple treatment groups. The resulting multiplicity of statistical hypothesis tests can lead to spurious findings of effects. Multiple…
Descriptors: Statistical Analysis, Program Effectiveness, Intervention, Hypothesis Testing
Creswell, John W. – Pearson Education, Inc., 2015
"Educational Research: Planning, Conducting, and Evaluating Quantitative and Qualitative Research" offers a truly balanced, inclusive, and integrated overview of the processes involved in educational research. This text first examines the general steps in the research process and then details the procedures for conducting specific types…
Descriptors: Educational Research, Qualitative Research, Statistical Analysis, Research Methodology
Teacher Incentive Fund, US Department of Education, 2016
The U.S. Department of Education (ED) expects all Teacher Incentive Fund (TIF) grantees to conduct an evaluation of their programs. Experience with earlier rounds of TIF grants has shown that evaluations can provide valuable information for managing and improving TIF-supported activities, as well as evidence that these activities have had a…
Descriptors: Program Evaluation, Questioning Techniques, Qualitative Research, Statistical Analysis
Huerta, Juan Carlos; Hansen, Michele J. – Learning Communities: Research & Practice, 2013
Good assessment is part of all good learning communities, and this article provides a useful set of best practices for learning community assessment planning: (1) articulating agreed-upon learning community program goals; (2) identifying the purpose of assessment (e.g., summative or formative); (3) employing qualitative and quantitative assessment…
Descriptors: Communities of Practice, Evaluation Methods, Best Practices, Planning
Peace Corps, 2014
Data-driven decision making is part of our everyday lives. For example, we examine our children's immunization charts to see which vaccinations they have received and which are missing or expired. We use information, like a chart, to decide what to do and what not to do. Data-driven decision making…
Descriptors: Data, Decision Making, Volunteers, Training
What Works Clearinghouse, 2012
This document provides guidance about how to describe studies and report their findings in a way that is clear, complete, and transparent. This document does not include information about how studies are judged against What Works Clearinghouse evidence standards. For information about What Works Clearinghouse evidence standards, please refer to…
Descriptors: Intervention, Research Reports, Educational Research, Guides
Gill, Matt; Outka, Janeen; McCorkle, Mary – South Dakota Department of Education, 2015
Student growth is one of two essential components of South Dakota's Teacher and Principal Effectiveness Systems. In the state systems, student growth is defined as a positive change in student achievement between two or more points in time. "The South Dakota SLO Handbook" provides support and guidance to public schools and school…
Descriptors: Guides, Public Schools, School Districts, Statistical Analysis
What Works Clearinghouse, 2014
This "What Works Clearinghouse Procedures and Standards Handbook (Version 3.0)" provides a detailed description of the standards and procedures of the What Works Clearinghouse (WWC). The remaining chapters of this Handbook are organized to take the reader through the basic steps that the WWC uses to develop a review protocol, identify…
Descriptors: Educational Research, Guides, Intervention, Classification
What Works Clearinghouse, 2011
With its critical assessments of scientific evidence on the effectiveness of education programs, policies, and practices (referred to as "interventions"), and a range of products summarizing this evidence, the What Works Clearinghouse (WWC) is an important part of the Institute of Education Sciences' strategy to use rigorous and relevant…
Descriptors: Standards, Access to Information, Information Management, Guides

Gellen, M. I.; Hoffman, R. A. – Education, 1984
Discusses the use of the chi-square statistic in educational evaluation, on the premise that the availability of calculators and computers does not absolve educators of the responsibility to understand this evaluative statistical technique. Illustrates its applicability to an action research study in a middle school setting. (NEC)
Descriptors: Educational Assessment, Elementary Secondary Education, Evaluation Methods, Statistical Analysis
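A chi-square test of independence of the kind applied in such an action research study can be computed by hand; the 2x2 counts below are hypothetical:

```python
# Hypothetical 2x2 table from a middle-school action research study:
# rows = instructional method, columns = met / did not meet a benchmark.
observed = [[30, 20],   # method A
            [18, 32]]   # method B

row_totals = [sum(row) for row in observed]
col_totals = [sum(col) for col in zip(*observed)]
n = sum(row_totals)

# Expected count under independence: (row total * column total) / n.
chi_sq = 0.0
for i, row in enumerate(observed):
    for j, obs in enumerate(row):
        expected = row_totals[i] * col_totals[j] / n
        chi_sq += (obs - expected) ** 2 / expected

print(round(chi_sq, 2))
```

Here chi-square is about 5.77, which exceeds 3.841, the .05 critical value for one degree of freedom, so the null hypothesis of independence between method and outcome would be rejected.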
Harris, Karen R. – Diagnostique, 1983
Three procedures that help to determine the degree of confidence we can place in any raw score are discussed: computing the standard error of measurement, computing the estimated true score, and constructing confidence intervals. These three procedures are easy to use and require only elementary mathematical skills. (Author/CL)
Descriptors: Disabilities, Elementary Secondary Education, Evaluation Methods, Statistical Analysis
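The three procedures Harris describes can be sketched directly; the reliability coefficient, standard deviation, mean, and raw score below are hypothetical:

```python
from math import sqrt

# Hypothetical test parameters, as reported in a test manual.
reliability = 0.91   # e.g., a KR-20 or Cronbach's alpha coefficient
sd = 15.0            # standard deviation of the norm group's scores
mean = 100.0         # mean of the norm group's scores
raw_score = 112.0    # one examinee's obtained score

# 1. Standard error of measurement.
sem = sd * sqrt(1 - reliability)

# 2. Estimated true score: regress the raw score toward the mean.
true_score = mean + reliability * (raw_score - mean)

# 3. 95% confidence interval around the estimated true score.
lower = true_score - 1.96 * sem
upper = true_score + 1.96 * sem
print(round(sem, 2), round(true_score, 1), (round(lower, 1), round(upper, 1)))
```

Conventions differ across texts: some center the confidence interval on the raw score rather than the estimated true score, so the choice above is one common variant, not the only one.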