Publication Date
In 2025 | 0 |
Since 2024 | 1 |
Since 2021 (last 5 years) | 4 |
Since 2016 (last 10 years) | 14 |
Since 2006 (last 20 years) | 20 |
Descriptor
Program Evaluation | 20 |
Randomized Controlled Trials | 20 |
Research Design | 20 |
Intervention | 11 |
Educational Research | 9 |
Research Methodology | 9 |
Program Effectiveness | 6 |
Sample Size | 5 |
Statistical Analysis | 5 |
Evaluation Methods | 4 |
Quasiexperimental Design | 4 |
Author
Deke, John | 2 |
Hedges, Larry V. | 2 |
Kautz, Tim | 2 |
Schauer, Jacob | 2 |
Wei, Thomas | 2 |
A. Brooks Bowden | 1 |
A. Krishnamachari | 1 |
Anglin, Kylie L. | 1 |
Annik M. Sorhaindo | 1 |
Bell, Stephen H. | 1 |
Blakeney, Aly | 1 |
Publication Type
Reports - Research | 10 |
Journal Articles | 8 |
Reports - Descriptive | 4 |
Reports - Evaluative | 4 |
Guides - Non-Classroom | 3 |
Books | 1 |
Non-Print Media | 1 |
Reference Materials -… | 1 |
Speeches/Meeting Papers | 1 |
Education Level
Early Childhood Education | 2 |
Elementary Education | 2 |
Secondary Education | 2 |
Elementary Secondary Education | 1 |
Grade 3 | 1 |
Junior High Schools | 1 |
Middle Schools | 1 |
Primary Education | 1 |
Audience
Policymakers | 1 |
Researchers | 1 |
Huey T. Chen; Liliana Morosanu; Victor H. Chen – Asia Pacific Journal of Education, 2024
The Campbellian validity typology has been used as a foundation for outcome evaluation and for developing evidence-based interventions for decades. As such, randomized controlled trials were preferred for outcome evaluation. However, some evaluators disagree with the validity typology's argument that randomized controlled trials are the best design…
Descriptors: Evaluation Methods, Systems Approach, Intervention, Evidence Based Practice
A. Brooks Bowden – AERA Open, 2023
Although experimental evaluations have been labeled the "gold standard" of evidence for policy (U.S. Department of Education, 2003), evaluations without an analysis of costs are not sufficient for policymaking (Monk, 1995; Ross et al., 2007). Funding organizations now require cost-effectiveness data in most evaluations of effects. Yet,…
Descriptors: Cost Effectiveness, Program Evaluation, Economics, Educational Finance
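A minimal sketch of the cost-effectiveness comparison this entry alludes to: dividing a program's incremental cost by its incremental effect lets policymakers compare programs whose impact estimates alone look similar. All program names and figures below are hypothetical.

```python
# Hypothetical cost-effectiveness comparison: dollars per standard
# deviation of achievement gain. All figures are invented.

programs = {
    # name: (cost per pupil in dollars, effect size in SD units)
    "Program A": (600.0, 0.15),
    "Program B": (1500.0, 0.20),
}

for name, (cost, effect) in programs.items():
    # Cost-effectiveness ratio: cost to buy one SD of impact.
    ce_ratio = cost / effect
    print(f"{name}: ${ce_ratio:,.0f} per standard deviation of impact")
```

Here Program B has the larger impact, but Program A delivers each standard deviation of impact at lower cost, which is exactly the distinction an effects-only evaluation cannot make.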
Deke, John; Wei, Thomas; Kautz, Tim – Journal of Research on Educational Effectiveness, 2021
Evaluators of education interventions are increasingly designing studies to detect impacts much smaller than the 0.20 standard deviations that Cohen characterized as "small." While the need to detect smaller impacts is based on compelling arguments that such impacts are substantively meaningful, the drive to detect smaller impacts may…
Descriptors: Intervention, Program Evaluation, Sample Size, Randomized Controlled Trials
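A minimal sketch of the minimum detectable effect size (MDES) calculation behind this concern, assuming an individually randomized trial, 80% power, and a two-sided 5% test; the sample sizes are illustrative, not taken from the study.

```python
from math import sqrt
from statistics import NormalDist

def mdes(n, p_treat=0.5, r2=0.0, alpha=0.05, power=0.80):
    """Minimum detectable effect size (in SD units) for an individually
    randomized trial with n total participants, a fraction p_treat assigned
    to treatment, and covariates explaining r2 of the outcome variance."""
    z = NormalDist()
    multiplier = z.inv_cdf(1 - alpha / 2) + z.inv_cdf(power)  # ~2.80
    return multiplier * sqrt((1 - r2) / (p_treat * (1 - p_treat) * n))

# Sample size scales with the inverse square of the target effect:
# halving the MDES roughly quadruples the required sample.
print(round(mdes(800), 3))    # ~0.198 SD, near Cohen's "small" benchmark
print(round(mdes(3200), 3))   # ~0.099 SD needs four times the sample
```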
K. L. Anglin; A. Krishnamachari; V. Wong – Grantee Submission, 2020
This article reviews important statistical methods for estimating the impact of interventions on outcomes in education settings, particularly programs that are implemented in field, rather than laboratory, settings. We begin by describing the causal inference challenge for evaluating program effects. Then four research designs are discussed that…
Descriptors: Causal Models, Statistical Inference, Intervention, Program Evaluation
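A minimal sketch of the benchmark against which the causal inference challenge is usually framed: a randomized experiment analyzed with a simple difference in treatment and control means. The data are simulated and the effect size is invented.

```python
# Simulated field experiment: randomization makes the difference in
# group means an unbiased estimate of the program's impact.
import random
from statistics import mean, stdev

random.seed(1)
true_impact = 0.15  # simulated effect, in outcome units

control = [random.gauss(0.0, 1.0) for _ in range(500)]
treated = [random.gauss(true_impact, 1.0) for _ in range(500)]

impact = mean(treated) - mean(control)
se = (stdev(treated) ** 2 / len(treated)
      + stdev(control) ** 2 / len(control)) ** 0.5
print(f"estimated impact: {impact:.3f} (SE {se:.3f})")
```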
What Works Clearinghouse, 2022
Education decisionmakers need access to the best evidence about the effectiveness of education interventions, including practices, products, programs, and policies. It can be difficult, time consuming, and costly to access and draw conclusions from relevant studies about the effectiveness of interventions. The What Works Clearinghouse (WWC)…
Descriptors: Program Evaluation, Program Effectiveness, Standards, Educational Research
Deke, John; Wei, Thomas; Kautz, Tim – Society for Research on Educational Effectiveness, 2018
Evaluators of education interventions increasingly need to design studies to detect impacts much smaller than the 0.20 standard deviations that Cohen (1988) characterized as "small." For example, an evaluation of Response to Intervention from the Institute of Education Sciences (IES) detected impacts ranging from 0.13 to 0.17 standard…
Descriptors: Intervention, Program Evaluation, Sample Size, Randomized Controlled Trials
Wong, Vivian C.; Steiner, Peter M.; Anglin, Kylie L. – Grantee Submission, 2018
Given the widespread use of non-experimental (NE) methods for assessing program impacts, there is a strong need to know whether NE approaches yield causally valid results in field settings. In within-study comparison (WSC) designs, the researcher compares treatment effects from an NE with those obtained from a randomized experiment that shares the…
Descriptors: Evaluation Methods, Program Evaluation, Program Effectiveness, Comparative Analysis
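A minimal sketch of the within-study comparison (WSC) logic: estimate the same treatment effect once from a simulated randomized experiment and once from a self-selected non-experimental sample, then inspect the gap. The selection mechanism and effect size are invented for illustration.

```python
import random
from statistics import mean

random.seed(2)
TRUE_EFFECT = 0.20

def outcome(ability, treated):
    return ability + (TRUE_EFFECT if treated else 0.0) + random.gauss(0, 0.5)

people = [random.gauss(0, 1) for _ in range(20_000)]

# Benchmark: random assignment breaks the link between ability and treatment.
rct = [(a, random.random() < 0.5) for a in people]
rct_est = mean(outcome(a, True) for a, t in rct if t) - \
          mean(outcome(a, False) for a, t in rct if not t)

# Non-experimental: higher-ability people opt into treatment more often.
ne = [(a, random.random() < (0.7 if a > 0 else 0.3)) for a in people]
ne_est = mean(outcome(a, True) for a, t in ne if t) - \
         mean(outcome(a, False) for a, t in ne if not t)

print(f"RCT estimate: {rct_est:.3f}, NE estimate: {ne_est:.3f}, "
      f"bias: {ne_est - rct_est:.3f}")
```

The difference between the two estimates is the quantity a WSC design uses to judge whether the non-experimental approach recovers the experimental benchmark.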
Chow, Jason C.; Hampton, Lauren H. – Remedial and Special Education, 2019
Interventions often require multiple decisions to improve outcomes for every student. Whether the decision is to implement a practice, tailor an existing protocol, or change approaches, these decisions should be based on individual variables and outcomes via a sequence of treatment. To develop adaptive interventions that have sufficient evidence to…
Descriptors: Special Education, Intervention, Program Development, Program Evaluation
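A minimal sketch of an adaptive decision rule of the kind this entry describes: responders continue the first-line practice while non-responders receive an intensified protocol. The threshold and option names are hypothetical.

```python
def next_stage(first_line: str, response: float, threshold: float = 0.5) -> str:
    """Return the stage-2 assignment under a simple adaptive rule:
    responders continue the first-line practice; non-responders are
    switched to an intensified protocol."""
    if response >= threshold:
        return first_line                  # maintain: adequate response
    return f"intensified {first_line}"     # tailor: insufficient response

print(next_stage("small-group tutoring", response=0.8))  # maintained
print(next_stage("small-group tutoring", response=0.2))  # intensified
```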
May, Henry; Jones, Akisha; Blakeney, Aly – AERA Online Paper Repository, 2019
Using an RD design provides statistically robust estimates while giving researchers a causal estimation tool for educational environments where an RCT may not be feasible. Results from the External Evaluation of the i3 Scale-Up of Reading Recovery show that impact estimates were remarkably similar between a randomized control…
Descriptors: Regression (Statistics), Research Design, Randomized Controlled Trials, Research Methodology
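A minimal sketch of the regression discontinuity estimator this entry refers to: fit local linear trends on each side of an assignment cutoff and take the gap between them at the cutoff as the impact estimate. The data, cutoff, and bandwidth below are simulated, not drawn from the Reading Recovery evaluation.

```python
import numpy as np

rng = np.random.default_rng(3)
cutoff, bandwidth, true_jump = 0.0, 0.5, 0.25

score = rng.uniform(-1, 1, 5_000)   # assignment variable
treated = score < cutoff            # e.g., low scorers receive the program
y = 0.4 * score + true_jump * treated + rng.normal(0, 0.3, score.size)

# Local linear fit within the bandwidth on each side of the cutoff.
left = (score >= cutoff - bandwidth) & (score < cutoff)
right = (score >= cutoff) & (score <= cutoff + bandwidth)
b_left = np.polyfit(score[left], y[left], 1)
b_right = np.polyfit(score[right], y[right], 1)

# Impact: gap between the two fitted lines evaluated at the cutoff.
impact = np.polyval(b_left, cutoff) - np.polyval(b_right, cutoff)
print(f"RD impact estimate: {impact:.3f} (true jump: {true_jump})")
```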
Hedges, Larry V.; Schauer, Jacob – Educational Research, 2018
Background and purpose: Studies of education and learning that were described as experiments have been carried out in the USA by educational psychologists since about 1900. In this paper, we discuss the history of randomised trials in education in the USA in terms of five historical periods. In each period, the use of randomised trials was…
Descriptors: Randomized Controlled Trials, Educational Research, Educational Psychology, Educational History
Hedges, Larry V.; Schauer, Jacob – Grantee Submission, 2018
Background and purpose: Studies of education and learning that were described as experiments have been carried out in the USA by educational psychologists since about 1900. In this paper, we discuss the history of randomised trials in education in the USA in terms of five historical periods. In each period, the use of randomised trials was…
Descriptors: Randomized Controlled Trials, Educational Research, Educational Psychology, Educational History
What Works Clearinghouse, 2018
Underlying all What Works Clearinghouse (WWC) products are WWC Study Review Guides, which are intended for use by WWC certified reviewers to assess studies against the WWC evidence standards. As part of an ongoing effort to increase transparency, promote collaboration, and encourage widespread use of the WWC standards, the Institute of Education…
Descriptors: Guides, Research Design, Research Methodology, Program Evaluation
What Works Clearinghouse, 2016
This document provides step-by-step instructions on how to complete the Study Review Guide (SRG, Version S3, V2) for single-case designs (SCDs). Reviewers will complete an SRG for every What Works Clearinghouse (WWC) review. A completed SRG should be a reviewer's independent assessment of the study, relative to the criteria specified in the review…
Descriptors: Guides, Research Design, Research Methodology, Program Evaluation
Reeves, Barnaby C.; Higgins, Julian P. T.; Ramsay, Craig; Shea, Beverley; Tugwell, Peter; Wells, George A. – Research Synthesis Methods, 2013
Background: Methods need to be further developed to include non-randomised studies (NRS) in systematic reviews of the effects of health care interventions. NRS are often required to answer questions about harms and interventions for which evidence from randomised controlled trials (RCTs) is not available. Methods used to review randomised…
Descriptors: Research Methodology, Research Design, Health Services, Workshops
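A minimal sketch of how a systematic review might pool effects across randomised and non-randomised studies once both are included: a fixed-effect, inverse-variance weighted average. The study estimates below are invented.

```python
from math import sqrt

# (effect size, standard error, design) for each included study
studies = [
    (0.22, 0.08, "RCT"),
    (0.18, 0.06, "RCT"),
    (0.30, 0.10, "NRS"),  # non-randomised study: extra bias concerns
]

# Inverse-variance weights: more precise studies count for more.
weights = [1 / se**2 for _, se, _ in studies]
pooled = sum(w * es for w, (es, _, _) in zip(weights, studies)) / sum(weights)
pooled_se = sqrt(1 / sum(weights))
print(f"pooled effect: {pooled:.3f} (SE {pooled_se:.3f})")
```

In practice, reviews weigh more than precision when mixing designs; this is only the arithmetic of pooling, with the risk-of-bias assessment the entry discusses handled separately.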
Ruth Maisey; Svetlana Speight; Chris Bonell; Susan Purdon; Peter Keogh; Ivonne Wollny; Annik M. Sorhaindo; Kaye Wellings – Sage Research Methods Cases, 2014
In 2009, the government's Department for Education commissioned a team of researchers at NatCen Social Research to evaluate the effectiveness of the youth development/teenage pregnancy prevention programme 'Teens and Toddlers'. Previous studies had positive findings but had not been very rigorous in terms of methodology and methods used. We…
Descriptors: Youth Programs, Program Evaluation, Adolescents, Toddlers