Showing 1 to 15 of 82 results
Peer reviewed
Direct link
Bell, Stephen H.; Stapleton, David C.; Wood, Michelle; Gubits, Daniel – American Journal of Evaluation, 2023
A randomized experiment that measures the impact of a social policy in a sample of the population reveals whether the policy will work on average with universal application. An experiment that includes only the subset of the population that volunteers for the intervention generates narrower "proof-of-concept" evidence of whether the…
Descriptors: Public Policy, Policy Formation, Federal Programs, Social Services
Peer reviewed
Direct link
Geuke, Gemma G. M.; Maric, Marija; Miocevic, Milica; Wolters, Lidewij H.; de Haan, Else – New Directions for Child and Adolescent Development, 2019
The major aim of this manuscript is to bring together two important topics that have recently received much attention in child and adolescent research, albeit separately from each other: single-case experimental designs and statistical mediation analysis. Single-case experimental designs (SCEDs) are increasingly recognized as a valuable…
Descriptors: Children, Adolescents, Research, Case Studies
Peer reviewed
PDF on ERIC | Download full text
What Works Clearinghouse, 2018
Underlying all What Works Clearinghouse (WWC) products are WWC Study Review Guides, which are intended for use by WWC certified reviewers to assess studies against the WWC evidence standards. As part of an ongoing effort to increase transparency, promote collaboration, and encourage widespread use of the WWC standards, the Institute of Education…
Descriptors: Guides, Research Design, Research Methodology, Program Evaluation
Peer reviewed
PDF on ERIC | Download full text
What Works Clearinghouse, 2016
This document provides step-by-step instructions on how to complete the Study Review Guide (SRG, Version S3, V2) for single-case designs (SCDs). Reviewers will complete an SRG for every What Works Clearinghouse (WWC) review. A completed SRG should be a reviewer's independent assessment of the study, relative to the criteria specified in the review…
Descriptors: Guides, Research Design, Research Methodology, Program Evaluation
Peer reviewed
Direct link
Kulik, James A.; Fletcher, J. D. – Review of Educational Research, 2016
This review describes a meta-analysis of findings from 50 controlled evaluations of intelligent computer tutoring systems. The median effect of intelligent tutoring in the 50 evaluations was to raise test scores 0.66 standard deviations over conventional levels, or from the 50th to the 75th percentile. However, the amount of improvement found in…
Descriptors: Intelligent Tutoring Systems, Meta Analysis, Computer Assisted Instruction, Statistical Analysis
Peer reviewed
Direct link
Solmeyer, Anna R.; Constance, Nicole – American Journal of Evaluation, 2015
Traditionally, evaluation has primarily tried to answer the question "Does a program, service, or policy work?" Recently, more attention is given to questions about variation in program effects and the mechanisms through which program effects occur. Addressing these kinds of questions requires moving beyond assessing average program…
Descriptors: Program Effectiveness, Program Evaluation, Program Content, Measurement Techniques
Peer reviewed
Direct link
Frey, Jodi Jacobson; Hopkins, Karen; Osteen, Philip; Callahan, Christine; Hageman, Sally; Ko, Jungyai – Journal of Social Work Education, 2017
In social work and other community-based human services settings, clients often present with complex financial problems. As a need for more formal training is beginning to be addressed, evaluation of existing training is important, and this study evaluates outcomes from the Financial Stability Pathway (FSP) project. Designed to prepare…
Descriptors: Social Work, Caseworkers, Human Services, Financial Needs
Peer reviewed
Direct link
Bell, Stephen H.; Peck, Laura R. – American Journal of Evaluation, 2013
To answer "what works?" questions about policy interventions based on an experimental design, Peck (2003) proposes to use baseline characteristics to symmetrically divide treatment and control group members into subgroups defined by endogenously determined postrandom assignment events. Symmetric prediction of these subgroups in both…
Descriptors: Program Effectiveness, Experimental Groups, Control Groups, Program Evaluation
Peer reviewed
Direct link
Viviers, Herman Albertus – Industry and Higher Education, 2016
The primary objective of this article is to evaluate the design variables of a newly developed teaching intervention, "The Amazing Tax Race". It comprises a race against time in which accounting students participate within teams in multiple tax-related activities so that they are exposed to pervasive skills. The findings provide…
Descriptors: Foreign Countries, Undergraduate Students, Accounting, Qualitative Research
Peer reviewed
PDF on ERIC | Download full text
Harmon, Hobart; Tate, Veronica; Stevens, Jennifer; Wilborn, Sandy; Adams, Sue – Grantee Submission, 2018
The goal of the Rural Math Excel Partnership (RMEP) project, a development project funded by the U.S. Department of Education Investing in Innovation (i3) grant program, was to develop a model of shared responsibility among families, teachers, and communities in rural areas as collective support for student success in and preparation for advanced…
Descriptors: Rural Schools, Partnerships in Education, Program Evaluation, Mathematics Education
Peer reviewed
Direct link
Sørlie, Mari-Anne; Ogden, Terje – International Journal of School & Educational Psychology, 2014
This paper reviews literature on the rationale, challenges, and recommendations for choosing a nonequivalent comparison (NEC) group design when evaluating intervention effects. After reviewing frequently addressed threats to validity, the paper describes recommendations for strengthening the research design and how the recommendations were…
Descriptors: Validity, Research Design, Experiments, Prevention
Peer reviewed
Direct link
Burns, Catherine E. – American Journal of Evaluation, 2015
In the current climate of increasing fiscal and clinical accountability, information is required about overall program effectiveness using clinical data. These requests present a challenge for programs utilizing single-subject data due to the use of highly individualized behavior plans and behavioral monitoring. Subsequently, the diversity of the…
Descriptors: Program Evaluation, Program Effectiveness, Data Analysis, Research Design
Warner-Richter, Mallory; Lowe, Claire; Tout, Kathryn; Epstein, Dale; Li, Weilin – Child Trends, 2016
The Success By 6® (SB6) initiative is designed to support early care and education centers in improving and sustaining quality in Pennsylvania's Keystone STARS Quality Rating and Improvement System (QRIS). The SB6 evaluation report examines implementation and outcomes. The findings have implications for the SB6 continuous quality improvement process…
Descriptors: Success, Research Reports, Child Care Centers, Quality Assurance
Peer reviewed
PDF on ERIC | Download full text
Bloom, Howard S.; Porter, Kristin E.; Weiss, Michael J.; Raudenbush, Stephen – Society for Research on Educational Effectiveness, 2013
To date, evaluation research and policy analysis have focused mainly on average program impacts and paid little systematic attention to their variation. Recently, the growing number of multi-site randomized trials that are being planned and conducted make it increasingly feasible to study "cross-site" variation in impacts. Important…
Descriptors: Research Methodology, Policy, Evaluation Research, Randomized Controlled Trials
DiNardo, John; Lee, David S. – National Bureau of Economic Research, 2010
This chapter provides a selective review of some contemporary approaches to program evaluation. One motivation for our review is the recent emergence and increasing use of a particular kind of "program" in applied microeconomic research, the so-called Regression Discontinuity (RD) Design of Thistlethwaite and Campbell (1960). We organize our…
Descriptors: Research Design, Program Evaluation, Validity, Experiments