Showing 1 to 15 of 27 results
Peer reviewed
What Works Clearinghouse, 2023
Teachers can use a variety of classroom management practices to help foster a classroom environment in which all students can learn. "Good Behavior Game" is a specific classroom management strategy that aims to improve social skills, minimize disruptive behaviors, and create a positive learning environment. Teachers place students into…
Descriptors: Classroom Techniques, Positive Behavior Supports, Intervention, Program Evaluation
Peer reviewed
Wolf, Rebecca; Morrison, Jennifer; Inns, Amanda; Slavin, Robert; Risman, Kelsey – Journal of Research on Educational Effectiveness, 2020
Rigorous evidence of program effectiveness has become increasingly important with the 2015 passage of the Every Student Succeeds Act (ESSA). One question that has not yet been fully explored is whether program evaluations carried out or commissioned by developers produce larger effect sizes than evaluations conducted by independent third parties.…
Descriptors: Program Evaluation, Program Effectiveness, Effect Size, Sample Size
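The developer-versus-independent question this entry raises can be made concrete with a fixed-effect meta-analytic contrast. A minimal Python sketch using entirely hypothetical effect sizes and sampling variances, not data from the study:

    # Comparing inverse-variance-weighted mean effect sizes for two groups of
    # evaluations. All numbers below are hypothetical, for illustration only.
    import numpy as np

    dev_g  = np.array([0.45, 0.38, 0.52]); dev_var = np.array([0.010, 0.015, 0.012])
    ind_g  = np.array([0.18, 0.25, 0.12]); ind_var = np.array([0.011, 0.014, 0.013])

    def fixed_effect_mean(g, v):
        """Fixed-effect (inverse-variance-weighted) mean and its variance."""
        w = 1.0 / v
        return np.sum(w * g) / np.sum(w), 1.0 / np.sum(w)

    dev_mean, dev_v = fixed_effect_mean(dev_g, dev_var)
    ind_mean, ind_v = fixed_effect_mean(ind_g, ind_var)
    diff = dev_mean - ind_mean
    se_diff = np.sqrt(dev_v + ind_v)
    print(f"developer mean g = {dev_mean:.2f}, independent mean g = {ind_mean:.2f}")
    print(f"difference = {diff:.2f} (SE = {se_diff:.2f}, z = {diff/se_diff:.2f})")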
Hedges, Larry V.; Schauer, Jacob M. – Journal of Educational and Behavioral Statistics, 2019
The problem of assessing whether experimental results can be replicated is becoming increasingly important in many areas of science. It is often assumed that assessing replication is straightforward: All one needs to do is repeat the study and see whether the results of the original and replication studies agree. This article shows that the…
Descriptors: Replication (Evaluation), Research Design, Research Methodology, Program Evaluation
Hedges, Larry V.; Schauer, Jacob M. – Grantee Submission, 2019
The problem of assessing whether experimental results can be replicated is becoming increasingly important in many areas of science. It is often assumed that assessing replication is straightforward: All one needs to do is repeat the study and see whether the results of the original and replication studies agree. This article shows that the…
Descriptors: Replication (Evaluation), Research Design, Research Methodology, Program Evaluation
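The naive repeat-and-compare check these authors critique has a formal counterpart: a test of whether the original and replication estimates share a single true effect. A minimal Python sketch with hypothetical numbers, not the authors' exact procedure:

    # Heterogeneity Q test: under the null of exact replication, Q is
    # chi-square with k - 1 degrees of freedom.
    import numpy as np
    from scipy.stats import chi2

    g = np.array([0.60, 0.15])  # hypothetical estimates: original, replication
    v = np.array([0.02, 0.03])  # their sampling variances
    w = 1.0 / v
    g_bar = np.sum(w * g) / np.sum(w)  # precision-weighted common effect
    Q = np.sum(w * (g - g_bar) ** 2)
    p = chi2.sf(Q, df=len(g) - 1)
    print(f"Q = {Q:.2f}, p = {p:.3f}")  # small p suggests the studies disagree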
Peer reviewed
May, Henry; Jones, Akisha; Blakeney, Aly – AERA Online Paper Repository, 2019
An RD design provides statistically robust estimates and gives researchers an alternative causal estimation tool for educational environments where an RCT may not be feasible. Results from the External Evaluation of the i3 Scale-Up of Reading Recovery show that impact estimates were remarkably similar between a randomized control…
Descriptors: Regression (Statistics), Research Design, Randomized Controlled Trials, Research Methodology
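For readers unfamiliar with the method, here is a minimal sketch of a sharp regression discontinuity estimate on simulated data; it is not the i3 evaluation's actual model:

    # Local linear regression on each side of a cutoff at 0; the coefficient
    # on the treatment indicator estimates the jump (impact) at the cutoff.
    import numpy as np

    rng = np.random.default_rng(0)
    n = 2000
    x = rng.uniform(-1, 1, n)           # running variable, cutoff at 0
    t = (x >= 0).astype(float)          # sharp assignment rule
    y = 0.5 * x + 0.30 * t + rng.normal(0, 0.5, n)  # true impact = 0.30

    h = 0.25                            # bandwidth around the cutoff
    m = np.abs(x) <= h
    X = np.column_stack([np.ones(m.sum()), x[m], t[m], x[m] * t[m]])
    beta, *_ = np.linalg.lstsq(X, y[m], rcond=None)
    print(f"estimated impact at the cutoff: {beta[2]:.3f}")  # ~0.30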
Peer reviewed
Kulik, James A.; Fletcher, J. D. – Review of Educational Research, 2016
This review describes a meta-analysis of findings from 50 controlled evaluations of intelligent computer tutoring systems. The median effect of intelligent tutoring in the 50 evaluations was to raise test scores 0.66 standard deviations over conventional levels, or from the 50th to the 75th percentile. However, the amount of improvement found in…
Descriptors: Intelligent Tutoring Systems, Meta Analysis, Computer Assisted Instruction, Statistical Analysis
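The percentile claim in this abstract is a standard normal conversion and is easy to verify. A quick Python check, assuming normally distributed test scores:

    # A 0.66 SD gain moves a student at the 50th percentile of a normal
    # distribution to roughly the 75th.
    from scipy.stats import norm
    print(norm.cdf(0.66))  # ~0.745, i.e., about the 75th percentile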
Peer reviewed
Mueller, Christoph Emanuel; Gaus, Hansjoerg – American Journal of Evaluation, 2015
In this article, we test an alternative approach to creating a counterfactual basis for estimating individual and average treatment effects. Instead of using control/comparison groups or before-measures, the so-called Counterfactual as Self-Estimated by Program Participants (CSEPP) relies on program participants' self-estimations of their own…
Descriptors: Intervention, Research Design, Research Methodology, Program Evaluation
Peer reviewed
Shager, Hilary M.; Schindler, Holly S.; Magnuson, Katherine A.; Duncan, Greg J.; Yoshikawa, Hirokazu; Hart, Cassandra M. D. – Educational Evaluation and Policy Analysis, 2013
This study explores the extent to which differences in research design explain variation in Head Start program impacts. We employ meta-analytic techniques to predict effect sizes for cognitive and achievement outcomes as a function of the type and rigor of research design, quality and type of outcome measure, activity level of control group, and…
Descriptors: Meta Analysis, Preschool Education, Disadvantaged Youth, Outcome Measures
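A meta-regression of this kind predicts each study's effect size from design moderators with inverse-variance weights. A minimal Python sketch with hypothetical data; the moderator names are illustrative, not the study's actual coding scheme:

    # Weighted least squares: solve (X'WX) beta = X'Wy with weights 1/variance.
    import numpy as np

    g = np.array([0.35, 0.20, 0.15, 0.40, 0.10])   # effect sizes (hypothetical)
    v = np.array([0.02, 0.03, 0.01, 0.04, 0.02])   # sampling variances
    randomized = np.array([0, 1, 1, 0, 1])          # 1 = randomized design
    researcher_measure = np.array([1, 1, 0, 0, 0])  # 1 = researcher-developed outcome

    X = np.column_stack([np.ones_like(g), randomized, researcher_measure])
    W = np.diag(1.0 / v)
    beta = np.linalg.solve(X.T @ W @ X, X.T @ W @ g)
    print(dict(zip(["intercept", "randomized", "researcher_measure"], beta.round(3))))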
Deke, John; Dragoset, Lisa – Mathematica Policy Research, Inc., 2012
The regression discontinuity design (RDD) has the potential to yield findings with causal validity approaching that of the randomized controlled trial (RCT). However, Schochet (2008a) estimated that, on average, an RDD study of an education intervention would need to include three to four times as many schools or students as an RCT to produce…
Descriptors: Research Design, Elementary Secondary Education, Regression (Statistics), Educational Research
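Schochet's sample-size multiplier can be illustrated with a simplified design-effect calculation; the correlation value and RCT sample below are assumptions for illustration, not figures from the report:

    # The RDD impact estimator's variance is inflated by roughly
    # 1 / (1 - rho^2), where rho is the correlation between treatment
    # status and the assignment variable.
    rho = 0.85                  # assumed; strong by construction in an RDD
    design_effect = 1.0 / (1.0 - rho ** 2)
    n_rct = 40                  # schools an RCT would need (assumed)
    n_rdd = design_effect * n_rct
    print(f"design effect ~ {design_effect:.1f}; RDD needs ~ {n_rdd:.0f} schools")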
Peer reviewed
Mueller, Christoph Emanuel; Gaus, Hansjoerg; Rech, Joerg – American Journal of Evaluation, 2014
This article proposes an innovative approach to estimating the counterfactual without the necessity of generating information from either a control group or a before-measure. Building on the idea that program participants are capable of estimating the hypothetical state they would be in had they not participated, the basics of the Roy-Rubin model…
Descriptors: Research Design, Program Evaluation, Research Methodology, Models
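The core CSEPP computation is simple: subtract each participant's self-estimated counterfactual from that participant's observed outcome. A minimal Python sketch with hypothetical data:

    # Each participant's treatment effect is the observed post-program outcome
    # minus the outcome they estimate they would have had without the program;
    # averaging gives an ATT-style estimate. All numbers are hypothetical.
    import numpy as np

    observed = np.array([72, 65, 80, 58, 90], dtype=float)
    self_counterfactual = np.array([60, 62, 70, 55, 78], dtype=float)
    individual_effects = observed - self_counterfactual
    print(f"individual effects: {individual_effects}")
    print(f"average effect on participants ~ {individual_effects.mean():.1f}")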
Peer reviewed
Cheung, Alan; Slavin, Robert – Society for Research on Educational Effectiveness, 2016
As evidence-based reform becomes increasingly important in educational policy, it is becoming essential to understand how research design might contribute to reported effect sizes in experiments evaluating educational programs. The purpose of this study was to examine how methodological features such as types of publication, sample sizes, and…
Descriptors: Effect Size, Evidence Based Practice, Educational Change, Educational Policy
Peer reviewed
Kelcey, Ben; Spybrook, Jessaca; Zhang, Jiaqi; Phelps, Geoffrey; Jones, Nathan – Society for Research on Educational Effectiveness, 2015
With research indicating substantial differences among teachers in terms of their effectiveness (Nye, Konstantopoulos, & Hedges, 2004), a major focus of recent research in education has been on improving teacher quality through professional development (Desimone, 2009; Institute of Education Sciences [IES], 2012; Measures of Effective…
Descriptors: Teacher Effectiveness, Faculty Development, Program Design, Educational Research
Peer reviewed
Song, Mengli; Herman, Rebecca – Educational Evaluation and Policy Analysis, 2010
Drawing on our five years of experience developing WWC evidence standards and reviewing studies against those standards, as well as on the current literature on the design of impact studies, we highlight some of the most critical issues and common pitfalls in designing and conducting impact studies in education, and provide practical…
Descriptors: Clearinghouses, Program Evaluation, Program Effectiveness, Research Methodology
Cheung, Alan C. K.; Slavin, Robert E. – Center for Research and Reform in Education, 2012
This review examines the effectiveness of educational technology applications in improving the reading achievement of struggling readers in elementary schools. The review applies consistent inclusion standards to focus on studies that met high methodological standards. A total of 20 studies based on about 7,000 students in grades K-6 were included…
Descriptors: Reading Achievement, Educational Technology, Reading Difficulties, Reading Programs
Cheung, Alan C. K.; Slavin, Robert E. – Center for Research and Reform in Education, 2012
The purpose of this review is to learn from rigorous evaluations of alternative technology applications how features of the technology programs and characteristics of their evaluations affect reading outcomes for students in grades K-12. The review applies consistent inclusion standards to focus on studies that met high methodological standards…
Descriptors: Reading Achievement, Elementary Secondary Education, Educational Technology, Meta Analysis