Showing 1 to 15 of 72 results
Peer reviewed
PDF on ERIC Download full text
Matthew J. Mayhew; Christa E. Winkler – Journal of Postsecondary Student Success, 2024
Higher education professionals often are tasked with providing evidence to stakeholders that programs, services, and practices implemented on their campuses contribute to student success. Furthermore, in the absence of a solid base of evidence related to effective practices, higher education researchers and practitioners are left questioning what…
Descriptors: Higher Education, Educational Practices, Evidence Based Practice, Program Evaluation
Peer reviewed
PDF on ERIC Download full text
What Works Clearinghouse, 2018
Underlying all What Works Clearinghouse (WWC) products are WWC Study Review Guides, which are intended for use by WWC certified reviewers to assess studies against the WWC evidence standards. As part of an ongoing effort to increase transparency, promote collaboration, and encourage widespread use of the WWC standards, the Institute of Education…
Descriptors: Guides, Research Design, Research Methodology, Program Evaluation
Hedges, Larry V.; Schauer, Jacob M. – Journal of Educational and Behavioral Statistics, 2019
The problem of assessing whether experimental results can be replicated is becoming increasingly important in many areas of science. It is often assumed that assessing replication is straightforward: All one needs to do is repeat the study and see whether the results of the original and replication studies agree. This article shows that the…
Descriptors: Replication (Evaluation), Research Design, Research Methodology, Program Evaluation
Hedges, Larry V.; Schauer, Jacob M. – Grantee Submission, 2019
The problem of assessing whether experimental results can be replicated is becoming increasingly important in many areas of science. It is often assumed that assessing replication is straightforward: All one needs to do is repeat the study and see whether the results of the original and replication studies agree. This article shows that the…
Descriptors: Replication (Evaluation), Research Design, Research Methodology, Program Evaluation
Peer reviewed
Direct link
Hallberg, Kelly; Williams, Ryan; Swanlund, Andrew – Journal of Research on Educational Effectiveness, 2020
More aggregate data on school performance is available than ever before, opening up new possibilities for applied researchers interested in assessing the effectiveness of school-level interventions quickly and at a relatively low cost by implementing comparative interrupted times series (CITS) designs. We examine the extent to which effect…
Descriptors: Data Use, Research Methodology, Program Effectiveness, Design
Peer reviewed
PDF on ERIC Download full text
What Works Clearinghouse, 2016
This document provides step-by-step instructions on how to complete the Study Review Guide (SRG, Version S3, V2) for single-case designs (SCDs). Reviewers will complete an SRG for every What Works Clearinghouse (WWC) review. A completed SRG should be a reviewer's independent assessment of the study, relative to the criteria specified in the review…
Descriptors: Guides, Research Design, Research Methodology, Program Evaluation
Peer reviewed
Direct link
Tipton, Elizabeth; Hedges, Larry; Vaden-Kiernan, Michael; Borman, Geoffrey; Sullivan, Kate; Caverly, Sarah – Journal of Research on Educational Effectiveness, 2014
Randomized experiments are often seen as the "gold standard" for causal research. Despite the fact that experiments use random assignment to treatment conditions, units are seldom selected into the experiment using probability sampling. Very little research on experimental design has focused on how to make generalizations to well-defined…
Descriptors: Probability, Inferences, Eligibility, Recruitment
Tonbuloglu, Betül; Gürol, Aysun – Online Submission, 2016
Evaluation of opinions and satisfaction levels of students who continue their education through distance education programs is crucial in detecting the problems encountered in the programs and in improving them. The purpose of this study is to identify student evaluations and satisfaction levels concerning the pedagogical dimension of distance…
Descriptors: Student Attitudes, Graduate Students, Undergraduate Students, Distance Education
Peer reviewed
Direct link
Dubois, Cathy; Long, Lori – International Journal on E-Learning, 2012
E-learning researchers face considerable challenges in creating meaningful and generalizable studies due to the complex nature of this dynamic training medium. Our experience in conducting workplace e-learning research led us to create this guide for planning research on e-learning. We share the unanticipated complications we encountered in our…
Descriptors: Electronic Learning, Course Content, Instructional Design, Program Implementation
Deke, John; Dragoset, Lisa – Mathematica Policy Research, Inc., 2012
The regression discontinuity design (RDD) has the potential to yield findings with causal validity approaching that of the randomized controlled trial (RCT). However, Schochet (2008a) estimated that, on average, an RDD study of an education intervention would need to include three to four times as many schools or students as an RCT to produce…
Descriptors: Research Design, Elementary Secondary Education, Regression (Statistics), Educational Research
Peer reviewed
Direct link
Harvill, Eleanor L.; Peck, Laura R.; Bell, Stephen H. – American Journal of Evaluation, 2013
Using exogenous characteristics to identify endogenous subgroups, the approach discussed in this method note creates symmetric subsets within treatment and control groups, allowing the analysis to take advantage of an experimental design. In order to maintain treatment-control symmetry, however, prior work has posited that it is necessary to use…
Descriptors: Experimental Groups, Control Groups, Research Design, Sampling
Peer reviewed
PDF on ERIC Download full text
Bell, Stephen H.; Puma, Michael J.; Cook, Ronna J.; Heid, Camilla A. – Society for Research on Educational Effectiveness, 2013
Access to Head Start has been shown to improve children's preschool experiences and school readiness on selected factors through the end of 1st grade. Two more years of follow-up, through the end of 3rd grade, can now be examined to determine whether these effects continue into the middle elementary grades. The statistical design and impact…
Descriptors: Evaluation Methods, Data Analysis, Randomized Controlled Trials, Sampling
Spybrook, Jessaca; Lininger, Monica; Cullen, Anne – Society for Research on Educational Effectiveness, 2011
The purpose of this study is to extend the work of Spybrook and Raudenbush (2009) and examine how the research designs and sample sizes changed from the planning phase to the implementation phase in the first wave of studies funded by IES. The authors examine the impact of the changes in terms of the changes in the precision of the study from the…
Descriptors: Evaluation Criteria, Sampling, Research Design, Planning
Goldring, Ellen; Grissom, Jason A.; Neumerski, Christine M.; Murphy, Joseph; Blissett, Richard; Porter, Andy – Wallace Foundation, 2015
This three-volume report describes the "SAM (School Administration Manager) process," an approach that about 700 schools around the nation are using to direct more of principals' time and effort to improve teaching and learning in classrooms. Research has shown that a principal's instructional leadership is second only to teaching among…
Descriptors: Instructional Leadership, Principals, Administrator Role, Educational Improvement
Peer reviewed
PDF on ERIC Download full text
Schochet, Peter Z.; Puma, Mike; Deke, John – National Center for Education Evaluation and Regional Assistance, 2014
This report summarizes the complex research literature on quantitative methods for assessing how impacts of educational interventions on instructional practices and student learning differ across students, educators, and schools. It also provides technical guidance about the use and interpretation of these methods. The research topics addressed…
Descriptors: Statistical Analysis, Evaluation Methods, Educational Research, Intervention