Showing all 12 results
Peer reviewed
Direct link
Manolov, Rumen; Tanious, René; Fernández-Castilla, Belén – Journal of Applied Behavior Analysis, 2022
In science in general and in the context of single-case experimental designs, replication of the effects of the intervention within and/or across participants or experiments is crucial for establishing causality and for assessing the generality of the intervention effect. Specific developments and proposals for assessing whether an effect has been…
Descriptors: Intervention, Behavioral Science Research, Replication (Evaluation), Research Design
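As a rough illustration of the replication idea the abstract raises (not the authors' procedure), one can compute a per-participant baseline-versus-intervention difference in a small single-case dataset and count in how many participants the effect goes the same direction. All data below are simulated toy values.

```python
# Hedged sketch, not the article's method: per-participant effect estimates
# in a multiple-baseline-style dataset, then a simple replication count.
import numpy as np

rng = np.random.default_rng(3)
effects = []
for true_effect in (2.0, 1.5, 1.8):                  # three simulated participants
    baseline = rng.normal(0.0, 1.0, size=8)          # baseline phase observations
    intervention = rng.normal(true_effect, 1.0, size=8)
    effects.append(intervention.mean() - baseline.mean())

replicated = sum(e > 0 for e in effects)
print(f"effect replicated in {replicated} of {len(effects)} participants")
```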
Peer reviewed
PDF on ERIC: Download full text
Weidlich, Joshua; Gašević, Dragan; Drachsler, Hendrik – Journal of Learning Analytics, 2022
As a research field geared toward understanding and improving learning, Learning Analytics (LA) must be able to provide empirical support for causal claims. However, as a highly applied field, tightly controlled randomized experiments are not always feasible nor desirable. Instead, researchers often rely on observational data, based on which they…
Descriptors: Causal Models, Inferences, Learning Analytics, Comparative Analysis
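To make the observational-data concern concrete, here is a minimal simulated sketch (illustrative only, not from the article): when learners self-select into a treatment based on a confounder, the naive group difference is biased, while adjusting for the confounder recovers the true effect. The variable `prior_ability` is hypothetical.

```python
# Hedged sketch: confounded observational data. Naive comparison is biased;
# regression adjustment on the confounder recovers the true effect (2.0).
import numpy as np

rng = np.random.default_rng(0)
n = 5_000
prior_ability = rng.normal(size=n)                                  # confounder
treated = (prior_ability + rng.normal(size=n) > 0).astype(float)    # self-selection
outcome = 2.0 * treated + 3.0 * prior_ability + rng.normal(size=n)

naive = outcome[treated == 1].mean() - outcome[treated == 0].mean()

# Regression adjustment: include the confounder in the design matrix.
X = np.column_stack([np.ones(n), treated, prior_ability])
beta, *_ = np.linalg.lstsq(X, outcome, rcond=None)

print(f"naive difference:  {naive:.2f}")    # inflated by self-selection
print(f"adjusted estimate: {beta[1]:.2f}")  # close to the true 2.0
```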
Peer reviewed
PDF on ERIC: Download full text
What Works Clearinghouse, 2022
Education decisionmakers need access to the best evidence about the effectiveness of education interventions, including practices, products, programs, and policies. It can be difficult, time consuming, and costly to access and draw conclusions from relevant studies about the effectiveness of interventions. The What Works Clearinghouse (WWC)…
Descriptors: Program Evaluation, Program Effectiveness, Standards, Educational Research
Peer reviewed
Direct link
Wing, Coady; Bello-Gomez, Ricardo A. – American Journal of Evaluation, 2018
Treatment effect estimates from a "regression discontinuity design" (RDD) have high internal validity. However, the arguments that support the design apply to a subpopulation that is narrower and usually different from the population of substantive interest in evaluation research. The disconnect between RDD population and the…
Descriptors: Regression (Statistics), Research Design, Validity, Evaluation Methods
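The "narrow subpopulation" point can be seen in a toy sharp RDD: the design identifies the effect only at the cutoff, via separate local fits on each side. The sketch below is illustrative and assumes simulated data, not anything from the article.

```python
# Hedged sketch of a sharp regression discontinuity design: treatment turns
# on at a cutoff of the running variable; local linear fits on each side
# estimate the effect *at the cutoff only*.
import numpy as np

rng = np.random.default_rng(1)
n, cutoff, bandwidth = 10_000, 0.0, 0.5
running = rng.uniform(-2, 2, n)                   # e.g. a centered test score
treated = (running >= cutoff).astype(float)
outcome = 1.5 * treated + 0.8 * running + rng.normal(scale=0.5, size=n)

near = np.abs(running - cutoff) <= bandwidth      # keep observations near cutoff

def intercept_at_cutoff(side_mask):
    """Fit a line on one side of the cutoff; return its value at the cutoff."""
    keep = near & side_mask
    x = running[keep] - cutoff
    X = np.column_stack([np.ones(x.size), x])
    beta, *_ = np.linalg.lstsq(X, outcome[keep], rcond=None)
    return beta[0]

effect = intercept_at_cutoff(treated == 1) - intercept_at_cutoff(treated == 0)
print(f"RDD effect at cutoff: {effect:.2f}")      # near the true 1.5
```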
Peer reviewed
Direct link
Marcus, Sue M.; Stuart, Elizabeth A.; Wang, Pei; Shadish, William R.; Steiner, Peter M. – Psychological Methods, 2012
Although randomized studies have high internal validity, generalizability of the estimated causal effect from randomized clinical trials to real-world clinical or educational practice may be limited. We consider the implication of randomized assignment to treatment, as compared with choice of preferred treatment as it occurs in real-world…
Descriptors: Educational Practices, Program Effectiveness, Validity, Causal Models
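One way to see the generalizability issue the abstract describes, in a deliberately simplified form: if the treatment benefits those who would choose it more than others, the randomized-trial average effect differs from the effect realized under real-world self-selection. The numbers below are invented for illustration.

```python
# Hedged sketch: heterogeneous effects make the RCT average differ from the
# effect among people who would actually choose the treatment.
import numpy as np

rng = np.random.default_rng(4)
n = 100_000
prefers = rng.random(n) < 0.4                 # would choose treatment in practice
effect = np.where(prefers, 3.0, 1.0)          # larger benefit for those who prefer it

ate_rct = effect.mean()                       # randomization averages over everyone
ate_choosers = effect[prefers].mean()         # effect among real-world takers
print(f"RCT average effect:    {ate_rct:.2f}")
print(f"effect among choosers: {ate_choosers:.2f}")
```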
Peer reviewed
Direct link
Harvill, Eleanor L.; Peck, Laura R.; Bell, Stephen H. – American Journal of Evaluation, 2013
Using exogenous characteristics to identify endogenous subgroups, the approach discussed in this method note creates symmetric subsets within treatment and control groups, allowing the analysis to take advantage of an experimental design. In order to maintain treatment-control symmetry, however, prior work has posited that it is necessary to use…
Descriptors: Experimental Groups, Control Groups, Research Design, Sampling
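The symmetry idea can be sketched with simulated data (this is an illustration of the general principle, not the note's specific approach): defining subgroups by a pre-randomization characteristic keeps treatment and control comparable within each subgroup, so the experimental contrast stays unbiased there.

```python
# Hedged sketch: subgrouping on an exogenous (baseline) covariate preserves
# the randomized comparison within the subgroup.
import numpy as np

rng = np.random.default_rng(6)
n = 50_000
baseline_score = rng.normal(size=n)              # measured before randomization
treated = rng.random(n) < 0.5                    # random assignment
y = 1.0 * treated + 0.5 * baseline_score + rng.normal(size=n)

high = baseline_score > 0                        # exogenous subgroup definition
subgroup_effect = y[treated & high].mean() - y[~treated & high].mean()
print(f"effect in the high-baseline subgroup: {subgroup_effect:.2f}")  # near 1.0
```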
Peer reviewed
Direct link
Wong, Manyee; Cook, Thomas D.; Steiner, Peter M. – Journal of Research on Educational Effectiveness, 2015
Some form of a short interrupted time series (ITS) is often used to evaluate state and national programs. An ITS design with a single treatment group assumes that the pretest functional form can be validly estimated and extrapolated into the postintervention period where it provides a valid counterfactual. This assumption is problematic. Ambiguous…
Descriptors: Evaluation Methods, Time, Federal Legislation, Educational Legislation
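The assumption the abstract flags is easy to state in code: a single-group ITS fits the pretest trend, extrapolates it past the intervention as the counterfactual, and takes the gap as the effect. The simulated sketch below is illustrative only, and the estimate is only as good as the assumed pretest functional form.

```python
# Hedged sketch of a single-group interrupted time series: extrapolate the
# pre-period trend as the post-period counterfactual.
import numpy as np

rng = np.random.default_rng(2)
t = np.arange(20)                        # 20 time points, intervention at t = 10
pre, post = t < 10, t >= 10
y = 1.0 + 0.3 * t + 4.0 * post + rng.normal(scale=0.4, size=t.size)

# Fit a line to the pre-period only, then extrapolate into the post-period.
coef = np.polyfit(t[pre], y[pre], deg=1)
counterfactual = np.polyval(coef, t[post])
effect = (y[post] - counterfactual).mean()
print(f"estimated shift: {effect:.2f}")  # near the true 4.0 if the linear form holds
```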
Peer reviewed
Direct link
Scriven, Michael – Journal of MultiDisciplinary Evaluation, 2008
This review focuses on what the author terms a reconsideration of the working credentials of the randomized controlled trial (RCT) design, and includes a discussion of popularly accepted aspects as well as some new perspectives. The author concludes that there is nothing either imperative or superior about the need for RCT designs, and that an…
Descriptors: Credentials, Research Design, Summative Evaluation, Quasiexperimental Design
Peer reviewed
Direct link
Leviton, Laura C.; Lipsey, Mark W. – New Directions for Evaluation, 2007
"Theory as Method: Small Theories of Treatments," by Mark W. Lipsey, is one of the most influential and highly cited articles to appear in "New Directions for Evaluation." It articulated an approach in which methods for studying causation depend, in large part, on what is known about the theory underlying the program. Lipsey discussed the benefits…
Descriptors: Attribution Theory, Research Design, Program Effectiveness, Causal Models
Peer reviewed
Direct link
McDuffie, Kimberly A.; Scruggs, Thomas E. – Intervention in School and Clinic, 2008
In response to recent trends and legislation, the concept of implementing evidence-based practices has become a critical component of contemporary schooling. It is important that teachers and families of students with disabilities understand the role that qualitative research plays in determining whether a practice is in fact evidence based.…
Descriptors: Qualitative Research, Disabilities, Special Education, Evidence
Verma, Satish; Burnett, Michael – 1999
Program directors and evaluators need to address the important program accountability question of attribution of outcomes. This discussion is a beginning. Starting with some basics, such as the meaning of the program, approaches to program theory development, and the nature of attribution, the paper suggests three types of attribution. An…
Descriptors: Accountability, Causal Models, Evaluation Methods, Program Development
Peer reviewed
Direct link
Stuart, Elizabeth A. – Educational Researcher, 2007
Education researchers, practitioners, and policymakers alike are committed to identifying interventions that teach students more effectively. Increased emphasis on evaluation and accountability has increased desire for sound evaluations of these interventions; and at the same time, school-level data have become increasingly available. This article…
Descriptors: Research Methodology, Computation, Causal Models, Intervention
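As one example of the kind of causal-effect estimation method usable with observational school-level data (an illustration under toy assumptions, not necessarily the specific methods this article reviews), inverse-propensity weighting reweights units by the inverse probability of the treatment they received. For simplicity the sketch uses the true propensity; in practice it would be estimated, e.g. by logistic regression.

```python
# Hedged sketch of inverse-propensity weighting (IPW) on simulated data.
import numpy as np

rng = np.random.default_rng(5)
n = 20_000
x = rng.normal(size=n)                                  # school-level covariate
p = 1.0 / (1.0 + np.exp(-x))                            # true propensity score
treated = rng.random(n) < p                             # confounded assignment
y = 2.0 * treated + 1.0 * x + rng.normal(size=n)        # true effect = 2.0

# Weight each unit by the inverse probability of the treatment it received.
w = np.where(treated, 1.0 / p, 1.0 / (1.0 - p))
ipw = (np.average(y[treated], weights=w[treated])
       - np.average(y[~treated], weights=w[~treated]))
print(f"IPW estimate: {ipw:.2f}")                       # near the true 2.0
```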