Manolov, Rumen; Tanious, René; Fernández-Castilla, Belén – Journal of Applied Behavior Analysis, 2022
In science in general and in the context of single-case experimental designs, replication of the effects of the intervention within and/or across participants or experiments is crucial for establishing causality and for assessing the generality of the intervention effect. Specific developments and proposals for assessing whether an effect has been…
Descriptors: Intervention, Behavioral Science Research, Replication (Evaluation), Research Design
Weidlich, Joshua; Gašević, Dragan; Drachsler, Hendrik – Journal of Learning Analytics, 2022
As a research field geared toward understanding and improving learning, Learning Analytics (LA) must be able to provide empirical support for causal claims. However, as a highly applied field, tightly controlled randomized experiments are not always feasible nor desirable. Instead, researchers often rely on observational data, based on which they…
Descriptors: Causal Models, Inferences, Learning Analytics, Comparative Analysis
What Works Clearinghouse, 2022
Education decisionmakers need access to the best evidence about the effectiveness of education interventions, including practices, products, programs, and policies. It can be difficult, time consuming, and costly to access and draw conclusions from relevant studies about the effectiveness of interventions. The What Works Clearinghouse (WWC)…
Descriptors: Program Evaluation, Program Effectiveness, Standards, Educational Research
Wing, Coady; Bello-Gomez, Ricardo A. – American Journal of Evaluation, 2018
Treatment effect estimates from a "regression discontinuity design" (RDD) have high internal validity. However, the arguments that support the design apply to a subpopulation that is narrower and usually different from the population of substantive interest in evaluation research. The disconnect between RDD population and the…
Descriptors: Regression (Statistics), Research Design, Validity, Evaluation Methods
Marcus, Sue M.; Stuart, Elizabeth A.; Wang, Pei; Shadish, William R.; Steiner, Peter M. – Psychological Methods, 2012
Although randomized studies have high internal validity, generalizability of the estimated causal effect from randomized clinical trials to real-world clinical or educational practice may be limited. We consider the implication of randomized assignment to treatment, as compared with choice of preferred treatment as it occurs in real-world…
Descriptors: Educational Practices, Program Effectiveness, Validity, Causal Models
Harvill, Eleanor L.; Peck, Laura R.; Bell, Stephen H. – American Journal of Evaluation, 2013
Using exogenous characteristics to identify endogenous subgroups, the approach discussed in this method note creates symmetric subsets within treatment and control groups, allowing the analysis to take advantage of an experimental design. In order to maintain treatment-control symmetry, however, prior work has posited that it is necessary to use…
Descriptors: Experimental Groups, Control Groups, Research Design, Sampling
Wong, Manyee; Cook, Thomas D.; Steiner, Peter M. – Journal of Research on Educational Effectiveness, 2015
Some form of a short interrupted time series (ITS) is often used to evaluate state and national programs. An ITS design with a single treatment group assumes that the pretest functional form can be validly estimated and extrapolated into the postintervention period where it provides a valid counterfactual. This assumption is problematic. Ambiguous…
Descriptors: Evaluation Methods, Time, Federal Legislation, Educational Legislation
Scriven, Michael – Journal of MultiDisciplinary Evaluation, 2008
This review focuses on what the author terms a reconsideration of the working credentials of the randomized controlled trial (RCT) design, and includes a discussion of popularly accepted aspects as well as some new perspectives. The author concludes that there is nothing either imperative or superior about the need for RCT designs, and that an…
Descriptors: Credentials, Research Design, Summative Evaluation, Quasiexperimental Design
Leviton, Laura C.; Lipsey, Mark W. – New Directions for Evaluation, 2007
"Theory as Method: Small Theories of Treatments," by Mark W. Lipsey, is one of the most influential and highly cited articles to appear in "New Directions for Evaluation." It articulated an approach in which methods for studying causation depend, in large part, on what is known about the theory underlying the program. Lipsey discussed the benefits…
Descriptors: Attribution Theory, Research Design, Program Effectiveness, Causal Models
McDuffie, Kimberly A.; Scruggs, Thomas E. – Intervention in School and Clinic, 2008
In response to recent trends and legislation, the concept of implementing evidence-based practices has become a critical component of contemporary schooling. It is important that teachers and families of students with disabilities understand the role that qualitative research plays in determining whether a practice is in fact evidence based.…
Descriptors: Qualitative Research, Disabilities, Special Education, Evidence
Verma, Satish; Burnett, Michael – 1999
Program directors and evaluators need to address the important program accountability question of attribution of outcomes. This discussion is a beginning. Starting with some basics, such as the meaning of the program, approaches to program theory development, and the nature of attribution, the paper suggests three types of attribution. An…
Descriptors: Accountability, Causal Models, Evaluation Methods, Program Development
Stuart, Elizabeth A. – Educational Researcher, 2007
Education researchers, practitioners, and policymakers alike are committed to identifying interventions that teach students more effectively. Increased emphasis on evaluation and accountability has increased desire for sound evaluations of these interventions; and at the same time, school-level data have become increasingly available. This article…
Descriptors: Research Methodology, Computation, Causal Models, Intervention