Showing all 7 results
Peer reviewed
Reichardt, Charles S. – American Journal of Evaluation, 2022
Evaluators are often called upon to assess the effects of programs. To assess a program effect, evaluators need a clear understanding of how a program effect is defined. Arguably, the most widely used definition of a program effect is the counterfactual one. According to the counterfactual definition, a program effect is the difference between… (a minimal sketch of this definition follows the entry)
Descriptors: Program Evaluation, Definitions, Causal Models, Evaluation Methods
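The counterfactual definition invoked above can be made concrete with simulated potential outcomes. This is a minimal Python sketch, not the article's own notation; the simulated data and the five-point true effect are invented for illustration.

```python
import numpy as np

rng = np.random.default_rng(0)
n = 1_000

# Potential outcomes for each unit: y0 without the program, y1 with it.
y0 = rng.normal(50, 10, n)           # outcome if the unit does not receive the program
y1 = y0 + 5 + rng.normal(0, 2, n)    # outcome if it does (true average effect of 5)

# Counterfactual definition: the unit-level effect is y1 - y0,
# and the average program effect is the mean of those differences.
print("true average program effect:", (y1 - y0).mean())

# Only one potential outcome is ever observed per unit, so in practice
# the effect is estimated, e.g., from a randomized comparison of means.
z = rng.integers(0, 2, n)            # random assignment to the program
y_obs = np.where(z == 1, y1, y0)     # the observed outcome
print("randomized estimate:", y_obs[z == 1].mean() - y_obs[z == 0].mean())
```

The second estimate illustrates why randomization matters under this definition: it recovers the counterfactual difference on average even though neither potential outcome pair is fully observed.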
Peer reviewed
Pitas, Nicholas; Murray, Alison; Olsen, Max; Graefe, Alan – Journal of Extension, 2017
This article describes a modified importance-performance framework for use in evaluation of recreation-based experiential learning programs. Importance-performance analysis (IPA) provides an effective and readily applicable means of evaluating many programs, but the near universal satisfaction associated with recreation inhibits the use of IPA in… (a sketch of the classic IPA grid follows the entry)
Descriptors: Experiential Learning, Recreational Programs, Program Evaluation, Statistical Analysis
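The article's modified framework is not reproduced here; as background, the sketch below implements the classic importance-performance grid that the modification builds on, splitting attributes into four action quadrants at the axis means. The attribute names and ratings are hypothetical.

```python
import numpy as np

# Hypothetical mean ratings (1-5 scale) for program attributes.
attributes  = ["instruction", "safety", "equipment", "food"]
importance  = np.array([4.6, 4.8, 3.2, 2.9])
performance = np.array([3.1, 4.5, 4.2, 2.8])

# Classic IPA: split each axis at its mean and assign a quadrant.
imp_cut, perf_cut = importance.mean(), performance.mean()
for name, imp, perf in zip(attributes, importance, performance):
    if imp >= imp_cut and perf < perf_cut:
        quadrant = "concentrate here"
    elif imp >= imp_cut:
        quadrant = "keep up the good work"
    elif perf < perf_cut:
        quadrant = "low priority"
    else:
        quadrant = "possible overkill"
    print(f"{name}: {quadrant}")
```

The near-universal satisfaction the abstract mentions shows up in this scheme as performance scores clustered near the top of the scale, which compresses the performance axis and makes the quadrant split uninformative.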
Peer reviewed
Raudenbush, Stephen W.; Reardon, Sean F.; Nomi, Takako – Journal of Research on Educational Effectiveness, 2012
Multisite trials can clarify the average impact of a new program and the heterogeneity of impacts across sites. Unfortunately, in many applications, compliance with treatment assignment is imperfect. For these applications, we propose an instrumental variable (IV) model with person-specific and site-specific random coefficients. Site-specific IV… (a per-site IV sketch follows the entry)
Descriptors: Program Evaluation, Statistical Analysis, Hierarchical Linear Modeling, Computation
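The article's random-coefficient IV model is not reproduced here; the sketch below shows the simpler per-site building block, a Wald (IV) estimate in which the intention-to-treat effect on the outcome is divided by the intention-to-treat effect on treatment receipt. Compliance rates, effect sizes, and site counts are all invented.

```python
import numpy as np

rng = np.random.default_rng(1)

def site_wald_estimate(z, d, y):
    """Wald (IV) estimate for one site: ITT effect on the outcome y
    divided by the ITT effect on treatment receipt d."""
    itt_y = y[z == 1].mean() - y[z == 0].mean()
    itt_d = d[z == 1].mean() - d[z == 0].mean()
    return itt_y / itt_d

estimates = []
for site in range(20):
    n = 200
    z = rng.integers(0, 2, n)                     # random assignment
    # Imperfect compliance: 70% take-up among the assigned,
    # 10% crossover among controls (hypothetical rates).
    d = rng.random(n) < np.where(z == 1, 0.7, 0.1)
    effect = 4 + rng.normal(0, 1)                 # site-specific true effect
    y = 50 + effect * d + rng.normal(0, 5, n)
    estimates.append(site_wald_estimate(z, d, y))

print("mean of site IV estimates:", np.mean(estimates))
print("between-site SD of estimates:", np.std(estimates, ddof=1))
```

The article's contribution is to model these site-specific effects jointly with random coefficients rather than summarizing raw per-site estimates, whose between-site spread also reflects sampling error.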
Peer reviewed
PDF on ERIC
Schochet, Peter Z.; Puma, Mike; Deke, John – National Center for Education Evaluation and Regional Assistance, 2014
This report summarizes the complex research literature on quantitative methods for assessing how impacts of educational interventions on instructional practices and student learning differ across students, educators, and schools. It also provides technical guidance about the use and interpretation of these methods. The research topics addressed… (a minimal subgroup-impact sketch follows the entry)
Descriptors: Statistical Analysis, Evaluation Methods, Educational Research, Intervention
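As one minimal illustration of differential impacts, the sketch below estimates how a randomized intervention's effect varies across a student subgroup via a treatment-by-subgroup interaction in ordinary least squares. The data and effect sizes are invented; the report's own methods span many more designs than this single-trial case.

```python
import numpy as np

rng = np.random.default_rng(2)
n = 2_000

treat    = rng.integers(0, 2, n)     # randomized treatment indicator
subgroup = rng.integers(0, 2, n)     # example student subgroup indicator
# True impacts (invented): 3 points overall, 2 extra in the subgroup.
y = 100 + 3 * treat + 2 * treat * subgroup + rng.normal(0, 10, n)

# OLS with a treatment-by-subgroup interaction: the interaction
# coefficient estimates the difference in impacts between subgroups.
X = np.column_stack([np.ones(n), treat, subgroup, treat * subgroup])
beta, *_ = np.linalg.lstsq(X, y, rcond=None)
print("impact in the reference group:", beta[1])
print("additional impact in the subgroup:", beta[3])
```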
Rosenthal, James A. – Springer, 2011
Written by a social worker for social work students, this is a nuts-and-bolts guide to statistics that presents complex calculations and concepts in clear, easy-to-understand language. It includes numerous examples, data sets, and issues that students will encounter in social work practice. The first section introduces basic concepts and terms to…
Descriptors: Statistics, Data Interpretation, Social Work, Social Science Research
Peer reviewed
Sechrest, Lee, Ed. – New Directions for Program Evaluation, 1993
Two chapters of this issue consider critical multiplism as a research strategy with links to meta-analysis and generalizability theory. The unifying perspective it can provide for quantitative and qualitative evaluation is discussed. The third chapter explores meta-analysis as a way to improve causal inferences in nonexperimental data (a fixed-effect pooling sketch follows the entry). (SLD)
Descriptors: Causal Models, Evaluation Methods, Generalizability Theory, Inferences
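The chapter's treatment of meta-analysis is conceptual; for concreteness, the sketch below shows standard fixed-effect (inverse-variance) pooling of study effect sizes, not the chapter's specific approach. The effects and standard errors are hypothetical.

```python
import numpy as np

# Hypothetical standardized effect sizes and standard errors
# from five studies (invented for illustration).
effects = np.array([0.30, 0.10, 0.45, 0.22, 0.05])
se      = np.array([0.12, 0.08, 0.20, 0.10, 0.15])

# Fixed-effect meta-analysis: weight each study by inverse variance.
w = 1.0 / se**2
pooled    = np.sum(w * effects) / np.sum(w)
pooled_se = np.sqrt(1.0 / np.sum(w))
print(f"pooled effect: {pooled:.3f} (SE {pooled_se:.3f})")
```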
Peer reviewed
Stuart, Elizabeth A. – Educational Researcher, 2007
Education researchers, practitioners, and policymakers alike are committed to identifying interventions that teach students more effectively. Growing emphasis on evaluation and accountability has heightened the demand for sound evaluations of these interventions, and at the same time, school-level data have become increasingly available. This article…
Descriptors: Research Methodology, Computation, Causal Models, Intervention