Showing all 12 results
Peer reviewed
Bloom, Howard S.; Spybrook, Jessaca – Journal of Research on Educational Effectiveness, 2017
Multisite trials, which are being used with increasing frequency in education and evaluation research, provide an exciting opportunity for learning about how the effects of interventions or programs are distributed across sites. In particular, these studies can produce rigorous estimates of a cross-site mean effect of program assignment…
Descriptors: Program Effectiveness, Program Evaluation, Sample Size, Evaluation Research
Peer reviewed
Ham, Amanda D.; Huggins-Hoyt, Kimberly Y.; Pettus, Joelle – Research on Social Work Practice, 2016
Objectives: This study examined how evaluation and intervention research (IR) studies assessed statistical change to ascertain effectiveness. Methods: Studies from six core social work journals (2009-2013) were reviewed (N = 1,380). Fifty-two evaluation (n = 27) and intervention (n = 25) studies met the inclusion criteria. These studies were…
Descriptors: Social Work, Program Effectiveness, Intervention, Evaluation Research
Peer reviewed
Kaniuka, Theodore S.; Vitale, Michael R.; Romance, Nancy R. – Issues in Educational Research, 2013
Successful school reform is dependent on the quality of decisions made by educational leaders. In such decision making, educational leaders are charged with using sound research findings as the basis for choosing school reform initiatives. As part of the debate regarding the usability of various evaluative research designs in providing information…
Descriptors: Program Effectiveness, Intervention, Reading Programs, Statistical Analysis
Peer reviewed; PDF available on ERIC
Schochet, Peter Z.; Puma, Mike; Deke, John – National Center for Education Evaluation and Regional Assistance, 2014
This report summarizes the complex research literature on quantitative methods for assessing how impacts of educational interventions on instructional practices and student learning differ across students, educators, and schools. It also provides technical guidance about the use and interpretation of these methods. The research topics addressed…
Descriptors: Statistical Analysis, Evaluation Methods, Educational Research, Intervention
Peer reviewed
Simpson-Beck, Victoria – Active Learning in Higher Education, 2011
Classroom assessment techniques (CATs) are teaching strategies that provide formative assessments of student learning. It has been argued that the use of CATs enhances and improves student learning. Although the various types of CATs have been extensively documented and qualitatively studied, there appears to be little quantitative research…
Descriptors: Experimental Groups, Student Evaluation, Program Effectiveness, Statistical Analysis
Peer reviewed
Fergy, Sue; Marks-Maran, Di; Ooms, Ann; Shapcott, Jean; Burke, Linda – Journal of Further and Higher Education, 2011
The Academic, Personal and Professional Learning (APPL) model of support for student nurses was developed and implemented as a pilot project in the Faculty of Health and Social Care Sciences of a university in response to a number of internal and external drivers. The common theme across these drivers was the enhancement of the social, academic…
Descriptors: Nursing Students, College Freshmen, Evaluation Research, Social Integration
Peer reviewed
Spybrook, Jessaca; Raudenbush, Stephen W. – Educational Evaluation and Policy Analysis, 2009
This article examines the power analyses for the first wave of group-randomized trials funded by the Institute of Education Sciences. Specifically, it assesses the precision and technical accuracy of the studies. The authors identified the appropriate experimental design and estimated the minimum detectable standardized effect size (MDES) for each…
Descriptors: Research Design, Research Methodology, Effect Size, Correlation
Xu, Zeyu; Nichols, Austin – National Center for Analysis of Longitudinal Data in Education Research, 2010
The gold standard in making causal inference on program effects is a randomized trial. Most randomization designs in education randomize classrooms or schools rather than individual students. Such "clustered randomization" designs have one principal drawback: They tend to have limited statistical power or precision. This study aims to…
Descriptors: Test Format, Reading Tests, Norm Referenced Tests, Research Design
Peer reviewed
Herzinger, Caitlin V.; Campbell, Jonathan M. – Journal of Autism and Developmental Disorders, 2007
There has been much research concerning functional assessment over the past 20 years, but several important research considerations have yet to be explained. One is the comparison of different types of functional assessment (e.g., experimental functional analysis and non-experimental functional assessment). The current study aims to compare the…
Descriptors: Functional Behavioral Assessment, Behavior Problems, Comparative Analysis, Research Methodology
Campbell, Donald T. – 1976
Program impact methodology--usually referred to as evaluation research--is described as it is developing in the United States. Several problems face the field of evaluation research. First, those issues grouped as "meta-scientific" include: (1) the distinction between qualitative and quantitative studies; (2) the separation of implementation and…
Descriptors: Evaluation Methods, Evaluation Problems, Evaluation Research, Program Effectiveness
Peer reviewed
Stuart, Elizabeth A. – Educational Researcher, 2007
Education researchers, practitioners, and policymakers alike are committed to identifying interventions that teach students more effectively. Increased emphasis on evaluation and accountability has increased desire for sound evaluations of these interventions; and at the same time, school-level data have become increasingly available. This article…
Descriptors: Research Methodology, Computation, Causal Models, Intervention
Peer reviewed; PDF available on ERIC
Thompson, Marnie; Goe, Laura; Paek, Pamela; Ponte, Eva – ETS Research Report Series, 2004
This report is the first of four that stem from a study of The Impact of Approved Induction Programs on Student Learning (IAIPSL), conducted by Educational Testing Service (ETS) and funded by the California Commission on Teacher Credentialing (CCTC). The IAIPSL study began in July 2002 and continued through April 2004. The purpose of the study is…
Descriptors: Beginning Teachers, Program Effectiveness, Formative Evaluation, Beginning Teacher Induction