| Publication Date | Count |
| --- | --- |
| In 2026 | 0 |
| Since 2025 | 0 |
| Since 2022 (last 5 years) | 2 |
| Since 2017 (last 10 years) | 8 |
| Since 2007 (last 20 years) | 23 |
| Descriptor | Count |
| --- | --- |
| Experiments | 57 |
| Program Evaluation | 57 |
| Research Methodology | 18 |
| Evaluation Methods | 17 |
| Program Effectiveness | 15 |
| Educational Research | 13 |
| Research Design | 13 |
| Intervention | 9 |
| Statistical Analysis | 9 |
| Validity | 8 |
| Foreign Countries | 7 |
| Author | Count |
| --- | --- |
| Schochet, Peter Z. | 4 |
| Peck, Laura R. | 2 |
| Allen, Joseph P. | 1 |
| Barbosa, Debora | 1 |
| Barbosa, Jorge | 1 |
| Barnett, W. Steven | 1 |
| Barnow, Burt S. | 1 |
| Bell, Stephen | 1 |
| Berends, Mark | 1 |
| Borman, Geoffrey D. | 1 |
| Boruch, Robert | 1 |
| Education Level | Count |
| --- | --- |
| Elementary Education | 6 |
| Higher Education | 5 |
| Postsecondary Education | 4 |
| Early Childhood Education | 2 |
| Elementary Secondary Education | 1 |
| High Schools | 1 |
| Kindergarten | 1 |
| Secondary Education | 1 |
| Audience | Count |
| --- | --- |
| Policymakers | 2 |
| Administrators | 1 |
| Practitioners | 1 |
| Location | Count |
| --- | --- |
| Connecticut | 3 |
| Germany | 2 |
| New York | 2 |
| Tennessee | 2 |
| Brazil | 1 |
| Denmark | 1 |
| Florida | 1 |
| Greece (Athens) | 1 |
| Indiana | 1 |
| Iowa | 1 |
| Maine | 1 |
| Laws, Policies, & Programs | Count |
| --- | --- |
| Adoption Assistance and Child… | 1 |
| Adoption and Safe Families… | 1 |
| Comprehensive Employment and… | 1 |
| No Child Left Behind Act 2001 | 1 |
| Assessments and Surveys | Count |
| --- | --- |
| Comprehensive Tests of Basic… | 1 |
| Early Childhood Longitudinal… | 1 |
| What Works Clearinghouse Rating | Count |
| --- | --- |
| Does not meet standards | 1 |
Sidsel Lond Grosen; Kasper Edwards – Journal of Workplace Learning, 2024
Purpose: The aim of this paper is to explore how the involvement of workplace teams in experimenting with changes in their work practices through short, time-boxed experiments (STBEs) can support organizational learning. The paper examines how staff members' experiences with experimental practices give rise to shared knowledge and how this is supported by…
Descriptors: Organizational Learning, Experiments, Corporations, Foreign Countries
Shen, Zuchao; Curran, F. Chris; You, You; Splett, Joni Williams; Zhang, Huibin – Educational Evaluation and Policy Analysis, 2023
Programs that improve teaching effectiveness represent a core strategy to improve student educational outcomes and close student achievement gaps. This article compiles empirical values of intraclass correlations for designing effective and efficient experimental studies evaluating the effects of these programs. The Early Childhood Longitudinal…
Descriptors: Children, Longitudinal Studies, Surveys, Teacher Empowerment
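The entry above compiles empirical intraclass correlations (ICCs) for planning cluster-randomized studies. As a hypothetical illustration of the quantity involved (the function and data below are my own invention, not values from the article, which draws on the Early Childhood Longitudinal Study), ICC(1) can be estimated from a one-way ANOVA variance decomposition:

```python
from statistics import mean

def icc_oneway(groups):
    """ICC(1) via one-way ANOVA mean squares.

    groups: list of lists, one inner list of outcomes per cluster
    (equal cluster sizes assumed for simplicity).
    """
    k = len(groups)        # number of clusters
    n = len(groups[0])     # units per cluster
    grand = mean(x for g in groups for x in g)
    # Between-cluster mean square
    msb = n * sum((mean(g) - grand) ** 2 for g in groups) / (k - 1)
    # Within-cluster mean square
    msw = sum((x - mean(g)) ** 2 for g in groups for x in g) / (k * (n - 1))
    return (msb - msw) / (msb + (n - 1) * msw)

# Three classrooms, three pupils each; scores cluster tightly within
# classrooms, so the ICC comes out close to 1.
classes = [[10.0, 11.0, 12.0], [20.0, 21.0, 22.0], [30.0, 31.0, 32.0]]
icc = icc_oneway(classes)
```

A large ICC like this one means most outcome variance sits between clusters, which sharply raises the number of clusters an experiment needs.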
Barnow, Burt S.; Greenberg, David H. – American Journal of Evaluation, 2020
This paper reviews the use of multiple trials, defined as multiple sites or multiple arms in a single evaluation and replications, in evaluating social programs. After defining key terms, the paper discusses the rationales for conducting multiple trials, which include increasing sample size to increase statistical power; identifying the most…
Descriptors: Evaluation, Randomized Controlled Trials, Experiments, Replication (Evaluation)
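The first rationale above, pooling trials to increase sample size and statistical power, can be made concrete with a standard normal-approximation power formula (a generic sketch, not the authors' calculation; the 0.2-SD effect and sample sizes are invented):

```python
from math import erf, sqrt

def norm_cdf(z):
    """Standard normal CDF."""
    return 0.5 * (1.0 + erf(z / sqrt(2.0)))

def two_arm_power(effect_sd, n_per_arm):
    """Approximate power of a two-arm trial for an effect of
    `effect_sd` standard deviations (two-sided test, alpha = 0.05)."""
    z_crit = 1.96                      # Phi^{-1}(0.975)
    se = sqrt(2.0 / n_per_arm)         # SE of the difference, in SD units
    return norm_cdf(effect_sd / se - z_crit)

# One site with 50 per arm vs. four pooled sites with 200 per arm.
single_site = two_arm_power(0.2, 50)
pooled = two_arm_power(0.2, 200)
```

Quadrupling the sample halves the standard error, which is exactly the gain from combining sites or replications that the paper discusses.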
Goldhaber, Dan; Koedel, Cory – American Educational Research Journal, 2019
In the summer of 2013, the National Council on Teacher Quality (NCTQ) issued public ratings of teacher education programs. We provide the first empirical examination of NCTQ ratings, beginning with a descriptive overview of the ratings and how they evolved from 2013 to 2016. We also report on results from an information experiment built around the…
Descriptors: Accountability, Teacher Education Programs, Intervention, Program Evaluation
White, Mark C.; Rowan, Brian; Hansen, Ben; Lycurgus, Timothy – Journal of Research on Educational Effectiveness, 2019
There is growing pressure to make efficacy experiments more useful. This requires attending to the twin goals of generalizing experimental results to those schools that will use the results and testing the intervention's theory of action. We show how electronic records, created naturally during the daily operation of technology-based…
Descriptors: Program Evaluation, Generalization, Experiments, Records (Forms)
Barbosa, Jorge; Barbosa, Debora; Rabello, Solon – International Journal on E-Learning, 2016
Use of mobile devices and widespread adoption of wireless networks have enabled the emergence of Ubiquitous Computing. Application of this technology to improving education strategies gave rise to Ubiquitous e-Learning, also known as Ubiquitous Learning. There are several approaches to organizing ubiquitous learning environments, but most of them…
Descriptors: Foreign Countries, Electronic Learning, Models, Cooperative Learning
Dong, Nianbo; Lipsey, Mark – Society for Research on Educational Effectiveness, 2014
When randomized controlled trials (RCTs) are not feasible, researchers seek other methods to make causal inferences, e.g., propensity score methods. One of the underlying assumptions for propensity score methods to obtain unbiased treatment effect estimates is the ignorability assumption, that is, conditional on the propensity score, treatment…
Descriptors: Educational Research, Benchmarking, Statistical Analysis, Computation
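As a rough sketch of the propensity-score adjustment the abstract benchmarks (my own toy example, not the authors' procedure; the scores are taken as given, though in practice they would be estimated, e.g. by a logistic regression of treatment on covariates):

```python
def stratified_effect(units, n_strata=5):
    """Average treated-control outcome gap within propensity strata,
    weighted by stratum size.

    units: list of (propensity_score, treated_flag, outcome) tuples.
    """
    units = sorted(units)                  # order by propensity score
    size = len(units) // n_strata
    total, weighted = 0, 0.0
    for s in range(n_strata):
        hi = (s + 1) * size if s < n_strata - 1 else len(units)
        stratum = units[s * size:hi]
        t = [y for _, d, y in stratum if d == 1]
        c = [y for _, d, y in stratum if d == 0]
        if t and c:                        # skip strata without overlap
            gap = sum(t) / len(t) - sum(c) / len(c)
            total += len(stratum)
            weighted += len(stratum) * gap
    return weighted / total

# Synthetic data: outcome = 10 * score + 2 * treated, so the true
# effect is 2; treated units cluster at high scores (confounding).
units = [
    (0.1, 0, 1.0), (0.2, 0, 2.0), (0.2, 1, 4.0), (0.3, 0, 3.0),
    (0.7, 1, 9.0), (0.8, 0, 8.0), (0.8, 1, 10.0), (0.9, 1, 11.0),
]
adjusted = stratified_effect(units, n_strata=2)
naive = (sum(y for _, d, y in units if d) / 4
         - sum(y for _, d, y in units if not d) / 4)
```

Here the naive treated-control gap is 5.0, while stratifying on the score recovers the true effect of 2.0 — the kind of bias reduction whose adequacy the paper assesses against RCT benchmarks.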
Schochet, Peter Z. – National Center for Education Evaluation and Regional Assistance, 2017
Design-based methods have recently been developed as a way to analyze data from impact evaluations of interventions, programs, and policies. The impact estimators are derived using the building blocks of experimental designs with minimal assumptions, and have good statistical properties. The methods apply to randomized controlled trials (RCTs) and…
Descriptors: Design, Randomized Controlled Trials, Quasiexperimental Design, Research Methodology
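A minimal sketch of the design-based flavor of estimator described here (my own illustration with invented data, not Schochet's derivation), assuming a completely randomized trial: the difference in means paired with the conservative Neyman variance estimate:

```python
from statistics import mean, variance   # variance = sample variance (n-1)

def neyman(treated, control):
    """Design-based impact estimate for a completely randomized trial.

    Returns (difference in means, conservative Neyman variance).
    """
    impact = mean(treated) - mean(control)
    var = variance(treated) / len(treated) + variance(control) / len(control)
    return impact, var

y1 = [5.0, 7.0, 6.0, 8.0]   # treatment-group outcomes (illustrative)
y0 = [4.0, 5.0, 3.0, 4.0]   # control-group outcomes (illustrative)
est, var = neyman(y1, y0)
```

The only assumption used is the randomization itself, which is what gives these estimators their minimal-assumption appeal.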
Kautz, Tim; Schochet, Peter Z.; Tilley, Charles – National Center for Education Evaluation and Regional Assistance, 2017
A new design-based theory has recently been developed to estimate impacts for randomized controlled trials (RCTs) and basic quasi-experimental designs (QEDs) for a wide range of designs used in social policy research (Imbens & Rubin, 2015; Schochet, 2016). These methods use the potential outcomes framework and known features of study designs…
Descriptors: Design, Randomized Controlled Trials, Quasiexperimental Design, Research Methodology
Schochet, Peter Z. – National Center for Education Evaluation and Regional Assistance, 2017
Design-based methods have recently been developed as a way to analyze data from impact evaluations of interventions, programs, and policies (Imbens and Rubin, 2015; Schochet, 2015, 2016). The estimators are derived using the building blocks of experimental designs with minimal assumptions, and are unbiased and normally distributed in large samples…
Descriptors: Design, Randomized Controlled Trials, Quasiexperimental Design, Research Methodology
Miller, Robert James; Goddard, Roger D.; Kim, Minjung; Jacob, Robin; Goddard, Yvonne; Schroeder, Patricia – Educational Administration Quarterly, 2016
Purpose: This multiyear experimental study was designed to examine (1) the causal impact of McREL International's Balanced Leadership® Professional Development (BLPD) program on school principals' learning, beliefs, and behaviors and (2) whether there were differences in the types of outcomes the professional development influenced. Outcomes…
Descriptors: Faculty Development, Instructional Leadership, Principals, Leadership Training
Kurtz, Kenneth J.; Levering, Kimery R.; Stanton, Roger D.; Romero, Joshua; Morris, Steven N. – Journal of Experimental Psychology: Learning, Memory, and Cognition, 2013
The findings of Shepard, Hovland, and Jenkins (1961) on the relative ease of learning six elemental types of two-way classifications have been deeply influential twice over: first, as a rebuke to pure stimulus generalization accounts, and again as the leading benchmark for evaluating formal models of human category learning. The litmus test for models…
Descriptors: Stimuli, Program Evaluation, Stimulus Generalization, Experiments
Mueller, Christoph Emanuel; Gaus, Hansjoerg – American Journal of Evaluation, 2015
In this article, we test an alternative approach to creating a counterfactual basis for estimating individual and average treatment effects. Instead of using control/comparison groups or before-measures, the so-called Counterfactual as Self-Estimated by Program Participants (CSEPP) relies on program participants' self-estimations of their own…
Descriptors: Intervention, Research Design, Research Methodology, Program Evaluation
Sørlie, Mari-Anne; Ogden, Terje – International Journal of School & Educational Psychology, 2014
This paper reviews literature on the rationale, challenges, and recommendations for choosing a nonequivalent comparison (NEC) group design when evaluating intervention effects. After reviewing frequently addressed threats to validity, the paper describes recommendations for strengthening the research design and how the recommendations were…
Descriptors: Validity, Research Design, Experiments, Prevention
Jaciw, Andrew; Newman, Denis – Society for Research on Educational Effectiveness, 2011
The purpose of the current work is to apply several main principles of the causal explanatory approach for establishing external validity to the experimental arena. By spanning the paradigm of the experimental approach and the school of program evaluation founded by Lee Cronbach and colleagues, the authors address the question of how research…
Descriptors: Validity, Experiments, Research Methodology, Generalization
