What Works Clearinghouse Rating: Does not meet standards
Showing 1 to 15 of 57 results
Peer reviewed
Direct link
Sidsel Lond Grosen; Kasper Edwards – Journal of Workplace Learning, 2024
Purpose: The aim of this paper is to explore how involving workplace teams in experimenting with changes to their work practices through short, time-boxed experiments (STBEs) can support organizational learning. The paper explores how staff members' experiences with experimental practices give rise to shared knowledge and how this is supported by…
Descriptors: Organizational Learning, Experiments, Corporations, Foreign Countries
Peer reviewed
Direct link
Shen, Zuchao; Curran, F. Chris; You, You; Splett, Joni Williams; Zhang, Huibin – Educational Evaluation and Policy Analysis, 2023
Programs that improve teaching effectiveness represent a core strategy to improve student educational outcomes and close student achievement gaps. This article compiles empirical values of intraclass correlations for designing effective and efficient experimental studies evaluating the effects of these programs. The Early Childhood Longitudinal…
Descriptors: Children, Longitudinal Studies, Surveys, Teacher Empowerment
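The intraclass correlation (ICC) that the entry above compiles governs the power of cluster-randomized designs. A minimal sketch of the one-way ANOVA estimator, using simulated data with invented variance components (not the ECLS values the authors report):

```python
import random

random.seed(1)
n_schools, n_students = 40, 20
groups = []
for _ in range(n_schools):
    school_effect = random.gauss(0, 1)   # between-school sd = 1 (invented)
    groups.append([school_effect + random.gauss(0, 2)   # within-school sd = 2
                   for _ in range(n_students)])

n_total = n_schools * n_students
grand = sum(sum(g) for g in groups) / n_total

# mean squares between and within schools (one-way ANOVA)
msb = n_students * sum((sum(g) / n_students - grand) ** 2
                       for g in groups) / (n_schools - 1)
msw = sum((y - sum(g) / len(g)) ** 2
          for g in groups for y in g) / (n_total - n_schools)

# ANOVA estimator of the ICC; true value here is 1 / (1 + 4) = 0.2
icc = (msb - msw) / (msb + (n_students - 1) * msw)

# design effect: variance inflation from randomizing schools, not students
deff = 1 + (n_students - 1) * icc
```

A larger ICC means school-level randomization costs more power, which is why compiled empirical ICC values matter when planning such experiments.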
Peer reviewed
Direct link
Barnow, Burt S.; Greenberg, David H. – American Journal of Evaluation, 2020
This paper reviews the use of multiple trials, defined as multiple sites or multiple arms within a single evaluation, as well as replications, in evaluating social programs. After defining key terms, the paper discusses the rationales for conducting multiple trials, which include increasing sample size to increase statistical power; identifying the most…
Descriptors: Evaluation, Randomized Controlled Trials, Experiments, Replication (Evaluation)
Peer reviewed
Direct link
Goldhaber, Dan; Koedel, Cory – American Educational Research Journal, 2019
In the summer of 2013, the National Council on Teacher Quality (NCTQ) issued public ratings of teacher education programs. We provide the first empirical examination of NCTQ ratings, beginning with a descriptive overview of the ratings and how they evolved from 2013 to 2016. We also report on results from an information experiment built around the…
Descriptors: Accountability, Teacher Education Programs, Intervention, Program Evaluation
Peer reviewed
Direct link
White, Mark C.; Rowan, Brian; Hansen, Ben; Lycurgus, Timothy – Journal of Research on Educational Effectiveness, 2019
There is growing pressure to make efficacy experiments more useful. This requires attending to the twin goals of generalizing experimental results to those schools that will use the results and testing the intervention's theory of action. We show how electronic records, created naturally during the daily operation of technology-based…
Descriptors: Program Evaluation, Generalization, Experiments, Records (Forms)
Peer reviewed
Direct link
Barbosa, Jorge; Barbosa, Debora; Rabello, Solon – International Journal on E-Learning, 2016
The use of mobile devices and the widespread adoption of wireless networks have enabled the emergence of Ubiquitous Computing. Applying this technology to improve education strategies gave rise to Ubiquitous e-Learning, also known as Ubiquitous Learning. There are several approaches to organizing ubiquitous learning environments, but most of them…
Descriptors: Foreign Countries, Electronic Learning, Models, Cooperative Learning
Peer reviewed
PDF on ERIC: Download full text
Dong, Nianbo; Lipsey, Mark – Society for Research on Educational Effectiveness, 2014
When randomized controlled trials (RCTs) are not feasible, researchers seek other methods to make causal inferences, e.g., propensity score methods. One of the underlying assumptions for propensity score methods to obtain unbiased treatment effect estimates is the ignorability assumption, that is, conditional on the propensity score, treatment…
Descriptors: Educational Research, Benchmarking, Statistical Analysis, Computation
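The ignorability assumption discussed above can be illustrated with a toy simulation (all numbers invented). When treatment assignment depends only on an observed covariate x, stratifying on x — a crude stand-in for conditioning on the propensity score, which here is a monotone function of x — removes the confounding bias that a naive comparison of means suffers:

```python
import math
import random

random.seed(0)
n = 5000
data = []
for _ in range(n):
    x = random.gauss(0, 1)
    p_treat = 1 / (1 + math.exp(-x))        # true propensity, depends on x
    t = 1 if random.random() < p_treat else 0
    y = 2.0 * t + x + random.gauss(0, 1)    # true treatment effect = 2.0
    data.append((x, t, y))

def mean(v):
    return sum(v) / len(v)

# naive comparison is biased: x drives both treatment and outcome
naive = (mean([y for _, t, y in data if t])
         - mean([y for _, t, y in data if not t]))

# stratify on x (equivalent to stratifying on the propensity score here)
data.sort(key=lambda r: r[0])
k = 5
ate = 0.0
for i in range(k):
    stratum = data[i * n // k:(i + 1) * n // k]
    treated = [y for _, t, y in stratum if t]
    control = [y for _, t, y in stratum if not t]
    ate += (mean(treated) - mean(control)) * len(stratum) / n
```

The stratified estimate lands near the true effect of 2.0, while the naive difference overstates it — a small demonstration of why ignorability (no unobserved confounders) is the assumption that carries the whole method.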
Peer reviewed
PDF on ERIC: Download full text
Schochet, Peter Z. – National Center for Education Evaluation and Regional Assistance, 2017
Design-based methods have recently been developed as a way to analyze data from impact evaluations of interventions, programs, and policies. The impact estimators are derived using the building blocks of experimental designs with minimal assumptions, and have good statistical properties. The methods apply to randomized controlled trials (RCTs) and…
Descriptors: Design, Randomized Controlled Trials, Quasiexperimental Design, Research Methodology
Peer reviewed
PDF on ERIC: Download full text
Kautz, Tim; Schochet, Peter Z.; Tilley, Charles – National Center for Education Evaluation and Regional Assistance, 2017
A new design-based theory has recently been developed to estimate impacts for randomized controlled trials (RCTs) and basic quasi-experimental designs (QEDs) for a wide range of designs used in social policy research (Imbens & Rubin, 2015; Schochet, 2016). These methods use the potential outcomes framework and known features of study designs…
Descriptors: Design, Randomized Controlled Trials, Quasiexperimental Design, Research Methodology
Peer reviewed
PDF on ERIC: Download full text
Schochet, Peter Z. – National Center for Education Evaluation and Regional Assistance, 2017
Design-based methods have recently been developed as a way to analyze data from impact evaluations of interventions, programs, and policies (Imbens and Rubin, 2015; Schochet, 2015, 2016). The estimators are derived using the building blocks of experimental designs with minimal assumptions, and are unbiased and normally distributed in large samples…
Descriptors: Design, Randomized Controlled Trials, Quasiexperimental Design, Research Methodology
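A minimal sketch (invented data) of the kind of design-based estimator the three reports above describe for a simple RCT: the difference in means with Neyman's conservative variance estimate, which relies on the randomization itself rather than a model of the outcomes:

```python
import random

random.seed(42)
n = 2000
# simulate a simple two-arm RCT with a true impact of 1.0
units = []
for _ in range(n):
    t = random.random() < 0.5                   # randomized assignment
    y = (1.0 if t else 0.0) + random.gauss(0, 1)
    units.append((t, y))

treated = [y for t, y in units if t]
control = [y for t, y in units if not t]

# design-based impact estimate: difference in group means
impact = sum(treated) / len(treated) - sum(control) / len(control)

def var(v):
    m = sum(v) / len(v)
    return sum((y - m) ** 2 for y in v) / (len(v) - 1)

# Neyman's conservative variance estimator for the impact
se = (var(treated) / len(treated) + var(control) / len(control)) ** 0.5
ci = (impact - 1.96 * se, impact + 1.96 * se)
```

Because the estimator and its variance follow from the random assignment alone, the resulting inference needs none of the outcome-model assumptions that regression-based analyses carry.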
Peer reviewed
Direct link
Miller, Robert James; Goddard, Roger D.; Kim, Minjung; Jacob, Robin; Goddard, Yvonne; Schroeder, Patricia – Educational Administration Quarterly, 2016
Purpose: This multiyear experimental study was designed to examine (1) the causal impact of McREL International's Balanced Leadership® Professional Development (BLPD) program on school principals' learning, beliefs, and behaviors and (2) whether there were differences in the types of outcomes the professional development influenced. Outcomes…
Descriptors: Faculty Development, Instructional Leadership, Principals, Leadership Training
Peer reviewed
Direct link
Kurtz, Kenneth J.; Levering, Kimery R.; Stanton, Roger D.; Romero, Joshua; Morris, Steven N. – Journal of Experimental Psychology: Learning, Memory, and Cognition, 2013
The findings of Shepard, Hovland, and Jenkins (1961) on the relative ease of learning six elemental types of two-way classifications have been deeply influential twice over: first, as a rebuke to pure stimulus generalization accounts, and again as the leading benchmark for evaluating formal models of human category learning. The litmus test for models…
Descriptors: Stimuli, Program Evaluation, Stimulus Generalization, Experiments
Peer reviewed
Direct link
Mueller, Christoph Emanuel; Gaus, Hansjoerg – American Journal of Evaluation, 2015
In this article, we test an alternative approach to creating a counterfactual basis for estimating individual and average treatment effects. Instead of using control/comparison groups or before-measures, the so-called Counterfactual as Self-Estimated by Program Participants (CSEPP) relies on program participants' self-estimations of their own…
Descriptors: Intervention, Research Design, Research Methodology, Program Evaluation
Peer reviewed
Direct link
Sørlie, Mari-Anne; Ogden, Terje – International Journal of School & Educational Psychology, 2014
This paper reviews literature on the rationale, challenges, and recommendations for choosing a nonequivalent comparison (NEC) group design when evaluating intervention effects. After reviewing frequently addressed threats to validity, the paper describes recommendations for strengthening the research design and how the recommendations were…
Descriptors: Validity, Research Design, Experiments, Prevention
Jaciw, Andrew; Newman, Denis – Society for Research on Educational Effectiveness, 2011
The purpose of the current work is to apply several main principles of the causal explanatory approach for establishing external validity to the experimental arena. By spanning the paradigm of the experimental approach and the school of program evaluation founded by Lee Cronbach and colleagues, the authors address the question of how research…
Descriptors: Validity, Experiments, Research Methodology, Generalization