Showing 1 to 15 of 136 results
Peer reviewed
Dahlia K. Remler; Gregg G. Van Ryzin – American Journal of Evaluation, 2025
This article reviews the origins and use of the terms quasi-experiment and natural experiment. It demonstrates how the terms conflate whether variation in the independent variable of interest falls short of random with whether researchers find, rather than intervene to create, that variation. Using the lens of assignment--the process driving…
Descriptors: Quasiexperimental Design, Research Design, Experiments, Predictor Variables
Peer reviewed
Balfe, Catherine; Button, Patrick; Penn, Mary; Schwegman, David J. – Field Methods, 2023
Audit correspondence studies are field experiments that test for discriminatory behavior in active markets. Researchers measure discrimination by comparing how responsive individuals ("audited units") are to correspondences from different types of people. This article elaborates on the tradeoffs researchers face between sending audited…
Descriptors: Field Studies, Experiments, Audits (Verification), Researchers
Peer reviewed
Verónica Pérez Bentancur; Lucía Tiscornia – Sociological Methods & Research, 2024
Experimental designs in the social sciences have received increasing attention due to their power to produce causal inferences. Nevertheless, experimental research faces limitations, including limited external validity and unrealistic treatments. We propose combining qualitative fieldwork and experimental design iteratively--moving back-and-forth…
Descriptors: Research Design, Social Science Research, Public Opinion, Punishment
Peer reviewed
Shimonovich, Michal; Pearce, Anna; Thomson, Hilary; Katikireddi, Srinivasa Vittal – Research Synthesis Methods, 2022
In fields (such as population health) where randomised trials are often lacking, systematic reviews (SRs) can harness diversity in study design, settings and populations to assess the evidence for a putative causal relationship. SRs may incorporate causal assessment approaches (CAAs), sometimes called 'causal reviews', but there is currently no…
Descriptors: Evidence, Synthesis, Causal Models, Public Health
E. C. Hedberg – American Journal of Evaluation, 2023
In cluster randomized evaluations, a treatment or intervention is randomly assigned to a set of clusters each with constituent individual units of observations (e.g., student units that attend schools, which are assigned to treatment). One consideration of these designs is how many units are needed per cluster to achieve adequate statistical…
Descriptors: Statistical Analysis, Multivariate Analysis, Randomized Controlled Trials, Research Design
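The trade-off Hedberg's abstract points to can be illustrated with the standard design-effect formula, DEFF = 1 + (m - 1) * ICC, where m is the number of units per cluster. This is a generic textbook sketch, not Hedberg's specific derivation; the function name and parameter values are illustrative.

```python
def effective_n(clusters, units_per_cluster, icc):
    """Effective sample size in a two-level cluster randomized design.

    Uses the standard design effect DEFF = 1 + (m - 1) * ICC.
    Illustrative only; not the article's exact power framework.
    """
    total = clusters * units_per_cluster
    deff = 1 + (units_per_cluster - 1) * icc
    return total / deff

# With a nonzero ICC, adding units per cluster yields diminishing
# returns: effective n can never exceed clusters / ICC.
for m in (5, 20, 80):
    print(m, round(effective_n(40, m, 0.2), 1))
```

With 40 clusters and ICC = 0.2, the effective sample size plateaus below 200 no matter how many units each cluster contributes, which is why the number of units needed per cluster is a genuine design decision rather than "more is always better."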
Peer reviewed
Byiers, Breanne J.; Pennington, Brittany; Rudolph, Brenna N.; Ford, Andrea L. B. – Journal of Behavioral Education, 2021
Single-case experimental designs (SCEDs) are a useful tool for evaluating the effects of interventions at an individual level and can play an important role in the development and validation of evidence-based practices. Historically, researchers relied on visual analysis of SCED data and eschewed statistical approaches. Although researchers…
Descriptors: Statistical Analysis, Research Design, Research Methodology, Experiments
Peer reviewed
Manolov, Rumen; Solanas, Antonio; Sierra, Vicenta – Journal of Experimental Education, 2020
Changing criterion designs (CCD) are single-case experimental designs that entail a step-by-step approximation of the final level desired for a target behavior. Following a recent review on the desirable methodological features of CCDs, the current text focuses on an analytical challenge: the definition of an objective rule for assessing the…
Descriptors: Research Design, Research Methodology, Data Analysis, Experiments
Peer reviewed
Mathôt, Sebastiaan; March, Jennifer – Language Learning, 2022
In this Methods Showcase Article, we outline a workflow for running behavioral experiments online, with a focus on experiments that rely on presentation of complex stimuli and measurement of reaction times, which includes many psycholinguistic experiments. The workflow that we describe here relies on three tools: OpenSesame/OSWeb (open source)…
Descriptors: Behavioral Science Research, Experiments, Psycholinguistics, Research Design
Peer reviewed
Pyott, Laura – Journal of Statistics and Data Science Education, 2021
Understanding the abstract principles of statistical experimental design can challenge undergraduate students, especially when learned in a lecture setting. This article presents a concrete and easily replicated example of experimental design principles in action through a hands-on learning activity for students enrolled in an experimental design…
Descriptors: Statistics Education, Research Design, Undergraduate Students, Active Learning
Peer reviewed
Leatherdale, Scott T. – International Journal of Social Research Methodology, 2019
In particular research domains, the randomized control trial (RCT) is considered to be the only means for obtaining reliable estimates of the true impact of an intervention. However, an RCT design would often not be considered ethical, politically feasible, or appropriate for evaluating the impact of many policy, programme, or structural changes…
Descriptors: Experiments, Research Methodology, Research Design, Bias
Peer reviewed
Manolov, Rumen; Tanious, René; Fernández-Castilla, Belén – Journal of Applied Behavior Analysis, 2022
In science in general and in the context of single-case experimental designs, replication of the effects of the intervention within and/or across participants or experiments is crucial for establishing causality and for assessing the generality of the intervention effect. Specific developments and proposals for assessing whether an effect has been…
Descriptors: Intervention, Behavioral Science Research, Replication (Evaluation), Research Design
Peer reviewed
Wu, Edward; Gagnon-Bartsch, Johann A. – Journal of Educational and Behavioral Statistics, 2021
In paired experiments, participants are grouped into pairs with similar characteristics, and one observation from each pair is randomly assigned to treatment. The resulting treatment and control groups should be well-balanced; however, there may still be small chance imbalances. Building on work for completely randomized experiments, we propose a…
Descriptors: Experiments, Groups, Research Design, Statistical Analysis
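The pair-randomized assignment Wu and Gagnon-Bartsch build on can be sketched in a few lines: within each matched pair, one unit is randomly sent to treatment. This is a toy sketch of the design itself, not the authors' proposed adjustment for chance imbalances; the pair labels are made up.

```python
import random

def assign_pairs(pairs, seed=0):
    """Randomly assign one unit of each matched pair to treatment.

    `pairs` is a list of (unit_a, unit_b) tuples of similar units.
    Returns (treatment, control) lists. Toy illustration only.
    """
    rng = random.Random(seed)
    treatment, control = [], []
    for a, b in pairs:
        if rng.random() < 0.5:
            treatment.append(a)
            control.append(b)
        else:
            treatment.append(b)
            control.append(a)
    return treatment, control

t, c = assign_pairs([("a1", "a2"), ("b1", "b2"), ("c1", "c2")])
```

Because every pair contributes exactly one unit to each arm, the groups are balanced on whatever the pairs were matched on; the residual "small chance imbalances" the abstract mentions arise only from characteristics the matching did not capture.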
Peer reviewed
Stapleton, David C.; Bell, Stephen H.; Hoffman, Denise; Wood, Michelle – American Journal of Evaluation, 2020
The Benefit Offset National Demonstration (BOND) tested a $1 reduction in benefits per $2 earnings increase above the level at which Social Security Disability Insurance benefits drop from full to zero under current law. BOND included a rare and large "population-representative" experiment: It applied the rule to a nationwide, random…
Descriptors: Federal Programs, Public Policy, Experiments, Comparative Analysis
Peer reviewed
Hong, Sanghyun; Reed, W. Robert – Research Synthesis Methods, 2021
The purpose of this study is to show how Monte Carlo analysis of meta-analytic estimators can be used to select estimators for specific research situations. Our analysis conducts 1620 individual experiments, where each experiment is defined by a unique combination of sample size, effect size, effect size heterogeneity, publication selection…
Descriptors: Monte Carlo Methods, Meta Analysis, Research Methodology, Experiments
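The Monte Carlo logic behind Hong and Reed's study, simulating study-level effect estimates under known conditions and then comparing estimators, can be sketched minimally. The heterogeneity model, estimators, and parameter ranges below are assumptions for illustration, far simpler than the 1,620-experiment grid the article describes.

```python
import random
import statistics

def simulate_meta(n_studies, true_effect, het_sd, seed=1):
    """Simulate study estimates with between-study heterogeneity,
    then compare two toy meta-analytic estimators: the unweighted
    mean and the inverse-variance weighted mean.

    Illustrative sketch only; not Hong & Reed's actual design.
    """
    rng = random.Random(seed)
    estimates, ses = [], []
    for _ in range(n_studies):
        se = rng.uniform(0.05, 0.5)             # study standard error
        theta = rng.gauss(true_effect, het_sd)  # heterogeneous truth
        estimates.append(rng.gauss(theta, se))  # observed estimate
        ses.append(se)
    unweighted = statistics.fmean(estimates)
    weights = [1 / s**2 for s in ses]
    weighted = sum(w * e for w, e in zip(weights, estimates)) / sum(weights)
    return unweighted, weighted
```

Repeating such a simulation across combinations of sample size, effect size, heterogeneity, and publication selection, and scoring each estimator's bias and variance against the known true effect, is the selection exercise the abstract describes.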
Peer reviewed
Shen, Zuchao; Curran, F. Chris; You, You; Splett, Joni Williams; Zhang, Huibin – Educational Evaluation and Policy Analysis, 2023
Programs that improve teaching effectiveness represent a core strategy to improve student educational outcomes and close student achievement gaps. This article compiles empirical values of intraclass correlations for designing effective and efficient experimental studies evaluating the effects of these programs. The Early Childhood Longitudinal…
Descriptors: Children, Longitudinal Studies, Surveys, Teacher Empowerment
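The intraclass correlations Shen et al. compile are the same quantity a designer would estimate from pilot data. A textbook one-way ANOVA version (balanced groups assumed) looks like this; it is a generic sketch, not the ECLS-based estimation the article reports.

```python
def icc_oneway(groups):
    """One-way ANOVA intraclass correlation for balanced groups.

    `groups` is a list of equal-length lists of outcomes, one list
    per cluster (e.g., students within schools). Textbook sketch only.
    """
    k = len(groups)                      # number of clusters
    n = len(groups[0])                   # units per cluster
    grand = sum(sum(g) for g in groups) / (k * n)
    means = [sum(g) / n for g in groups]
    msb = n * sum((m - grand) ** 2 for m in means) / (k - 1)
    msw = sum((x - m) ** 2
              for g, m in zip(groups, means) for x in g) / (k * (n - 1))
    return (msb - msw) / (msb + (n - 1) * msw)
```

An ICC near 1 means outcomes vary mostly between clusters (so clustering sharply limits power), while an ICC near 0 means clusters add little beyond independent sampling, which is why compiled empirical ICC values are directly useful for planning the experimental studies the abstract mentions.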