Showing 1 to 15 of 17 results
Peer reviewed
Direct link
Dahlia K. Remler; Gregg G. Van Ryzin – American Journal of Evaluation, 2025
This article reviews the origins and use of the terms quasi-experiment and natural experiment. It demonstrates how the terms conflate whether variation in the independent variable of interest falls short of random with whether researchers find, rather than intervene to create, that variation. Using the lens of assignment--the process driving…
Descriptors: Quasiexperimental Design, Research Design, Experiments, Predictor Variables
Fangxing Bai – ProQuest LLC, 2024
Mediation analyses are crucial for understanding the mechanisms through which interventions or theoretical constructs influence outcomes within nested structures, commonly found in education, psychology, and sociology. Recognizing the importance of these effects, designing and analyzing robust studies to detect them is essential. This dissertation…
Descriptors: Research Design, Experiments, Educational Research, Multivariate Analysis
Peer reviewed
Direct link
Zuchao Shen; Ben Kelcey – Society for Research on Educational Effectiveness, 2023
I. Purpose of the Study: Detecting whether interventions work or not (through main effect analysis) can provide empirical evidence regarding the causal linkage between malleable factors (e.g., interventions) and learner outcomes. In complement, moderation analyses help delineate for whom and under what conditions intervention effects are most…
Descriptors: Intervention, Program Effectiveness, Evidence, Research Design
Peer reviewed
Direct link
Balfe, Catherine; Button, Patrick; Penn, Mary; Schwegman, David J. – Field Methods, 2023
Audit correspondence studies are field experiments that test for discriminatory behavior in active markets. Researchers measure discrimination by comparing how responsive individuals ("audited units") are to correspondences from different types of people. This article elaborates on the tradeoffs researchers face between sending audited…
Descriptors: Field Studies, Experiments, Audits (Verification), Researchers
Peer reviewed
Direct link
Verónica Pérez Bentancur; Lucía Tiscornia – Sociological Methods & Research, 2024
Experimental designs in the social sciences have received increasing attention due to their power to produce causal inferences. Nevertheless, experimental research faces limitations, including limited external validity and unrealistic treatments. We propose combining qualitative fieldwork and experimental design iteratively--moving back-and-forth…
Descriptors: Research Design, Social Science Research, Public Opinion, Punishment
Peer reviewed
Direct link
Shimonovich, Michal; Pearce, Anna; Thomson, Hilary; Katikireddi, Srinivasa Vittal – Research Synthesis Methods, 2022
In fields (such as population health) where randomised trials are often lacking, systematic reviews (SRs) can harness diversity in study design, settings and populations to assess the evidence for a putative causal relationship. SRs may incorporate causal assessment approaches (CAAs), sometimes called 'causal reviews', but there is currently no…
Descriptors: Evidence, Synthesis, Causal Models, Public Health
Eric C. Hedberg – Grantee Submission, 2023
In cluster randomized evaluations, a treatment or intervention is randomly assigned to a set of clusters each with constituent individual units of observations (e.g., student units that attend schools, which are assigned to treatment). One consideration of these designs is how many units are needed per cluster to achieve adequate statistical…
Descriptors: Statistical Analysis, Multivariate Analysis, Randomized Controlled Trials, Research Design
E. C. Hedberg – American Journal of Evaluation, 2023
In cluster randomized evaluations, a treatment or intervention is randomly assigned to a set of clusters each with constituent individual units of observations (e.g., student units that attend schools, which are assigned to treatment). One consideration of these designs is how many units are needed per cluster to achieve adequate statistical…
Descriptors: Statistical Analysis, Multivariate Analysis, Randomized Controlled Trials, Research Design
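The units-per-cluster question raised in the two Hedberg abstracts above has a standard back-of-the-envelope form: the minimum detectable effect size (MDES) of a balanced two-arm cluster randomized trial under the normal approximation. The sketch below is a generic textbook illustration of that formula, not the paper's own model; the parameter values are illustrative assumptions.

```python
import math
from statistics import NormalDist

def crt_mdes(n_clusters, units_per_cluster, icc, alpha=0.05, power=0.80):
    """MDES (in standard-deviation units) for a balanced two-arm
    cluster randomized trial, treatment assigned at the cluster level,
    using the normal approximation (no covariates)."""
    z = NormalDist().inv_cdf(1 - alpha / 2) + NormalDist().inv_cdf(power)
    # Variance of the treatment-effect estimate with half the J clusters
    # per arm: (4 / J) * (icc + (1 - icc) / n)
    var = (4.0 / n_clusters) * (icc + (1 - icc) / units_per_cluster)
    return z * math.sqrt(var)

# With nonzero ICC, adding units within a cluster shows diminishing returns:
for n in (5, 20, 100):
    print(n, round(crt_mdes(n_clusters=40, units_per_cluster=n, icc=0.15), 3))
```

Once the intraclass correlation is nonzero, the `icc` term dominates the per-cluster variance, which is why growing `units_per_cluster` eventually buys almost no additional power compared with adding clusters.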
Peer reviewed
Direct link
Byiers, Breanne J.; Pennington, Brittany; Rudolph, Brenna N.; Ford, Andrea L. B. – Journal of Behavioral Education, 2021
Single-case experimental designs (SCEDs) are a useful tool for evaluating the effects of interventions at an individual level and can play an important role in the development and validation of evidence-based practices. Historically, researchers relied on visual analysis of SCED data and eschewed statistical approaches. Although researchers…
Descriptors: Statistical Analysis, Research Design, Research Methodology, Experiments
Peer reviewed
Direct link
Mathôt, Sebastiaan; March, Jennifer – Language Learning, 2022
In this Methods Showcase Article, we outline a workflow for running behavioral experiments online, with a focus on experiments that rely on presentation of complex stimuli and measurement of reaction times, which includes many psycholinguistic experiments. The workflow that we describe here relies on three tools: OpenSesame/OSWeb (open source)…
Descriptors: Behavioral Science Research, Experiments, Psycholinguistics, Research Design
Peer reviewed
Direct link
Pyott, Laura – Journal of Statistics and Data Science Education, 2021
Understanding the abstract principles of statistical experimental design can challenge undergraduate students, especially when learned in a lecture setting. This article presents a concrete and easily replicated example of experimental design principles in action through a hands-on learning activity for students enrolled in an experimental design…
Descriptors: Statistics Education, Research Design, Undergraduate Students, Active Learning
Peer reviewed
Direct link
Manolov, Rumen; Tanious, René; Fernández-Castilla, Belén – Journal of Applied Behavior Analysis, 2022
In science in general and in the context of single-case experimental designs, replication of the effects of the intervention within and/or across participants or experiments is crucial for establishing causality and for assessing the generality of the intervention effect. Specific developments and proposals for assessing whether an effect has been…
Descriptors: Intervention, Behavioral Science Research, Replication (Evaluation), Research Design
Peer reviewed
Direct link
Wu, Edward; Gagnon-Bartsch, Johann A. – Journal of Educational and Behavioral Statistics, 2021
In paired experiments, participants are grouped into pairs with similar characteristics, and one observation from each pair is randomly assigned to treatment. The resulting treatment and control groups should be well-balanced; however, there may still be small chance imbalances. Building on work for completely randomized experiments, we propose a…
Descriptors: Experiments, Groups, Research Design, Statistical Analysis
Benjamin A. Motz; Öykü Üner; Harmony E. Jankowski; Marcus A. Christie; Kim Burgas; Diego del Blanco Orobitg; Mark A. McDaniel – Grantee Submission, 2023
For researchers seeking to improve education, a common goal is to identify teaching practices that have causal benefits in classroom settings. To test whether an instructional practice exerts a causal influence on an outcome measure, the most straightforward and compelling method is to conduct an experiment. While experimentation is common in…
Descriptors: Learning Analytics, Experiments, Learning Processes, Learning Management Systems
Peer reviewed
Direct link
Hong, Sanghyun; Reed, W. Robert – Research Synthesis Methods, 2021
The purpose of this study is to show how Monte Carlo analysis of meta-analytic estimators can be used to select estimators for specific research situations. Our analysis conducts 1620 individual experiments, where each experiment is defined by a unique combination of sample size, effect size, effect size heterogeneity, publication selection…
Descriptors: Monte Carlo Methods, Meta Analysis, Research Methodology, Experiments
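The Hong and Reed abstract above describes Monte Carlo evaluation of meta-analytic estimators under publication selection. The toy sketch below illustrates the general idea only: simulate many meta-analyses, optionally keep just "significant" studies, and watch a naive estimator inflate. All parameters and the simple unweighted estimator are illustrative assumptions, not the study's 1620-experiment design.

```python
import random
import statistics

def simulate_meta(n_studies=30, true_effect=0.3, tau=0.2,
                  select_significant=False, rng=None):
    """Simulate one meta-analysis and return the naive (unweighted)
    mean-effect estimate. Study-level true effects are drawn with
    heterogeneity tau; each study reports a noisy estimate; with
    selection on, only results with |z| > 1.96 survive (file drawer)."""
    rng = rng or random.Random(0)
    estimates = []
    while len(estimates) < n_studies:
        theta = rng.gauss(true_effect, tau)  # study-level true effect
        n = rng.choice([20, 50, 100, 200])   # per-arm sample size
        se = (2.0 / n) ** 0.5                # rough SE of the effect estimate
        d = rng.gauss(theta, se)             # observed effect
        if select_significant and abs(d / se) < 1.96:
            continue                         # non-significant result unpublished
        estimates.append(d)
    return statistics.mean(estimates)

rng = random.Random(42)
plain = statistics.mean(simulate_meta(rng=rng) for _ in range(200))
biased = statistics.mean(simulate_meta(select_significant=True, rng=rng)
                         for _ in range(200))
print(f"no selection:   {plain:.3f}")   # close to the true 0.3
print(f"with selection: {biased:.3f}")  # inflated upward
```

A real comparison of estimators would replace the naive mean with the candidates under study (e.g., inverse-variance or selection-corrected estimators) and sweep sample size, heterogeneity, and selection strength, as the abstract describes.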