Showing 1 to 15 of 47 results
Peer reviewed
Ben Kelcey; Fangxing Bai; Amota Ataneka; Yanli Xie; Kyle Cox – Society for Research on Educational Effectiveness, 2024
We consider a class of multiple-group individually-randomized group trials (IRGTs) that introduces a (partially) cross-classified structure in the treatment condition (only). The novel feature of this design is that the nature of the treatment induces a clustering structure that involves two or more non-nested groups among individuals in the…
Descriptors: Randomized Controlled Trials, Research Design, Statistical Analysis, Error of Measurement
Peer reviewed
Kaltsonoudi, Kalliope; Tsigilis, Nikolaos; Karteroliotis, Konstantinos – Measurement in Physical Education and Exercise Science, 2022
Common method variance refers to the amount of uncontrolled systematic error that leads to biased estimates of scale reliability and validity, and to spurious covariance shared among variables, due to a common method and/or common source employed in survey-based research. As the extended use of self-report questionnaires is inevitable, numerous studies…
Descriptors: Athletics, Research, Research Methodology, Error of Measurement
Wendy Chan; Larry Vernon Hedges – Journal of Educational and Behavioral Statistics, 2022
Multisite field experiments using the (generalized) randomized block design that assign treatments to individuals within sites are common in education and the social sciences. Under this design, there are two possible estimands of interest and they differ based on whether sites or blocks have fixed or random effects. When the average treatment…
Descriptors: Research Design, Educational Research, Statistical Analysis, Statistical Inference
Qinyun Lin; Amy K. Nuttall; Qian Zhang; Kenneth A. Frank – Grantee Submission, 2023
Empirical studies often demonstrate multiple causal mechanisms potentially involving simultaneous or causally related mediators. However, researchers often use simple mediation models to understand the processes because they do not or cannot measure other theoretically relevant mediators. In such cases, another potentially relevant but unobserved…
Descriptors: Causal Models, Mediation Theory, Error of Measurement, Statistical Inference
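The simple mediation models this abstract refers to are typically estimated with the product-of-coefficients approach: an X→M slope (a) multiplied by an M→Y slope (b). A minimal sketch of that logic, not taken from the paper (a full analysis would adjust b for X and quantify uncertainty):

```python
def ols_slope(x, y):
    """Simple-regression slope: cov(x, y) / var(x)."""
    n = len(x)
    mx, my = sum(x) / n, sum(y) / n
    cov = sum((xi - mx) * (yi - my) for xi, yi in zip(x, y))
    var = sum((xi - mx) ** 2 for xi in x)
    return cov / var

def mediated_effect(x, m, y):
    """Product-of-coefficients estimate of the indirect effect a*b,
    where a is the X->M slope and b is the M->Y slope.
    (Toy sketch: a real analysis would estimate b adjusting for X.)"""
    a = ols_slope(x, m)
    b = ols_slope(m, y)
    return a * b
```

With noiseless data where M = 2X and Y = 3M, the indirect effect recovered is a*b = 6. The paper's point is that when another relevant mediator is unobserved, such estimates can be badly biased.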
Peer reviewed
Schochet, Peter Z. – Journal of Educational and Behavioral Statistics, 2022
This article develops new closed-form variance expressions for power analyses for commonly used difference-in-differences (DID) and comparative interrupted time series (CITS) panel data estimators. The main contribution is to incorporate variation in treatment timing into the analysis. The power formulas also account for other key design features…
Descriptors: Comparative Analysis, Statistical Analysis, Sample Size, Measurement Techniques
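Schochet's closed-form variance expressions feed into the standard power formula for a two-sided test, power ≈ Φ(|effect|/SE − z₁₋α/₂). The sketch below shows only that generic final step (the DID/CITS variance expressions themselves are in the article and are not reproduced here):

```python
import math

def normal_cdf(z):
    """Standard normal CDF via the error function."""
    return 0.5 * (1.0 + math.erf(z / math.sqrt(2.0)))

def z_quantile(p):
    """Standard normal quantile by bisection on the CDF."""
    lo, hi = -10.0, 10.0
    for _ in range(100):
        mid = (lo + hi) / 2.0
        if normal_cdf(mid) < p:
            lo = mid
        else:
            hi = mid
    return (lo + hi) / 2.0

def power_two_sided(effect, se, alpha=0.05):
    """Approximate power of a two-sided z-test:
    power ~ Phi(|effect|/SE - z_{1-alpha/2}),
    ignoring the negligible far rejection tail."""
    z_crit = z_quantile(1.0 - alpha / 2.0)
    return normal_cdf(abs(effect) / se - z_crit)
```

For example, an effect 2.8 standard errors in size yields power of roughly 0.80, the conventional planning target.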
Peer reviewed
Kristin Porter; Luke Miratrix; Kristen Hunter – Society for Research on Educational Effectiveness, 2021
Background: Researchers are often interested in testing the effectiveness of an intervention on multiple outcomes, for multiple subgroups, at multiple points in time, or across multiple treatment groups. The resulting multiplicity of statistical hypothesis tests can lead to spurious findings of effects. Multiple testing procedures (MTPs)…
Descriptors: Statistical Analysis, Hypothesis Testing, Computer Software, Randomized Controlled Trials
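The multiplicity problem described here is conventionally handled with a multiple testing procedure. As a concrete illustration (one standard MTP, not necessarily the procedures evaluated in the paper), the Benjamini-Hochberg step-up procedure controls the false-discovery rate:

```python
def benjamini_hochberg(pvalues, q=0.05):
    """Benjamini-Hochberg step-up procedure: returns the (sorted) indices
    of hypotheses rejected at false-discovery rate q."""
    m = len(pvalues)
    order = sorted(range(m), key=lambda i: pvalues[i])
    k = 0  # largest rank whose p-value clears its stepwise threshold
    for rank, i in enumerate(order, start=1):
        if pvalues[i] <= rank * q / m:
            k = rank
    # reject the k smallest p-values
    return sorted(order[:k])
```

For p-values [0.01, 0.04, 0.03, 0.005, 0.20] at q = 0.05, the first four hypotheses are rejected and the last is retained. Such corrections cost power, which is why MTP-aware power analysis (the paper's topic) matters at the design stage.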
Peer reviewed
Stallasch, Sophie E.; Lüdtke, Oliver; Artelt, Cordula; Brunner, Martin – Journal of Research on Educational Effectiveness, 2021
To plan cluster-randomized trials with sufficient statistical power to detect intervention effects on student achievement, researchers need multilevel design parameters, including measures of between-classroom and between-school differences and the amounts of variance explained by covariates at the student, classroom, and school level. Previous…
Descriptors: Foreign Countries, Randomized Controlled Trials, Intervention, Educational Research
Peer reviewed
PDF on ERIC
Wang, Jianjun; Ma, Xin – Athens Journal of Education, 2019
This rejoinder keeps the original focus on statistical computing pertaining to the correlation of student achievement between mathematics and science in the Trends in International Mathematics and Science Study (TIMSS). Despite the availability of student performance data in TIMSS and the emphasis on the inter-subject connection in the Next Generation Science…
Descriptors: Scores, Correlation, Achievement Tests, Elementary Secondary Education
Westlund, Erik; Stuart, Elizabeth A. – American Journal of Evaluation, 2017
This article discusses the nonuse, misuse, and proper use of pilot studies in experimental evaluation research. The authors first show that there is little theoretical, practical, or empirical guidance available to researchers who seek to incorporate pilot studies into experimental evaluation research designs. The authors then discuss how pilot…
Descriptors: Use Studies, Pilot Projects, Evaluation Research, Experiments
Peer reviewed
PDF on ERIC
Ryan, Wendy L.; St. Iago-McRae, Ezry – Bioscene: Journal of College Biology Teaching, 2016
Experimentation is the foundation of science and an important process for students to understand and experience. However, it can be difficult to teach some aspects of experimentation within the time and resource constraints of an academic semester. Interactive models can be a useful tool in bridging this gap. This freely accessible simulation…
Descriptors: Research Design, Simulation, Animals, Animal Behavior
Peer reviewed
VanHoudnos, Nathan M.; Greenhouse, Joel B. – Journal of Educational and Behavioral Statistics, 2016
When cluster randomized experiments are analyzed as if units were independent, test statistics for treatment effects can be anticonservative. Hedges proposed a correction for such tests by scaling them to control their Type I error rate. This article generalizes the Hedges correction from a posttest-only experimental design to more common designs…
Descriptors: Statistical Analysis, Randomized Controlled Trials, Error of Measurement, Scaling
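The intuition behind the anticonservativeness this abstract describes is the design effect: ignoring clustering understates standard errors by a factor of √DEFF, where DEFF = 1 + (n − 1)·ICC for clusters of size n. The sketch below shows only this intuition; the Hedges correction itself is more involved, rescaling the test statistic and adjusting its degrees of freedom:

```python
import math

def design_effect(cluster_size, icc):
    """Variance inflation from analyzing clustered data as if units
    were independent: DEFF = 1 + (n - 1) * ICC."""
    return 1.0 + (cluster_size - 1) * icc

def naive_t_inflation(cluster_size, icc):
    """Rough factor by which a naive t statistic is inflated when
    clustering is ignored: sqrt(DEFF)."""
    return math.sqrt(design_effect(cluster_size, icc))
```

With classrooms of 26 students and an ICC of 0.2, DEFF = 6, so a naive t statistic is inflated by about 2.45 and Type I error rates far exceed the nominal level.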
Peer reviewed
McNeish, Daniel – Review of Educational Research, 2017
In education research, small samples are common because of financial limitations, logistical challenges, or exploratory studies. With small samples, statistical principles on which researchers rely do not hold, leading to trust issues with model estimates and possible replication issues when scaling up. Researchers are generally aware of such…
Descriptors: Models, Statistical Analysis, Sampling, Sample Size
Peer reviewed
PDF on ERIC
Tang, Yang; Cook, Thomas D.; Kisbu-Sakarya, Yasemin – Society for Research on Educational Effectiveness, 2015
Regression discontinuity (RD) designs have been widely used to produce reliable causal estimates. Researchers have validated the accuracy of RD designs using within-study comparisons (Cook, Shadish & Wong, 2008; Cook & Steiner, 2010; Shadish et al., 2011). A within-study comparison examines the validity of a quasi-experiment by comparing its…
Descriptors: Pretests Posttests, Statistical Bias, Accuracy, Regression (Statistics)
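The RD estimand the abstract refers to is the jump in the outcome at the cutoff of a running variable. The crudest possible sketch of that estimate, a local difference in means within a bandwidth (not the local-linear estimators used in practice, and not the paper's method), is:

```python
def rd_estimate(running, outcome, cutoff, bandwidth):
    """Naive regression-discontinuity estimate: difference in mean
    outcomes just above vs. just below the cutoff, within a bandwidth.
    (Local-linear regression is standard practice; this is the
    simplest possible sketch of the discontinuity idea.)"""
    above = [y for r, y in zip(running, outcome)
             if cutoff <= r < cutoff + bandwidth]
    below = [y for r, y in zip(running, outcome)
             if cutoff - bandwidth <= r < cutoff]
    return sum(above) / len(above) - sum(below) / len(below)
```

Within-study comparisons then ask whether such quasi-experimental estimates reproduce the benchmark from a randomized experiment on the same population.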
Peer reviewed
Moeyaert, Mariola; Ugille, Maaike; Ferron, John M.; Beretvas, S. Natasha; Van den Noortgate, Wim – Journal of Experimental Education, 2014
One approach for combining single-case data involves use of multilevel modeling. In this article, the authors use a Monte Carlo simulation study to inform applied researchers under which realistic conditions the three-level model is appropriate. The authors vary the value of the immediate treatment effect and the treatment's effect on the time…
Descriptors: Hierarchical Linear Modeling, Monte Carlo Methods, Case Studies, Research Design
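The logic of a Monte Carlo simulation study like the one described, generate data under known conditions, apply an estimator, and summarize its error over many replications, can be sketched in miniature. The data model below is a deliberately simplified AB single-case series (level shift plus iid noise, no trend or autocorrelation), not the authors' three-level model:

```python
import random
import statistics

def simulate_ab(n_baseline, n_treatment, effect, sd, rng):
    """One simulated AB single-case series: baseline phase at level 0,
    treatment phase shifted by `effect`, with iid Gaussian noise
    (a toy data model ignoring trends and autocorrelation)."""
    baseline = [rng.gauss(0.0, sd) for _ in range(n_baseline)]
    treatment = [effect + rng.gauss(0.0, sd) for _ in range(n_treatment)]
    return baseline, treatment

def monte_carlo_bias(effect=2.0, sd=1.0, reps=2000, seed=7):
    """Average error of the phase-mean-difference estimator across
    many simulated series -- the core logic of a Monte Carlo
    simulation study of an estimator's behavior."""
    rng = random.Random(seed)
    errors = []
    for _ in range(reps):
        base, treat = simulate_ab(10, 10, effect, sd, rng)
        estimate = statistics.mean(treat) - statistics.mean(base)
        errors.append(estimate - effect)
    return statistics.mean(errors)
```

Under this simple model the estimator is unbiased, so the average error hovers near zero; the paper varies design conditions to map out where the three-level model does and does not behave well.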
Peer reviewed
Rhoads, Christopher – Journal of Research on Educational Effectiveness, 2016
Experimental evaluations that involve the educational system usually involve a hierarchical structure (students are nested within classrooms that are nested within schools, etc.). Concerns about contamination, where research subjects receive certain features of an intervention intended for subjects in a different experimental group, have often led…
Descriptors: Educational Experiments, Error of Measurement, Research Design, Statistical Analysis