Showing 1 to 15 of 36 results
Peer reviewed
Ben Kelcey; Fangxing Bai; Amota Ataneka; Yanli Xie; Kyle Cox – Society for Research on Educational Effectiveness, 2024
We consider a class of multiple-group individually-randomized group trials (IRGTs) that introduces a (partially) cross-classified structure in the treatment condition (only). The novel feature of this design is that the nature of the treatment induces a clustering structure that involves two or more non-nested groups among individuals in the…
Descriptors: Randomized Controlled Trials, Research Design, Statistical Analysis, Error of Measurement
Peer reviewed
David Broska; Michael Howes; Austin van Loon – Sociological Methods & Research, 2025
Large language models (LLMs) provide cost-effective but possibly inaccurate predictions of human behavior. Despite growing evidence that predicted and observed behavior are often not "interchangeable," there is limited guidance on using LLMs to obtain valid estimates of causal effects and other parameters. We argue that LLM predictions…
Descriptors: Artificial Intelligence, Observation, Prediction, Correlation
Peer reviewed
Wendy Chan; Jimin Oh; Chen Li; Jiexuan Huang; Yeran Tong – Society for Research on Educational Effectiveness, 2023
Background: The generalizability of a study's results continues to be at the forefront of concerns in evaluation research in education (Tipton & Olsen, 2018). Over the past decade, statisticians have developed methods, mainly based on propensity scores, to improve generalizations in the absence of random sampling (Stuart et al., 2011; Tipton,…
Descriptors: Generalizability Theory, Probability, Scores, Sampling
Peer reviewed
Diego Cortes; Dirk Hastedt; Sabine Meinck – Large-scale Assessments in Education, 2025
This paper informs users of data collected in international large-scale assessments (ILSA) by presenting arguments underlining the importance of considering two design features employed in these studies. We examine a common misconception stating that the uncertainty arising from the assessment design is negligible compared with that arising from the…
Descriptors: Sampling, Research Design, Educational Assessment, Statistical Inference
Peer reviewed
Ethan R. Van Norman; David A. Klingbeil; Adelle K. Sturgell – Grantee Submission, 2024
Single-case experimental designs (SCEDs) have been used with increasing frequency to identify evidence-based interventions in education. The purpose of this study was to explore how several procedural characteristics, including within-phase variability (i.e., measurement error), number of baseline observations, and number of intervention…
Descriptors: Research Design, Case Studies, Effect Size, Error of Measurement
Peer reviewed
Duane Knudson – Measurement in Physical Education and Exercise Science, 2025
Small sample sizes contribute to several problems in research and knowledge advancement. This conceptual replication study confirmed and extended the inflation of type II errors and confidence intervals in correlation analyses of small sample sizes common in kinesiology/exercise science. Current population data (N = 18, 230, & 464) on four…
Descriptors: Kinesiology, Exercise, Biomechanics, Movement Education
Peer reviewed
Ting Ye; Ted Westling; Lindsay Page; Luke Keele – Grantee Submission, 2024
The clustered observational study (COS) design is the observational study counterpart to the clustered randomized trial. In a COS, a treatment is assigned to intact groups, and all units within the group are exposed to the treatment. However, the treatment is non-randomly assigned. COSs are common in both education and health services research. In…
Descriptors: Nonparametric Statistics, Identification, Causal Models, Multivariate Analysis
Peer reviewed
Thomas Cook; Mansi Wadhwa; Jingwen Zheng – Society for Research on Educational Effectiveness, 2023
Context: A perennial problem in applied statistics is the inability to justify strong claims about cause-and-effect relationships without full knowledge of the mechanism determining selection into treatment. Few research designs other than the well-implemented random assignment study meet this requirement. Researchers have proposed partial…
Descriptors: Observation, Research Design, Causal Models, Computation
Wendy Chan; Larry Vernon Hedges – Journal of Educational and Behavioral Statistics, 2022
Multisite field experiments using the (generalized) randomized block design that assign treatments to individuals within sites are common in education and the social sciences. Under this design, there are two possible estimands of interest and they differ based on whether sites or blocks have fixed or random effects. When the average treatment…
Descriptors: Research Design, Educational Research, Statistical Analysis, Statistical Inference
Peer reviewed
Duy Pham; Kirk Vanacore; Adam Sales; Johann Gagnon-Bartsch – Society for Research on Educational Effectiveness, 2024
Background: Education researchers typically estimate average program effects with regression; if they are interested in heterogeneous effects, they include an interaction in the model. Such models quantify and infer the influences of each covariate on the effect via interaction coefficients and their associated p-values or confidence intervals.…
Descriptors: Educational Research, Educational Researchers, Regression (Statistics), Artificial Intelligence
Qinyun Lin; Amy K. Nuttall; Qian Zhang; Kenneth A. Frank – Grantee Submission, 2023
Empirical studies often demonstrate multiple causal mechanisms potentially involving simultaneous or causally related mediators. However, researchers often use simple mediation models to understand the processes because they do not or cannot measure other theoretically relevant mediators. In such cases, another potentially relevant but unobserved…
Descriptors: Causal Models, Mediation Theory, Error of Measurement, Statistical Inference
Peer reviewed
Xu Qin – Grantee Submission, 2023
When designing a study for causal mediation analysis, it is crucial to conduct a power analysis to determine the sample size required to detect the causal mediation effects with sufficient power. However, the development of power analysis methods for causal mediation analysis has lagged far behind. To fill the knowledge gap, I proposed a…
Descriptors: Sample Size, Statistical Analysis, Causal Models, Mediation Theory
Peer reviewed
Menglin Xu; Jessica A. R. Logan – Journal of Research on Educational Effectiveness, 2021
Planned missing data designs allow researchers to have highly-powered studies by testing only a fraction of the traditional sample size. In two-method measurement planned missingness designs, researchers assess only part of the sample on a high-quality expensive measure, while the entire sample is given a more inexpensive, but biased measure. The…
Descriptors: Longitudinal Studies, Research Design, Research Problems, Structural Equation Models
Peer reviewed
Tashane K. Haynes-Brown – Journal of Mixed Methods Research, 2023
The purpose of this article is to illustrate the dynamic process involved in developing and utilizing a theoretical model in a mixed methods study. Specifically, I illustrate how the theoretical model can serve as the starting point in framing the study, as a lens for guiding the data collection and analysis, and as the end point in explaining the…
Descriptors: Theories, Models, Mixed Methods Research, Teacher Attitudes
Katherine L. Hughes; Trey Miller; Kelly Reese – Grantee Submission, 2021
This report from the Career and Technical Education (CTE) Research Network Lead team provides final results from an evaluability assessment of CTE programs that feasibly could be evaluated using a rigorous experimental design. Evaluability assessments (also called feasibility studies) are used in education and other fields, such as international…
Descriptors: Program Evaluation, Vocational Education, Evaluation Methods, Educational Research