Publication Date
  In 2025 (1)
  Since 2024 (7)
Descriptor
  Observation (7)
  Research Design (7)
  Statistical Inference (4)
  Causal Models (2)
  Computation (2)
  Educational Research (2)
  Effect Size (2)
  Intervention (2)
  Models (2)
  Multivariate Analysis (2)
  Social Science Research (2)
Source
  Grantee Submission (4)
  Journal of Educational and Psychological Consultation (1)
  Journal of Research on Educational Effectiveness (1)
  Sociological Methods & Research (1)
Author
  Lindsay Page (3)
  Luke Keele (3)
  Adelle K. Sturgell (1)
  Ann C. Jolly (1)
  Austin van Loon (1)
  David A. Klingbeil (1)
  David Broska (1)
  Eli Ben-Michael (1)
  Ethan R. Van Norman (1)
  Heather H. Aiken (1)
  Kenneth A. Frank (1)
Publication Type
  Reports - Research (6)
  Journal Articles (3)
  Reports - Evaluative (1)
Eli Ben-Michael; Lindsay Page; Luke Keele – Grantee Submission, 2024
In a clustered observational study, a treatment is assigned to groups and all units within the group are exposed to the treatment. We develop a new method for statistical adjustment in clustered observational studies using approximate balancing weights, a generalization of inverse propensity score weights that solve a convex optimization problem…
Descriptors: Research Design, Statistical Data, Multivariate Analysis, Observation
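The balancing-weights idea summarized above can be illustrated with a small convex program: choose nonnegative comparison-unit weights that keep weighted covariate means close to the treated means while penalizing extreme weights. The sketch below uses made-up data and an illustrative tolerance; it shows the general technique, not the authors' estimator.

```python
# Illustrative sketch of approximate balancing weights (not the authors' method):
# find nonnegative weights on comparison units that sum to 1, keep weighted
# covariate means within a tolerance of the treated means, and minimize the
# sum of squared weights (which discourages extreme weights).
import numpy as np
from scipy.optimize import minimize

rng = np.random.default_rng(0)
X_treat = rng.normal(0.5, 1.0, size=(40, 3))    # covariates, treated clusters (made up)
X_comp = rng.normal(0.0, 1.0, size=(120, 3))    # covariates, comparison clusters (made up)

target = X_treat.mean(axis=0)   # treated covariate means to match
n = X_comp.shape[0]
tol = 0.05                      # allowed imbalance per covariate (illustrative)

objective = lambda w: np.sum(w ** 2)   # dispersion penalty on the weights
constraints = [
    {"type": "eq", "fun": lambda w: w.sum() - 1.0},                          # weights sum to 1
    {"type": "ineq", "fun": lambda w: tol - np.abs(X_comp.T @ w - target)},  # approximate balance
]
res = minimize(objective, x0=np.full(n, 1.0 / n), bounds=[(0, None)] * n,
               constraints=constraints, method="SLSQP")

w = res.x
print("max imbalance after weighting:", np.abs(X_comp.T @ w - target).max())
```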
David Broska; Michael Howes; Austin van Loon – Sociological Methods & Research, 2025
Large language models (LLMs) provide cost-effective but possibly inaccurate predictions of human behavior. Despite growing evidence that predicted and observed behavior are often not "interchangeable," there is limited guidance on using LLMs to obtain valid estimates of causal effects and other parameters. We argue that LLM predictions…
Descriptors: Artificial Intelligence, Observation, Prediction, Correlation
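One common way to reconcile cheap LLM predictions with a smaller set of observed human responses is to correct the prediction-based estimate using the average prediction error measured on a human-labeled subsample. The sketch below illustrates that general idea on made-up data; it is not necessarily the estimator the authors recommend.

```python
# Illustrative sketch (not necessarily the authors' estimator): correct a mean of
# LLM-predicted outcomes with the average prediction error measured on a small
# human-labeled subsample, so systematic LLM bias does not propagate.
import numpy as np

rng = np.random.default_rng(1)
N, n = 5000, 200   # N items with LLM predictions, n of them also observed by humans

llm_pred = rng.binomial(1, 0.60, size=N).astype(float)      # made-up LLM predictions
human_idx = rng.choice(N, size=n, replace=False)
human_label = rng.binomial(1, 0.50, size=n).astype(float)   # made-up human observations

naive = llm_pred.mean()                                     # uses predictions alone
bias_correction = (human_label - llm_pred[human_idx]).mean()
rectified = naive + bias_correction                         # prediction mean + estimated error

print(f"naive: {naive:.3f}  rectified: {rectified:.3f}")
```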
Luke Keele; Matthew Lenard; Lindsay Page – Journal of Research on Educational Effectiveness, 2024
In education settings, treatments are often non-randomly assigned to clusters, such as schools or classrooms, while outcomes are measured for students. This research design is called the clustered observational study (COS). We examine the consequences of common support violations in the COS context. Common support violations occur when the…
Descriptors: Intervention, Cluster Grouping, Observation, Catholic Schools
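A common-support (overlap) check in a clustered design is often run at the cluster level: estimate cluster propensity scores and flag clusters whose scores fall outside the range where both treated and comparison clusters are observed. The sketch below is a generic diagnostic on simulated data, not the authors' procedure.

```python
# Illustrative common-support diagnostic for a clustered design (not the authors'
# procedure): estimate cluster-level propensity scores and flag clusters whose
# scores fall outside the region where both treated and comparison clusters exist.
import numpy as np
from sklearn.linear_model import LogisticRegression

rng = np.random.default_rng(2)
n_clusters = 200
X = rng.normal(size=(n_clusters, 2))                             # cluster covariates (made up)
treated = rng.binomial(1, 1 / (1 + np.exp(-1.5 * X[:, 0])))      # made-up treatment assignment

ps = LogisticRegression().fit(X, treated).predict_proba(X)[:, 1]

low = max(ps[treated == 1].min(), ps[treated == 0].min())        # overlap region bounds
high = min(ps[treated == 1].max(), ps[treated == 0].max())
outside = (ps < low) | (ps > high)

print(f"{outside.sum()} of {n_clusters} clusters lie outside the common-support region")
```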
Ann C. Jolly; Kristen D. Beach; Heather H. Aiken; Steven J. Amendum – Journal of Educational and Psychological Consultation, 2024
The field of education relies heavily on instructional coaches to build teacher capacity in the implementation of evidence-based practices (EBPs). Although observation tools are commonly used to measure the fidelity of implementation by teachers, fewer tools are available to identify specific coaching behaviors used during in situ coaching…
Descriptors: Coaching (Performance), Observation, Research Tools, Reliability
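Observation tools of this kind are usually evaluated for inter-observer agreement. As a generic illustration (not the authors' instrument or data), Cohen's kappa between two raters coding the same sessions can be computed as follows.

```python
# Generic inter-observer agreement check for an observation tool (illustrative;
# not the authors' instrument or data): Cohen's kappa between two raters who
# coded the same coaching intervals.
from sklearn.metrics import cohen_kappa_score

# Made-up behavior codes assigned by two observers to 12 intervals.
rater_a = ["model", "prompt", "praise", "model", "prompt", "praise",
           "model", "prompt", "model", "praise", "prompt", "model"]
rater_b = ["model", "prompt", "praise", "model", "praise", "praise",
           "model", "prompt", "model", "model", "prompt", "model"]

print(f"Cohen's kappa: {cohen_kappa_score(rater_a, rater_b):.2f}")
```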
Ethan R. Van Norman; David A. Klingbeil; Adelle K. Sturgell – Grantee Submission, 2024
Single-case experimental designs (SCEDs) have been used with increasing frequency to identify evidence-based interventions in education. The purpose of this study was to explore how several procedural characteristics, including within-phase variability (i.e., measurement error), number of baseline observations, and number of intervention…
Descriptors: Research Design, Case Studies, Effect Size, Error of Measurement
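The procedural characteristics named above (within-phase variability, number of baseline observations) are the kind of factors one can vary in a small simulation. The sketch below simulates an AB single-case series and computes a simple non-overlap effect size (NAP); it illustrates the setup, not the authors' study design.

```python
# Illustrative sketch of the kind of simulation described (not the authors' study):
# generate an AB single-case series with a given baseline length and within-phase
# variability (measurement error), then compute a non-overlap effect size (NAP).
import numpy as np

def simulate_nap(n_baseline=5, n_intervention=10, shift=2.0, noise_sd=1.0, seed=0):
    rng = np.random.default_rng(seed)
    baseline = rng.normal(0.0, noise_sd, n_baseline)             # phase A
    intervention = rng.normal(shift, noise_sd, n_intervention)   # phase B
    # NAP: proportion of (A, B) pairs where the intervention point exceeds the baseline point.
    pairs = [(b, t) for b in baseline for t in intervention]
    wins = sum(t > b for b, t in pairs) + 0.5 * sum(t == b for b, t in pairs)
    return wins / len(pairs)

for noise_sd in (0.5, 1.0, 2.0):          # more within-phase variability
    for n_baseline in (3, 5, 10):         # more baseline observations
        nap = np.mean([simulate_nap(n_baseline, 10, 2.0, noise_sd, seed=s)
                       for s in range(500)])
        print(f"noise_sd={noise_sd} n_baseline={n_baseline} mean NAP={nap:.2f}")
```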
Ting Ye; Ted Westling; Lindsay Page; Luke Keele – Grantee Submission, 2024
The clustered observational study (COS) design is the observational study counterpart to the clustered randomized trial. In a COS, a treatment is assigned to intact groups, and all units within the group are exposed to the treatment. However, the treatment is non-randomly assigned. COSs are common in both education and health services research. In…
Descriptors: Nonparametric Statistics, Identification, Causal Models, Multivariate Analysis
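For orientation, the estimand and identification conditions at issue in a COS can be written compactly. The notation below is a standard textbook-style sketch, not taken from the paper.

```latex
% Illustrative, standard notation (not taken from the paper): cluster j receives
% Z_j in {0,1}; unit i in cluster j has potential outcomes Y_{ij}(1), Y_{ij}(0);
% X_j collects cluster-level covariates. The treated-group estimand
\[
\tau_{\mathrm{ATT}} \;=\; \mathbb{E}\bigl[\,Y_{ij}(1) - Y_{ij}(0) \mid Z_j = 1\,\bigr]
\]
% is identified under cluster-level ignorability and common support:
\[
Z_j \;\perp\!\!\!\perp\; \{\,Y_{ij}(0),\, Y_{ij}(1)\,\} \mid X_j,
\qquad
0 \;<\; \Pr(Z_j = 1 \mid X_j) \;<\; 1 .
\]
```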

Kenneth A. Frank; Qinyun Lin; Spiro J. Maroulis – Grantee Submission, 2024
In the complex world of educational policy, causal inferences will be debated. As we review non-experimental designs in educational policy, we focus on how to clarify and focus the terms of debate. We begin by presenting the potential outcomes/counterfactual framework and then describe approximations to the counterfactual generated from the…
Descriptors: Causal Models, Statistical Inference, Observation, Educational Policy
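The potential outcomes framework the authors begin from is standard. As a brief sketch (textbook notation, not quoted from the chapter), the observed difference in means decomposes into a treatment effect plus selection bias.

```latex
% Standard potential-outcomes notation (textbook form, not quoted from the paper):
% unit i has potential outcomes Y_i(1) under treatment and Y_i(0) without it,
% but only Y_i = D_i Y_i(1) + (1 - D_i) Y_i(0) is observed.
\[
\mathrm{ATE} \;=\; \mathbb{E}\bigl[\,Y_i(1) - Y_i(0)\,\bigr],
\]
\[
\underbrace{\mathbb{E}[\,Y_i \mid D_i = 1\,] - \mathbb{E}[\,Y_i \mid D_i = 0\,]}_{\text{naive comparison}}
\;=\;
\underbrace{\mathbb{E}[\,Y_i(1) - Y_i(0) \mid D_i = 1\,]}_{\text{effect on the treated}}
\;+\;
\underbrace{\mathbb{E}[\,Y_i(0) \mid D_i = 1\,] - \mathbb{E}[\,Y_i(0) \mid D_i = 0\,]}_{\text{selection bias}} .
\]
```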