Publication Date
In 2025: 0
Since 2024: 0
Since 2021 (last 5 years): 1
Since 2016 (last 10 years): 5
Since 2006 (last 20 years): 13
Source
Society for Research on Educational Effectiveness: 6
Journal of Educational and Behavioral Statistics: 2
American Journal of Evaluation: 1
Educational Researcher: 1
Grantee Submission: 1
National Center for Education Evaluation and Regional Assistance: 1
Psychological Bulletin: 1
Author
Tipton, Elizabeth: 13
Matlen, Bryan J.: 2
Olsen, Robert B.: 2
Alden, Alison R.: 1
Borman, Geoffrey: 1
Caverly, Sarah: 1
Chan, Wendy: 1
Hallberg, Kelly: 1
Hand, Linda L.: 1
Hedges, Larry: 1
Hedges, Larry V.: 1
Publication Type
Reports - Research: 8
Journal Articles: 6
Reports - Descriptive: 3
Guides - Non-Classroom: 1
Reports - Evaluative: 1
Audience
Researchers: 2
Location
Texas: 3
California: 2
Indiana: 1
Assessments and Surveys
Indiana Statewide Testing for…: 1
Tipton, Elizabeth; Matlen, Bryan J. – American Journal of Evaluation, 2019 (also indexed as a Grantee Submission, 2019)
Randomized controlled trials (RCTs) have long been considered the "gold standard" for evaluating the impacts of interventions. However, in most education RCTs, the sample of schools included is recruited based on convenience, potentially compromising a study's ability to generalize to an intended population. An alternative approach is to…
Descriptors: Randomized Controlled Trials, Recruitment, Educational Research, Generalization
Tipton, Elizabeth; Olsen, Robert B. – National Center for Education Evaluation and Regional Assistance, 2022
This guide will help researchers design and implement impact studies in education so that the findings are more generalizable to the study's target population. Guidance is provided on key steps that researchers can take, including defining the target population, selecting a sample of schools--and replacement schools, when needed--managing school…
Descriptors: Outcome Measures, Evaluators, Educational Researchers, Educational Research
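The step of selecting sampled schools along with ordered replacement schools can be made concrete with a small sketch. This is not taken from the guide; the strata, school names, and counts below are hypothetical, and the only idea illustrated is keeping an ordered list of similar back-up schools within each stratum.

```python
# Hypothetical sketch: primary selections plus ordered replacements per stratum,
# so a school that declines can be swapped for a similar one. Not from the guide.
import random

random.seed(4)
# Made-up sampling frame: stratum label -> schools in that stratum
frame = {s: [f"school_{s}_{i}" for i in range(12)] for s in range(4)}
n_primary, n_backup = 3, 3

for stratum, schools in frame.items():
    order = random.sample(schools, k=len(schools))   # random selection order
    primary = order[:n_primary]
    backups = order[n_primary:n_primary + n_backup]  # use in this order if needed
    print(f"stratum {stratum}: primary={primary} replacements={backups}")
```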
Tipton, Elizabeth; Olsen, Robert B. – Educational Researcher, 2018
School-based evaluations of interventions are increasingly common in education research. Ideally, the results of these evaluations are used to make evidence-based policy decisions for students. However, it is difficult to make generalizations from these evaluations because the types of schools included in the studies are typically not selected…
Descriptors: Intervention, Educational Research, Decision Making, Evidence Based Practice
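As a rough illustration of the generalization problem the article describes, the sketch below compares a hypothetical convenience sample of schools with a target population on a few observed covariates using standardized mean differences. The covariates and values are invented; nothing here comes from the article's data.

```python
# Illustrative only: standardized mean differences (SMDs) between a convenience
# sample of schools and an intended target population on made-up covariates.
import numpy as np

rng = np.random.default_rng(0)
# Hypothetical school covariates: % free/reduced lunch, enrollment, % proficient
population = rng.normal(loc=[0.45, 520, 0.60], scale=[0.15, 180, 0.12], size=(2000, 3))
sample = rng.normal(loc=[0.35, 650, 0.68], scale=[0.10, 150, 0.10], size=(60, 3))

# SMD: difference in means, scaled by the population standard deviation
smd = (sample.mean(axis=0) - population.mean(axis=0)) / population.std(axis=0)
for name, d in zip(["pct_frl", "enrollment", "pct_proficient"], smd):
    print(f"{name}: SMD = {d:+.2f}")
```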
Tipton, Elizabeth – Journal of Educational and Behavioral Statistics, 2014
Although a large-scale experiment can provide an estimate of the average causal impact for a program, the sample of sites included in the experiment is often not drawn randomly from the inference population of interest. In this article, we provide a generalizability index that can be used to assess the degree of similarity between the sample of…
Descriptors: Experiments, Comparative Analysis, Experimental Groups, Generalization
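The generalizability index mentioned above summarizes how similar an experimental sample is to an inference population. One common way such an index can be constructed, sketched below under hypothetical data and modeling choices (not necessarily the article's exact formulation), is as the overlap, via a binned Bhattacharyya coefficient, between the distributions of estimated sampling propensity scores in the sample and in the population.

```python
# Sketch under stated assumptions: fit a "sampling propensity" model (probability
# of being in the experimental sample given covariates), then measure the overlap
# of the score distributions in sample vs. population. Data are fabricated.
import numpy as np
from sklearn.linear_model import LogisticRegression

rng = np.random.default_rng(1)
pop = rng.normal(size=(5000, 4))            # hypothetical population covariates
samp = rng.normal(loc=0.3, size=(80, 4))    # hypothetical convenience sample

X = np.vstack([samp, pop])
z = np.r_[np.ones(len(samp)), np.zeros(len(pop))]   # 1 = in the sample

scores = LogisticRegression(max_iter=1000).fit(X, z).predict_proba(X)[:, 1]
s_scores, p_scores = scores[z == 1], scores[z == 0]

bins = np.histogram_bin_edges(scores, bins=20)
p_s = np.histogram(s_scores, bins=bins)[0].astype(float)
p_p = np.histogram(p_scores, bins=bins)[0].astype(float)
p_s, p_p = p_s / p_s.sum(), p_p / p_p.sum()

index = np.sum(np.sqrt(p_s * p_p))   # 1 = identical distributions, 0 = no overlap
print(f"similarity index: {index:.2f}")
```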
Tipton, Elizabeth – Society for Research on Educational Effectiveness, 2012
The purpose of this paper is to develop a more general method for sample recruitment in experiments that is purposive (not random) and that results in a sample that is compositionally similar to the generalization population. This work builds on Tipton et al. (2011) by offering solutions to a larger class of problems than the non-overlapping…
Descriptors: Sampling, Experiments, Statistical Studies, Generalization
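A simple way to operationalize this kind of purposive recruitment, sketched below under assumed inputs rather than the paper's exact procedure, is to partition the generalization population into covariate-based strata and set recruitment targets proportional to each stratum's share of the population.

```python
# Hypothetical sketch: cluster the population of schools on covariates, then
# allocate the recruitment budget proportionally across the resulting strata so
# the recruited sample mirrors the population's composition. Data are made up.
import numpy as np
from sklearn.cluster import KMeans

rng = np.random.default_rng(2)
population = rng.normal(size=(3000, 5))   # hypothetical school-level covariates
n_recruit = 60                            # schools the study can afford to recruit

strata = KMeans(n_clusters=6, n_init=10, random_state=0).fit(population).labels_
sizes = np.bincount(strata)
targets = np.round(n_recruit * sizes / sizes.sum()).astype(int)  # rounding may shift totals by 1

for k, (size, target) in enumerate(zip(sizes, targets)):
    print(f"stratum {k}: population schools = {size}, recruit about {target}")
```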
Tipton, Elizabeth; Yeager, David; Iachan, Ronaldo – Society for Research on Educational Effectiveness, 2016
Questions regarding the generalizability of results from educational experiments have been at the forefront of methods development over the past five years. This work has focused on methods for estimating the effect of an intervention in a well-defined inference population (e.g., Tipton, 2013; O'Muircheartaigh and Hedges, 2014); methods for…
Descriptors: Behavioral Sciences, Behavioral Science Research, Intervention, Educational Experiments
Tipton, Elizabeth – Society for Research on Educational Effectiveness, 2011
The main result of an experiment is typically an estimate of the average treatment effect (ATE) and its standard error. In most experiments, the number of covariates that may be moderators is large. One way this issue is typically skirted is by interpreting the ATE as the average effect for "some" population. Cornfield and Tukey (1956)…
Descriptors: Probability, Statistical Analysis, Experiments, Generalization
Tipton, Elizabeth – Journal of Educational and Behavioral Statistics, 2013
As a result of the use of random assignment to treatment, randomized experiments typically have high internal validity. However, units are very rarely randomly selected from a well-defined population of interest into an experiment; this results in low external validity. Under nonrandom sampling, this means that the estimate of the sample average…
Descriptors: Generalization, Experiments, Classification, Computation
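To make the external-validity point concrete, the sketch below shows a generic subclassification-style reweighting: stratum-specific treatment effects estimated in the sample are averaged using the population's stratum shares instead of the sample's. The strata, shares, and effects are fabricated and are not results from the article.

```python
# Fabricated example of reweighting stratum-level effects toward a population.
import numpy as np

sample_ate = np.array([0.30, 0.22, 0.18, 0.10, 0.05])    # per-stratum effect estimates
sample_share = np.array([0.40, 0.30, 0.15, 0.10, 0.05])  # stratum shares in the sample
pop_share = np.array([0.10, 0.15, 0.20, 0.25, 0.30])     # stratum shares in the population

sate = np.sum(sample_share * sample_ate)  # sample-weighted average effect
pate = np.sum(pop_share * sample_ate)     # reweighted toward the population
print(f"sample-weighted ATE: {sate:.3f}")
print(f"population-weighted ATE: {pate:.3f}")
```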
Tipton, Elizabeth; Hallberg, Kelly; Hedges, Larry V.; Chan, Wendy – Society for Research on Educational Effectiveness, 2015
Policy-makers are frequently interested in understanding how effective a particular intervention may be for a specific (and often broad) population. In many fields, particularly education and social welfare, the ideal form of these evaluations is a large-scale randomized experiment. Recent research has highlighted that sites in these large-scale…
Descriptors: Generalization, Program Effectiveness, Sample Size, Computation
Tipton, Elizabeth – Society for Research on Educational Effectiveness, 2013
Recent research on the design of social experiments has highlighted the effects of different design choices on research findings. Since experiments rarely collect their samples using random selection, recent research aimed at addressing these external validity problems and design choices has focused on two areas. The first area is methods for…
Descriptors: Experiments, Research Methodology, Middle Schools, Secondary School Mathematics
Tipton, Elizabeth; Sullivan, Kate; Hedges, Larry; Vaden-Kiernan, Michael; Borman, Geoffrey; Caverly, Sarah – Society for Research on Educational Effectiveness, 2011
In this paper the authors present a new method for sample selection for scale-up experiments. This method uses propensity score matching methods to create a sample that is similar in composition to a well-defined generalization population. The method they present is flexible and practical in the sense that it identifies units to be targeted for…
Descriptors: Sampling, Selection, Research Methodology, Reading Programs
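The matching idea described above can be sketched very roughly as follows. For brevity, this toy version matches eligible schools to "template" units drawn from the generalization population on covariates directly (nearest-neighbor Euclidean distance) rather than on an estimated propensity score, and every number in it is hypothetical.

```python
# Toy sketch: build a targeted recruitment list by matching recruitable schools
# to units drawn from the generalization population (covariates assumed to be on
# comparable scales). All data and sizes are invented for illustration.
import numpy as np

rng = np.random.default_rng(3)
population = rng.normal(size=(2000, 4))          # population covariates
eligible = rng.normal(loc=0.2, size=(300, 4))    # schools willing to be recruited
n_target = 40

templates = population[rng.choice(len(population), size=n_target, replace=False)]
available = np.ones(len(eligible), dtype=bool)
recruit_list = []
for t in templates:
    d = np.linalg.norm(eligible - t, axis=1)     # distance to each eligible school
    d[~available] = np.inf                       # greedy matching without replacement
    j = int(np.argmin(d))
    available[j] = False
    recruit_list.append(j)

print("eligible-school indices to target first:", recruit_list[:10], "...")
```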
Uttal, David H.; Meadow, Nathaniel G.; Tipton, Elizabeth; Hand, Linda L.; Alden, Alison R.; Warren, Christopher; Newcombe, Nora S. – Psychological Bulletin, 2013
Having good spatial skills strongly predicts achievement and attainment in science, technology, engineering, and mathematics fields (e.g., Shea, Lubinski, & Benbow, 2001; Wai, Lubinski, & Benbow, 2009). Improving spatial skills is therefore of both theoretical and practical importance. To determine whether and to what extent training and…
Descriptors: Spatial Ability, Skill Development, Control Groups, Graduate Students
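For readers unfamiliar with the mechanics behind a meta-analysis like this one, the sketch below computes a generic inverse-variance-weighted random-effects average of study effect sizes, using a DerSimonian-Laird estimate of between-study variance. The effect sizes and variances are invented for illustration and are not the review's data.

```python
# Generic random-effects meta-analysis with fabricated effect sizes (Hedges' g)
# and sampling variances; DerSimonian-Laird estimate of between-study variance.
import numpy as np

g = np.array([0.55, 0.40, 0.62, 0.30, 0.48])   # hypothetical study effect sizes
v = np.array([0.02, 0.05, 0.03, 0.04, 0.02])   # their sampling variances

w = 1.0 / v                                    # fixed-effect (inverse-variance) weights
g_fixed = np.sum(w * g) / w.sum()
Q = np.sum(w * (g - g_fixed) ** 2)             # heterogeneity statistic
c = w.sum() - np.sum(w ** 2) / w.sum()
tau2 = max(0.0, (Q - (len(g) - 1)) / c)        # between-study variance

w_re = 1.0 / (v + tau2)                        # random-effects weights
mean_g = np.sum(w_re * g) / w_re.sum()
se = np.sqrt(1.0 / w_re.sum())
print(f"random-effects mean g = {mean_g:.2f} (SE {se:.2f}, tau^2 = {tau2:.3f})")
```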