Wendy Chan; Jimin Oh; Katherine Wilson – Society for Research on Educational Effectiveness, 2022
Background: Over the past decade, research on the development and assessment of tools to improve the generalizability of experimental findings has grown extensively (Tipton & Olsen, 2018). However, many experimental studies in education are based on small samples, which may include 30-70 schools, while the inference populations to which…
Descriptors: Educational Research, Research Problems, Sample Size, Research Methodology
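As a hedged illustration of the kind of generalization tool this line of work describes (not code from the paper; covariates and data are hypothetical), one common step is to model each school's probability of being in the trial sample and reweight the sample toward the inference population:

```python
# Illustrative sketch only: reweight a small experimental sample toward an
# inference population using estimated sampling propensities.
import numpy as np
import pandas as pd
from sklearn.linear_model import LogisticRegression

rng = np.random.default_rng(0)
# Hypothetical covariates for trial schools and for the inference population
sample = pd.DataFrame({"pct_frpl": rng.random(50), "enrollment": rng.random(50)})
population = pd.DataFrame({"pct_frpl": rng.random(500), "enrollment": rng.random(500)})

X = pd.concat([sample, population], ignore_index=True)
in_sample = np.r_[np.ones(len(sample)), np.zeros(len(population))]

# Probability that a school with these covariates ends up in the trial sample
model = LogisticRegression().fit(X, in_sample)
p = model.predict_proba(sample)[:, 1]

# Inverse-odds weights push the trial sample toward the population profile
weights = (1 - p) / p
print(weights[:5])
```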
Kate Schwartz; Lina Torossian; Duja Michael; Jamile Youssef; Hiro Yoshikawa; Somaia Razzak; Katie Murphy – Society for Research on Educational Effectiveness, 2023
Background/Context: The COVID-19 pandemic challenged the way we conduct research. For some modes of data collection, such as interviews, there was a ready (if not perfect) analog: face-to-face became phone-based; paper and pen surveys moved online. Others, such as direct assessments of child development, proved more challenging. Despite the…
Descriptors: Child Development, COVID-19, Pandemics, Research Methodology
Steiner, Peter M.; Wong, Vivian – Society for Research on Educational Effectiveness, 2016
Despite recent emphasis on the use of randomized control trials (RCTs) for evaluating education interventions, in most areas of education research, observational methods remain the dominant approach for assessing program effects. Over the last three decades, the within-study comparison (WSC) design has emerged as a method for evaluating the…
Descriptors: Randomized Controlled Trials, Comparative Analysis, Research Design, Evaluation Methods
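A minimal sketch of the within-study comparison logic, using simulated data (not the authors' design): the same effect is estimated from a randomized benchmark and from a covariate-adjusted observational comparison, and the difference is read as bias.

```python
# Illustrative WSC-style comparison on simulated data
import numpy as np
import pandas as pd
import statsmodels.formula.api as smf

rng = np.random.default_rng(0)
n = 1000
pretest = rng.normal(size=n)
rand_t = rng.integers(0, 2, size=n)                        # randomized arm (benchmark)
self_t = (pretest + rng.normal(size=n) > 0).astype(int)    # self-selected arm
y_rand = 0.3 * rand_t + pretest + rng.normal(size=n)
y_self = 0.3 * self_t + pretest + rng.normal(size=n)

bench = smf.ols("y ~ t", data=pd.DataFrame({"y": y_rand, "t": rand_t})).fit()
adjusted = smf.ols("y ~ t + pretest",
                   data=pd.DataFrame({"y": y_self, "t": self_t, "pretest": pretest})).fit()

bias = adjusted.params["t"] - bench.params["t"]
print(f"benchmark={bench.params['t']:.2f}, adjusted={adjusted.params['t']:.2f}, bias={bias:.2f}")
```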
Yu, Bing – Society for Research on Educational Effectiveness, 2013
Difference-in-differences (DID) strategies are particularly useful for evaluating policy effects in natural experiments in which, for example, a policy affects some schools and students but not others. However, the standard DID method may produce biased estimation of the policy effect if the confounding effect of concurrent events varies by…
Descriptors: Evaluation Methods, Bias, Research Methodology, Scores
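For reference, a minimal two-group, two-period difference-in-differences sketch (illustrative simulated data, not the author's estimator) looks like this; the interaction coefficient is the DID estimate of the policy effect:

```python
# Standard DID via an interaction term in OLS (simulated data)
import numpy as np
import pandas as pd
import statsmodels.formula.api as smf

rng = np.random.default_rng(1)
n = 400
df = pd.DataFrame({
    "treated": rng.integers(0, 2, size=n),   # schools affected by the policy
    "post": rng.integers(0, 2, size=n),      # observation after the policy
})
df["score"] = (50 + 2 * df["treated"] + 1 * df["post"]
               + 3 * df["treated"] * df["post"] + rng.normal(0, 5, size=n))

did = smf.ols("score ~ treated * post", data=df).fit()
print(did.params["treated:post"])            # DID estimate of the policy effect
```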
Tipton, Elizabeth; Fellers, Lauren; Caverly, Sarah; Vaden-Kiernan, Michael; Borman, Geoffrey; Sullivan, Kate; Ruiz de Castillo, Veronica – Society for Research on Educational Effectiveness, 2015
Randomized experiments are commonly used to evaluate if particular interventions improve student achievement. While these experiments can establish that a treatment actually "causes" changes, typically the participants are not randomly selected from a well-defined population and therefore the results do not readily generalize. Three…
Descriptors: Site Selection, Randomized Controlled Trials, Educational Experiments, Research Methodology
Boulay, Beth; Martin, Carlos; Zief, Susan; Granger, Robert – Society for Research on Educational Effectiveness, 2013
Contradictory findings from "well-implemented" rigorous evaluations invite researchers to identify the differences that might explain the contradictions, helping to generate testable hypotheses for new research. This panel will examine efforts to ensure that the large number of local evaluations being conducted as part of four…
Descriptors: Program Evaluation, Evaluation Methods, Research, Evaluators
Keller, Bryan S. B.; Kim, Jee-Seon; Steiner, Peter M. – Society for Research on Educational Effectiveness, 2013
Propensity score analysis (PSA) is a methodological technique which may correct for selection bias in a quasi-experiment by modeling the selection process using observed covariates. Because logistic regression is well understood by researchers in a variety of fields and easy to implement in a number of popular software packages, it has…
Descriptors: Probability, Scores, Statistical Analysis, Statistical Bias
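A minimal sketch of propensity score analysis with a logistic selection model, assuming hypothetical covariates (this illustrates the standard approach the abstract describes, not the authors' code):

```python
# Propensity scores from logistic regression, then inverse-probability weighting
import numpy as np
import pandas as pd
from sklearn.linear_model import LogisticRegression

rng = np.random.default_rng(2)
n = 1000
X = pd.DataFrame({"pretest": rng.normal(size=n), "ses": rng.normal(size=n)})
t = (0.8 * X["pretest"] + 0.5 * X["ses"] + rng.normal(size=n) > 0).astype(int)
y = 0.4 * t + X["pretest"] + rng.normal(size=n)

ps = LogisticRegression().fit(X, t).predict_proba(X)[:, 1]   # estimated propensity scores
w = np.where(t == 1, 1 / ps, 1 / (1 - ps))                   # ATE weights

ate = np.average(y[t == 1], weights=w[t == 1]) - np.average(y[t == 0], weights=w[t == 0])
print(ate)
```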
Hansen, Ben B.; Fredrickson, Mark M. – Society for Research on Educational Effectiveness, 2014
The goal of this research is to make sensitivity analysis accessible not only to empirical researchers but also to the various stakeholders for whom educational evaluations are conducted. To do this it derives anchors for the omitted variable (OV)-program participation association intrinsically, using the Love plot to present a wide range of…
Descriptors: Research Methodology, Quasiexperimental Design, Evaluation Methods, Comparative Analysis
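As a rough illustration of the covariate-balance summaries behind a Love plot (hypothetical data, not the authors' tool), one can tabulate standardized mean differences between treated and comparison groups:

```python
# Standardized mean differences per covariate (Love-plot style summary)
import numpy as np
import pandas as pd

rng = np.random.default_rng(3)
df = pd.DataFrame({
    "treat": rng.integers(0, 2, size=500),
    "pretest": rng.normal(size=500),
    "attendance": rng.normal(size=500),
})

def smd(x, t):
    """Standardized mean difference between treated and comparison groups."""
    x1, x0 = x[t == 1], x[t == 0]
    pooled_sd = np.sqrt((x1.var(ddof=1) + x0.var(ddof=1)) / 2)
    return (x1.mean() - x0.mean()) / pooled_sd

for cov in ["pretest", "attendance"]:
    print(cov, round(smd(df[cov], df["treat"]), 3))
```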
Pigott, Therese D.; Williams, Ryan T.; Polanin, Joshua R.; Wu-Bohanon, Meng-Jia – Society for Research on Educational Effectiveness, 2012
The purpose of this research is to investigate the heterogeneity of per-pupil expenditure (PPE) slope estimates in predicting student achievement. The research question guiding this project is: how does the measured relationship between per-pupil expenditure and student achievement vary across studies that use different models? In concert with SREE's 2012 conference mission…
Descriptors: Productivity, Expenditures, Academic Achievement, Regression (Statistics)
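A hedged sketch of how between-study heterogeneity in slope estimates is typically quantified (made-up numbers; the DerSimonian-Laird estimator is used here as a standard choice, not necessarily the authors'):

```python
# Random-effects heterogeneity summary for a set of slope estimates
import numpy as np

slopes = np.array([0.10, 0.04, 0.22, -0.03, 0.15])   # hypothetical PPE slopes
se = np.array([0.05, 0.06, 0.08, 0.04, 0.07])        # their standard errors

w = 1 / se**2
fixed = np.sum(w * slopes) / np.sum(w)                # inverse-variance pooled slope
Q = np.sum(w * (slopes - fixed) ** 2)                 # Cochran's Q statistic
df_q = len(slopes) - 1
c = np.sum(w) - np.sum(w**2) / np.sum(w)
tau2 = max(0.0, (Q - df_q) / c)                       # DerSimonian-Laird tau^2
print(f"Q={Q:.2f}, tau^2={tau2:.4f}")
```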
Reardon, Sean F. – Society for Research on Educational Effectiveness, 2010
Instrumental variable estimators hold the promise of enabling researchers to estimate the effects of educational treatments that are not (or cannot be) randomly assigned but that may be affected by randomly assigned interventions. Examples of the use of instrumental variables in such cases are increasingly common in educational and social science…
Descriptors: Social Science Research, Least Squares Statistics, Computation, Correlation
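A minimal two-stage least squares sketch with hypothetical variable names, in which a randomly assigned offer instruments for actual participation (a standard IV setup, not the paper's application); note that the naive second-stage standard errors below are not valid, only the point estimate is illustrative:

```python
# Manual 2SLS: stage 1 predicts take-up from the instrument, stage 2 uses the fit
import numpy as np
import pandas as pd
import statsmodels.formula.api as smf

rng = np.random.default_rng(4)
n = 2000
offer = rng.integers(0, 2, size=n)                      # randomized instrument
ability = rng.normal(size=n)                            # unobserved confounder
takeup = ((0.9 * offer + 0.5 * ability + rng.normal(size=n)) > 0.5).astype(int)
y = 0.4 * takeup + ability + rng.normal(size=n)
df = pd.DataFrame({"offer": offer, "takeup": takeup, "y": y})

stage1 = smf.ols("takeup ~ offer", data=df).fit()
df["takeup_hat"] = stage1.fittedvalues
stage2 = smf.ols("y ~ takeup_hat", data=df).fit()
print(stage2.params["takeup_hat"])                      # 2SLS point estimate
```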
Nelson, Michael C.; Cordray, David S.; Hulleman, Chris S.; Darrow, Catherine L.; Sommer, Evan C. – Society for Research on Educational Effectiveness, 2010
An educational intervention's effectiveness is judged by whether it produces positive outcomes for students, with the randomized controlled trial (RCT) as a valuable tool for determining intervention effects. However, the intervention-as-implemented in an experiment frequently differs from the intervention-as-designed, making it unclear whether…
Descriptors: Intervention, Program Effectiveness, Demonstration Programs, Experimental Programs
Unlu, Fatih; Yamaguchi, Ryoko; Bernstein, Larry; Edmunds, Julie – Society for Research on Educational Effectiveness, 2010
This paper addresses methodological issues arising from an experimental study of North Carolina's Early College High School Initiative, a four-year longitudinal experimental study funded by the Institute of Education Sciences. North Carolina implemented the Early College High School (ECHS) Initiative in response to low high school graduation rates.…
Descriptors: Control Groups, High School Students, Graduation Rate, Course Selection (Students)