Philippakos, Zoi A., Ed.; Howell, Emily, Ed.; Pellegrino, Anthony, Ed. – Guilford Press, 2021
Effective research in educational settings requires collaboration between researchers and school-based practitioners to codesign instruction and assessment, analyze findings to inform subsequent iterations, and make thoughtful revisions. This innovative reference and course text examines the theory and practice of design-based research (DBR), an…
Descriptors: Educational Research, Theory Practice Relationship, Research Methodology, Intervention
Hill, Carolyn J.; Scher, Lauren; Haimson, Joshua; Granito, Kelly – National Center for Education Evaluation and Regional Assistance, 2023
Implementation analyses conducted as part of impact studies can help educators know whether a tested intervention is likely to be a good fit for their own settings. This guide can help researchers design and conduct these kinds of analyses. The guide provides steps and recommendations about ways to specify implementation research questions, assess…
Descriptors: Program Implementation, Intervention, Educational Research, Context Effect
Lee, Yen-Mei – Interactive Learning Environments, 2023
This study conducted a systematic literature review to identify the trends, impacts, and challenges of mobile microlearning (MML) research. Searching five academic databases from the fields of social science, engineering, and medical science, the review retrieved 26 scholarly articles published between 2015 and 2020. The study…
Descriptors: Educational Research, Telecommunications, Handheld Devices, Instructional Effectiveness
Mancenido, Zid – Annenberg Institute for School Reform at Brown University, 2022
Many teacher education researchers have expressed concerns with the lack of rigorous impact evaluations of teacher preparation practices. I summarize these various concerns as they relate to issues of internal validity, external validity, and measurement. I then assess the prevalence of these issues by reviewing 166 impact evaluations of teacher…
Descriptors: Teacher Education, Educational Research, Program Evaluation, Validity
What Works Clearinghouse, 2021
The What Works Clearinghouse (WWC) identifies existing research on educational interventions, assesses the quality of the research, and summarizes and disseminates the evidence from studies that meet WWC standards. The WWC aims to provide enough information so educators can use the research to make informed decisions in their settings. This…
Descriptors: Program Effectiveness, Intervention, Educational Research, Educational Quality
Porter, Kristin E. – Journal of Research on Educational Effectiveness, 2018
Researchers are often interested in testing the effectiveness of an intervention on multiple outcomes, for multiple subgroups, at multiple points in time, or across multiple treatment groups. The resulting multiplicity of statistical hypothesis tests can lead to spurious findings of effects. Multiple testing procedures (MTPs) are statistical…
Descriptors: Statistical Analysis, Program Effectiveness, Intervention, Hypothesis Testing
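As a rough illustration of the multiplicity problem this abstract describes, the Python sketch below adjusts a set of hypothetical p-values with three common multiple testing procedures via statsmodels; the values and the choice of procedures are illustrative assumptions, not taken from the paper.

    # Hypothetical p-values from tests of one intervention on five outcomes.
    from statsmodels.stats.multitest import multipletests

    p_values = [0.012, 0.034, 0.041, 0.22, 0.003]

    # Each procedure trades off power against control of spurious findings.
    for method in ("bonferroni", "holm", "fdr_bh"):
        reject, p_adj, _, _ = multipletests(p_values, alpha=0.05, method=method)
        print(method, [round(p, 3) for p in p_adj], list(reject))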
Foundation for Child Development, 2020
As the number of publicly funded early childhood education (ECE) programs increases, policymakers will need empirical evidence to justify the taxpayer investment. Such justification will require a stronger understanding of the essential components of an ECE program's design, as well as solid evidence on which components, or constellations of…
Descriptors: Early Childhood Education, Research Utilization, Outcomes of Education, Educational Research
Porter, Kristin E. – Grantee Submission, 2017
Researchers are often interested in testing the effectiveness of an intervention on multiple outcomes, for multiple subgroups, at multiple points in time, or across multiple treatment groups. The resulting multiplicity of statistical hypothesis tests can lead to spurious findings of effects. Multiple testing procedures (MTPs) are statistical…
Descriptors: Statistical Analysis, Program Effectiveness, Intervention, Hypothesis Testing
What Works Clearinghouse, 2017
The What Works Clearinghouse (WWC) evaluates research studies that look at the effectiveness of education programs, products, practices, and policies, which the WWC calls "interventions." Many studies of education interventions make claims about impacts on students' outcomes. Some studies have designs that enable readers to make causal…
Descriptors: Program Design, Program Development, Program Effectiveness, Program Evaluation
What Works Clearinghouse, 2015
The What Works Clearinghouse (WWC) Standards Briefs explain the rules the WWC uses to evaluate the quality of studies for practitioners, researchers, and policymakers. Attrition (loss of sample) occurs when individuals initially included in a study are not included in the final study analysis. Attrition is a common issue in education research and…
Descriptors: Program Effectiveness, Educational Research, Attrition (Research Studies), Student Attrition
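To make the attrition concept concrete, here is a minimal Python sketch, using hypothetical sample counts, of the two quantities attrition standards typically examine: overall attrition and differential attrition between treatment and control groups. The WWC's actual standards compare such rates against published boundaries not reproduced here.

    # Hypothetical counts; not from any WWC-reviewed study.
    randomized = {"treatment": 250, "control": 250}
    analyzed   = {"treatment": 200, "control": 225}

    # Attrition is the share of randomized individuals missing from the analysis.
    attrition = {g: 1 - analyzed[g] / randomized[g] for g in randomized}
    overall = 1 - sum(analyzed.values()) / sum(randomized.values())
    differential = abs(attrition["treatment"] - attrition["control"])

    print(f"overall attrition: {overall:.1%}")            # 15.0%
    print(f"differential attrition: {differential:.1%}")  # 10.0%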
Porter, Kristin E. – MDRC, 2016
In education research and in many other fields, researchers are often interested in testing the effectiveness of an intervention on multiple outcomes, for multiple subgroups, at multiple points in time, or across multiple treatment groups. The resulting multiplicity of statistical hypothesis tests can lead to spurious findings of effects. Multiple…
Descriptors: Statistical Analysis, Program Effectiveness, Intervention, Hypothesis Testing
Lohr, Sharon; Schochet, Peter Z.; Sanders, Elizabeth – National Center for Education Research, 2014
Suppose an education researcher wants to test the impact of a high school drop-out prevention intervention in which at-risk students attend classes to receive intensive summer school instruction. The district will allow the researcher to randomly assign students to the treatment classes or to the control group. Half of the students (the treatment…
Descriptors: Educational Research, Research Design, Data Analysis, Intervention
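The assignment scenario in this abstract can be sketched in a few lines of Python (hypothetical numbers): treated students are grouped into classes while control students are not, so clustering exists on only one side of the design.

    import numpy as np

    rng = np.random.default_rng(seed=0)
    n_students, n_classes = 40, 4

    # Randomly split students into treatment and control halves.
    students = np.arange(n_students)
    rng.shuffle(students)
    treatment, control = students[:n_students // 2], students[n_students // 2:]

    # Only the treatment arm is further clustered into classes.
    classes = np.array_split(treatment, n_classes)
    print("control group:", sorted(control.tolist()))
    for i, c in enumerate(classes):
        print(f"treatment class {i}:", sorted(c.tolist()))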
Crisp, Gloria; Baker, Vicki L.; Griffin, Kimberly A.; Lunsford, Laura Gail; Pifer, Meghan J. – ASHE Higher Education Report, 2017
The overarching purpose of this monograph is to move the mentoring conversation forward by offering an updated synthesis of the undergraduate mentoring scholarship published between 2008 and 2015. The compendium of research reviewed provides practitioners and researchers with an evidence-based view of the influence of mentoring on the academic and…
Descriptors: Mentors, Undergraduate Students, Interpersonal Relationship, Educational Research
Stuart, Elizabeth A.; Olsen, Robert B.; Bell, Stephen H.; Orr, Larry L. – Society for Research on Educational Effectiveness, 2012
While there has been increasing interest in external validity, most work to this point has assessed the similarity between a randomized trial sample and a population of interest (e.g., Stuart et al., 2010; Tipton, 2011). The goal of this research is to calculate empirical estimates of the external validity bias in educational intervention…
Descriptors: Validity, Bias, Computation, Outcome Measures
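As an illustration of the quantity at stake (simulated data, not the authors' estimates), the sketch below computes external validity bias as the gap between the average effect in a non-representative trial sample and the average effect in the full population.

    import numpy as np

    rng = np.random.default_rng(seed=1)
    n = 100_000
    x = rng.uniform(0, 1, n)        # covariate that moderates the effect
    effect = 2.0 * x                # true individual treatment effects
    population_ate = effect.mean()  # about 1.0

    # Suppose the trial recruits disproportionately from low-x individuals.
    in_trial = rng.uniform(0, 1, n) < (1 - x)
    sample_ate = effect[in_trial].mean()  # about 0.67

    print(f"population ATE: {population_ate:.2f}")
    print(f"trial-sample ATE: {sample_ate:.2f}")
    print(f"external validity bias: {sample_ate - population_ate:.2f}")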
Porter, Stephen R. – Online Submission, 2012
Selection bias is problematic when evaluating the effects of postsecondary interventions on college students, and can lead to biased estimates of program effects. While instrumental variables can be used to account for endogeneity due to self-selection, current practice requires that all five assumptions of instrumental variables be met in order…
Descriptors: Statistical Bias, College Students, Educational Research, Statistical Analysis
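Since the abstract turns on how instrumental variables handle self-selection, here is a minimal simulated sketch of the two-stage least squares logic; the data-generating process and coefficients are invented for illustration, not drawn from the paper.

    import numpy as np

    rng = np.random.default_rng(seed=2)
    n = 50_000
    z = rng.normal(size=n)                       # instrument
    u = rng.normal(size=n)                       # unobserved self-selection
    d = 0.8 * z + u + rng.normal(size=n)         # endogenous treatment
    y = 1.5 * d + 2.0 * u + rng.normal(size=n)   # true effect of d is 1.5

    # Naive OLS is biased upward because u drives both d and y.
    ols = np.cov(d, y)[0, 1] / np.var(d, ddof=1)

    # 2SLS: first stage predicts d from z; second stage uses the prediction.
    d_hat = z * (np.cov(z, d)[0, 1] / np.var(z, ddof=1))
    iv = np.cov(d_hat, y)[0, 1] / np.cov(d_hat, d)[0, 1]

    print(f"OLS estimate:  {ols:.2f}")   # biased above 1.5
    print(f"2SLS estimate: {iv:.2f}")    # close to 1.5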