Huey T. Chen; Liliana Morosanu; Victor H. Chen – Asia Pacific Journal of Education, 2024
The Campbellian validity typology has served as a foundation for outcome evaluation and for developing evidence-based interventions for decades. As such, randomized controlled trials were preferred for outcome evaluation. However, some evaluators dispute the validity typology's position that randomized controlled trials are the best design…
Descriptors: Evaluation Methods, Systems Approach, Intervention, Evidence Based Practice
Matthew J. Mayhew; Christa E. Winkler – Journal of Postsecondary Student Success, 2024
Higher education professionals are often tasked with providing evidence to stakeholders that programs, services, and practices implemented on their campuses contribute to student success. Furthermore, in the absence of a solid base of evidence related to effective practices, higher education researchers and practitioners are left questioning what…
Descriptors: Higher Education, Educational Practices, Evidence Based Practice, Program Evaluation
Christina Weiland; Rebecca Unterman; Susan Dynarski; Rachel Abenavoli; Howard Bloom; Breno Braga; Anne-Marie Faria; Erica Greenberg; Brian A. Jacob; Jane Arnold Lincove; Karen Manship; Meghan McCormick; Luke Miratrix; Tomás E. Monarrez; Pamela Morris-Perez; Anna Shapiro; Jon Valant; Lindsay Weixler – Grantee Submission, 2024
Lottery-based identification strategies offer potential for generating the next generation of evidence on U.S. early education programs. The authors' collaborative network of five research teams applying this design in early education settings and methods experts has identified six challenges that need to be carefully considered in this next…
Descriptors: Early Childhood Education, Program Evaluation, Evaluation Methods, Admission (School)
Christina Weiland; Rebecca Unterman; Susan Dynarski; Rachel Abenavoli; Howard Bloom; Breno Braga; Anne-Marie Faria; Erica Greenberg; Brian A. Jacob; Jane Arnold Lincove; Karen Manship; Meghan McCormick; Luke Miratrix; Tomás E. Monarrez; Pamela Morris-Perez; Anna Shapiro; Jon Valant; Lindsay Weixler – AERA Open, 2024
Lottery-based identification strategies offer potential for generating the next generation of evidence on U.S. early education programs. The authors' collaborative network of five research teams applying this design in early education settings and methods experts has identified six challenges that need to be carefully considered in this next…
Descriptors: Early Childhood Education, Program Evaluation, Evaluation Methods, Admission (School)
Bower, Kyle L. – American Journal of Evaluation, 2022
The purpose of this paper is to introduce the Five-Level Qualitative Data Analysis (5LQDA) method for ATLAS.ti as a way to intentionally design methodological approaches applicable to the field of evaluation. To demonstrate my analytical process using ATLAS.ti, I use examples from an existing evaluation of a STEM Peer Learning Assistant program.…
Descriptors: Qualitative Research, Data Analysis, Program Evaluation, Evaluation Methods
Christina Weiland; Rebecca Unterman; Susan Dynarski; Rachel Abenavoli; Howard Bloom; Breno Braga; Anne-Marie Faria; Erica Greenberg; Brian Jacob; Jane Arnold Lincove; Karen Manship; Meghan McCormick; Luke Miratrix; Tomás E. Monarrez; Pamela Morris-Perez; Anna Shapiro; Jon Valant; Lindsay Weixler – Annenberg Institute for School Reform at Brown University, 2023
Lottery-based identification strategies offer potential for generating the next generation of evidence on U.S. early education programs. Our collaborative network of five research teams applying this design in early education and methods experts has identified six challenges that need to be carefully considered in this next context: (1) available…
Descriptors: Early Childhood Education, Program Evaluation, Evaluation Methods, Admission (School)
Radhakrishna, Rama; Chaudhary, Anil Kumar; Tobin, Daniel – Journal of Extension, 2019
We present a framework to help those working in Extension connect program designs with appropriate evaluation designs to improve evaluation. The framework links four distinct Extension program domains--service, facilitation, content transformation, and transformative education--with three types of evaluation design--preexperimental,…
Descriptors: Extension Education, Program Design, Evaluation Methods, Research Design
Chelsea T. Morris – Sage Research Methods Cases, 2024
This case study is based on a program evaluation of a professional certificate program that trains early childhood care and education providers to build and support young children's emotional literacy. The research project described in the case study will address approaches to methodological combination, justifying research design and changes to…
Descriptors: Program Evaluation, Teacher Certification, Early Childhood Teachers, Early Childhood Education
What Works Clearinghouse, 2022
Education decisionmakers need access to the best evidence about the effectiveness of education interventions, including practices, products, programs, and policies. It can be difficult, time consuming, and costly to access and draw conclusions from relevant studies about the effectiveness of interventions. The What Works Clearinghouse (WWC)…
Descriptors: Program Evaluation, Program Effectiveness, Standards, Educational Research
Hughes, Katherine L.; Miller, Trey; Reese, Kelly – Grantee Submission, 2021
This report from the Career and Technical Education (CTE) Research Network Lead team provides final results from an evaluability assessment of CTE programs that feasibly could be evaluated using a rigorous experimental design. Evaluability assessments (also called feasibility studies) are used in education and other fields, such as international…
Descriptors: Program Evaluation, Vocational Education, Evaluation Methods, Educational Research
Daniels, Katherine Nelson – ProQuest LLC, 2018
Traditional pre-test (TpT)/post-test (PT) and retrospective pre-test (RpT)/post-test (PT) designs are used to collect data on self-reported measures to assess the magnitude of change that occurs from interventions. If measurement invariance does not exist across the measurement occasions within these research designs, it is inappropriate to…
Descriptors: Pretests Posttests, Evaluation Methods, Intervention, Program Evaluation
Wong, Vivian C.; Steiner, Peter M.; Anglin, Kylie L. – Grantee Submission, 2018
Given the widespread use of non-experimental (NE) methods for assessing program impacts, there is a strong need to know whether NE approaches yield causally valid results in field settings. In within-study comparison (WSC) designs, the researcher compares treatment effects from an NE with those obtained from a randomized experiment that shares the…
Descriptors: Evaluation Methods, Program Evaluation, Program Effectiveness, Comparative Analysis
Wing, Coady; Bello-Gomez, Ricardo A. – American Journal of Evaluation, 2018
Treatment effect estimates from a "regression discontinuity design" (RDD) have high internal validity. However, the arguments that support the design apply to a subpopulation that is narrower and usually different from the population of substantive interest in evaluation research. The disconnect between RDD population and the…
Descriptors: Regression (Statistics), Research Design, Validity, Evaluation Methods
Eddy, Pamela L. – Community College Journal of Research and Practice, 2017
Funded grant projects all involve some form of evaluation, and Advanced Technological Education (ATE) grants are no exception. Program evaluation is critical not only for determining whether a project has met its intended and desired outcomes; the evaluation process is also a central feature of the grant application itself.…
Descriptors: Program Evaluation, Technology Education, Grants, Federal Programs
Louie, Josephine; Rhoads, Christopher; Mark, June – American Journal of Evaluation, 2016
Interest in the regression discontinuity (RD) design as an alternative to randomized control trials (RCTs) has grown in recent years. There is little practical guidance, however, on conditions that would lead to a successful RD evaluation or the utility of studies with underpowered RD designs. This article describes the use of RD design to…
Descriptors: Regression (Statistics), Program Evaluation, Algebra, Supplementary Education