ERIC Number: ED663040
Record Type: Non-Journal
Publication Date: 2024-Sep-19
Pages: N/A
Abstractor: As Provided
ISBN: N/A
ISSN: N/A
EISSN: N/A
Available Date: N/A
Conducting Impact Evaluations When Formative Data Leads to Yearly Programmatic Change: Insights for Educational Researchers
Kristin Gagnier; Laura Holian; Melissa Kurman
Society for Research on Educational Effectiveness
Evaluation designs seek both to provide evidence of impact and to collect formative data that helps researchers understand and improve programs and inform decisions. Formative data is commonly used at the onset of developing a new program, to modify and improve it before full implementation. However, formative data is often collected throughout the life of a program to shape timely decision-making and inform continuous quality improvement. This is especially true for cohort evaluation studies that aim to assess the impact of a program across multiple years or cohorts. Two intertwined evaluation issues follow: (1) program developers rely on the best knowledge and information available at the moment to inform their decision-making and cannot wait five years to make decisions; and (2) responding to formative data produces ongoing programmatic change, which means that an impact evaluation spanning multiple cohorts is actually evaluating a different variation of the program each year. Educational researchers and evaluators must be skilled in designing evaluations that supply timely formative data for decision-making while also addressing these year-to-year changes through detailed tracking and communication and, where needed, methodologically. They also need to be skilled in communicating with educational practitioners and other stakeholders in ways that preserve the integrity of the study while still allowing program developers to make programmatic tweaks and pursue continuous quality improvement.
In this paper, we explore these issues using applied examples from an evaluation of FirstHand, an informal STEM education program. FirstHand is a program of the non-profit University City Science Center and delivers free, multi-week STEM programming to middle and high school students who attend some of Philadelphia's most systematically under-resourced schools. Two years ago, our team partnered with FirstHand to conduct an impact study of the program through a 5-year randomized controlled trial to understand how participating in FirstHand affects students' beliefs, feelings of belonging in science, and achievement in science, and to collect formative data, which is the focus of this paper.
Through this work, we have collected two types of formative data: (1) observations to assess program quality using the STEM Program Quality Assessment (STEM-PQA), to determine how and when the program uses practices known to facilitate positive experiences and youth development; and (2) semi-structured interviews with small groups of students and school staff to explore perceived impact across different school cultures. Together, these data have offered new perspectives that have shaped decision-making and yielded timely programmatic improvements at both the program and staff levels. At the program level, observations and student and partner interviews have led to changes in daily programmatic activities, such as curated daily reflection pages in student lab notebooks and the addition of individual and class goal-setting to each session. At the staff level, formative data has provided self-directed professional learning opportunities: facilitators review recordings of sessions they taught and use those recordings to identify growth opportunities and facilitation goals based on the STEM-PQA rubric.
Multi-cohort designs always involve some natural change from year to year, but the changes during the first two years of our randomized controlled trial are a direct consequence of the evaluation design itself. Based on the formative data, facilitators are growing in their facilitation techniques and implementing new practices; they now use fidelity-of-implementation data to adjust and improve their own practice. Thus, in Year 3 the program will not be the same as it was in Year 1. How should evaluators and researchers handle such changes, particularly in the context of an RCT with additional cohorts to come? Our team's approach has been to document and track all changes as transparently as possible while ensuring that the key components of the program remain the same. In this paper, we share these lessons and open our presentation to attendee dialogue. Our goal is to foster a meaningful conversation between program developers and evaluators about which formative data collection opportunities can lead to continued program discovery and innovation well into a program's lifespan, and about how best to document and address these changes in an ongoing impact evaluation.
Descriptors: Educational Research, Research Design, Formative Evaluation, Evaluation Research, STEM Education, Change
Society for Research on Educational Effectiveness. 2040 Sheridan Road, Evanston, IL 60208. Tel: 202-495-0920; e-mail: contact@sree.org; Web site: https://www.sree.org/
Publication Type: Reports - Research
Education Level: N/A
Audience: N/A
Language: English
Sponsor: N/A
Authoring Institution: Society for Research on Educational Effectiveness (SREE)
Grant or Contract Numbers: N/A
Author Affiliations: N/A