Peer reviewed
ERIC Number: ED656932
Record Type: Non-Journal
Publication Date: 2021-Sep-27
Pages: N/A
Abstractor: As Provided
ISBN: N/A
ISSN: N/A
EISSN: N/A
Available Date: N/A
Examining the Psychometric Properties of an Observational Measure of Interactional Quality in Early Education and Care Settings
Lily Fritz; Emily Hanno; Junlei Li; Stephanie Jones; Nonie Lesaux
Society for Research on Educational Effectiveness
Background & Context: Early childhood contexts characterized by high-quality adult-child interactions have important consequences for children's development (Hamre, 2014; Peisner-Feinberg et al., 2001). The use of classroom observation tools to measure the quality of such interactions in early education and care (EEC) settings has become commonplace in both research and practice as part of quality rating and improvement systems (QRIS) across the United States. One such example is the Classroom Assessment Scoring System (CLASS; Pianta et al., 2008), which is used in over 44% of states' QRIS (ECQA Center, 2017). While existing tools tend to offer overall insight into global features of quality within early childhood classrooms, they have important practical and psychometric limitations (Mantzicopoulos et al., 2018; Sandilos & DiPerna, 2011). Additionally, these tools adopt a near-exclusive focus on school- and center-based early childhood contexts (Li, 2019), making it challenging to compare quality across the multitude of informal and formal arrangements upon which young children and their families rely. This study offers initial evidence for the Simple Interactions (SI) Tool, a practice-rooted measure that aims to capture the nuance and complexity of dyadic interactions between adults and children across the full EEC landscape. Research Questions: The aim of this exploratory study was to examine the psychometric characteristics of the SI Tool in order to refine its use as a quality observation instrument. Specifically, our primary aim was to decompose the reliability of SI dimension scores by source of variance (settings, items, and occasions). Our secondary aim was to investigate the concurrent validity of the tool by examining the extent to which scores on the SI Tool were associated with existing measures of interaction quality.
Setting: Data were collected as part of a larger longitudinal study of young children and their EEC experiences in Massachusetts. The analytic sample of EEC settings in the current study represents the full range of EEC program types and the geographic diversity of the state. Participants: Participants in this study included adult caregivers in 694 EEC settings, of which over half were school-based settings (40.06% kindergarten and 10.23% pre-k classrooms). Other program types included other group-based settings (25.22% community-based centers, 8.79% Head Start centers, 6.05% family child care settings), as well as more informal, unlicensed settings (9.65%; e.g., parent, grandparent, and non-relative care). Program & Research Design: Observations of EEC settings using the SI Tool were conducted during the second year of the longitudinal study (2018-19). Settings were recruited to participate if they had a child enrolled in the broader longitudinal study that began in 2017. Children were recruited through a number of diverse approaches to ensure sociodemographic and geographic representation across the state. Data Collection and Analysis: In-person observations were conducted by trained observers over the course of several hours. During these observations, observers applied the SI Tool (Li, 2014) to capture four key dimensions of high-quality adult-child interactions. Observations using the SI Tool were made up of multiple short, 5-10 minute occasions (or "sweeps"). In each sweep, a single observer assessed the quality of observed interactions along the four SI dimensions using a five-point scale (with a score of 1 representing low and 5 high quality). Although observers aimed to complete five sweeps per setting, the dynamic nature of EEC settings meant that in many cases observers were unable to conduct a full set of sweeps.
Consequently, the following analyses use data from four sweeps, which was the modal number of sweeps across the sample and allows for maximum sample retention. To address the first research question, a fully crossed G study was carried out to decompose SI score variance by the relevant facets of error (settings, items, and occasions), with a subsequent D study analyzing the reliability and precision of score estimates. To address the second research aim, concurrent validity was examined via pairwise correlations between SI and CLASS subscale scores to determine the degree of overlap in the constructs measured by the two scales. Results: The estimated variance components suggest that almost one third (29.9%) of the variance in observed mean SI scores can be explained by true score variance (i.e., between-setting differences), representing around 0.61 standard deviations on the 1-5 SI scoring scale. A further 12.7% of the variation in SI dimension scores comes from variation within settings from sweep to sweep. Subsequent D study analyses estimate that the absolute reliability coefficient for a four-item, four-occasion design is relatively high at 0.74. Moreover, significant correlations between SI and CLASS dimension scores provide evidence for the concurrent validity of the SI Tool (see Table 4), though the relatively low correlations between subscale scores on the two measures nonetheless suggest that each tool likely captures distinct aspects of interaction quality. Conclusions: Findings from this study have key implications for the field of EEC quality measurement. The relatively high generalizability coefficient offers evidence for the reliability of the tool as a measure of interactional quality, but substantial setting-by-occasion ("sweep-level") variation in SI scores also offers a compelling argument for conducting multiple observations of the same setting across a variety of contexts and activities.
Likewise, the results of the concurrent validity analyses suggest that while SI item scores are significantly correlated with some CLASS subscale scores, the estimated relationships are relatively weak. Taken together, these findings indicate that interaction quality may not be a static characteristic of an EEC setting but rather a more fluid and dynamic process in constant flux over the course of the day. This study suggests there is merit to using more fine-grained, repeated measures of interaction quality like the SI Tool as a supplement to more global measures like CLASS. Additional implications for practice and ongoing research -- as well as the limitations of this study -- will be explored in the final presentation.
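The G- and D-study logic described above can be sketched in code. The following is a minimal illustration on simulated data, not the study's data or analysis scripts: it generates scores for a fully crossed settings x items x occasions design (mirroring the SI Tool's four dimensions and four sweeps), estimates variance components with the standard ANOVA expected-mean-squares solutions, and computes the absolute (dependability) coefficient for a four-item, four-occasion design. All simulated variance magnitudes are hypothetical.

```python
import numpy as np

rng = np.random.default_rng(0)
n_s, n_i, n_o = 200, 4, 4  # settings x items (SI dimensions) x occasions (sweeps)

# Simulate a fully crossed design (hypothetical variance components)
s = rng.normal(0, 0.8, n_s)[:, None, None]    # setting effect (true score)
i = rng.normal(0, 0.2, n_i)[None, :, None]    # item effect
o = rng.normal(0, 0.2, n_o)[None, None, :]    # occasion effect
so = rng.normal(0, 0.5, (n_s, 1, n_o))        # setting x occasion ("sweep-level")
e = rng.normal(0, 0.6, (n_s, n_i, n_o))       # residual (s x i x o, e)
X = 3.0 + s + i + o + so + e

def g_study(X):
    """ANOVA variance-component estimates for a fully crossed s x i x o design."""
    ns, ni, no = X.shape
    mean = X.mean()
    ms, mi, mo = X.mean(axis=(1, 2)), X.mean(axis=(0, 2)), X.mean(axis=(0, 1))
    msi, mso, mio = X.mean(axis=2), X.mean(axis=1), X.mean(axis=0)
    # Sums of squares for main effects and two-way interactions
    SS_s = ni * no * ((ms - mean) ** 2).sum()
    SS_i = ns * no * ((mi - mean) ** 2).sum()
    SS_o = ns * ni * ((mo - mean) ** 2).sum()
    SS_si = no * ((msi - ms[:, None] - mi[None, :] + mean) ** 2).sum()
    SS_so = ni * ((mso - ms[:, None] - mo[None, :] + mean) ** 2).sum()
    SS_io = ns * ((mio - mi[:, None] - mo[None, :] + mean) ** 2).sum()
    SS_res = ((X - mean) ** 2).sum() - SS_s - SS_i - SS_o - SS_si - SS_so - SS_io
    MS = dict(s=SS_s / (ns - 1), i=SS_i / (ni - 1), o=SS_o / (no - 1),
              si=SS_si / ((ns - 1) * (ni - 1)), so=SS_so / ((ns - 1) * (no - 1)),
              io=SS_io / ((ni - 1) * (no - 1)),
              res=SS_res / ((ns - 1) * (ni - 1) * (no - 1)))
    # Solve the expected mean squares; negative estimates are truncated to zero
    v = {"sio,e": MS["res"]}
    v["si"] = max((MS["si"] - MS["res"]) / no, 0)
    v["so"] = max((MS["so"] - MS["res"]) / ni, 0)
    v["io"] = max((MS["io"] - MS["res"]) / ns, 0)
    v["s"] = max((MS["s"] - MS["si"] - MS["so"] + MS["res"]) / (ni * no), 0)
    v["i"] = max((MS["i"] - MS["si"] - MS["io"] + MS["res"]) / (ns * no), 0)
    v["o"] = max((MS["o"] - MS["so"] - MS["io"] + MS["res"]) / (ns * ni), 0)
    return v

def phi(v, ni, no):
    """D-study absolute (dependability) coefficient for ni items, no occasions."""
    abs_err = (v["i"] / ni + v["o"] / no + v["si"] / ni + v["so"] / no
               + v["io"] / (ni * no) + v["sio,e"] / (ni * no))
    return v["s"] / (v["s"] + abs_err)

v = g_study(X)
print({k: round(val, 3) for k, val in v.items()})
print("Phi(4 items, 4 occasions):", round(phi(v, 4, 4), 3))
```

Note how the setting x occasion component enters the absolute error term divided only by the number of occasions: this is why, as the abstract argues, sweep-to-sweep variation within settings makes additional observation occasions the most direct route to higher dependability.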
Society for Research on Educational Effectiveness. 2040 Sheridan Road, Evanston, IL 60208. Tel: 202-495-0920; e-mail: contact@sree.org; Web site: https://www.sree.org/
Publication Type: Reports - Research
Education Level: Early Childhood Education
Audience: N/A
Language: English
Sponsor: N/A
Authoring Institution: Society for Research on Educational Effectiveness (SREE)
Identifiers - Location: Massachusetts
Identifiers - Assessments and Surveys: Classroom Assessment Scoring System
Grant or Contract Numbers: N/A
Author Affiliations: N/A