ERIC Number: EJ1442668
Record Type: Journal
Publication Date: 2024-Oct
Pages: 37
Abstractor: As Provided
ISBN: N/A
ISSN: ISSN-1076-9986
EISSN: EISSN-1935-1054
Available Date: N/A
Combining Human and Automated Scoring Methods in Experimental Assessments of Writing: A Case Study Tutorial
Reagan Mozer; Luke Miratrix; Jackie Eunjung Relyea; James S. Kim
Journal of Educational and Behavioral Statistics, v49 n5 p780-816 2024
In a randomized trial that collects text as an outcome, traditional approaches for assessing treatment impact require that each document first be manually coded for constructs of interest by human raters. An impact analysis can then be conducted to compare treatment and control groups, using the hand-coded scores as a measured outcome. This process is both time- and labor-intensive, which creates a persistent barrier to large-scale assessments of text. Furthermore, enriching one's understanding of an observed impact on text outcomes via secondary analyses can be difficult without additional scoring effort. The purpose of this article is to provide a pipeline for using machine-based text analytic and data mining tools to augment traditional text-based impact analysis by analyzing impacts across an array of automatically generated text features. In this way, we can explore what an overall impact signifies in terms of how the text has evolved due to treatment. Through a case study based on a recent field trial in education, we show that machine learning can indeed enrich experimental evaluations of text by providing a more comprehensive and fine-grained picture of the mechanisms that lead to stronger argumentative writing in a first- and second-grade content literacy intervention. Relying exclusively on human scoring, by contrast, is a lost opportunity. Overall, the workflow and analytical strategy we describe can serve as a template for researchers interested in performing their own experimental evaluations of text.
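The workflow the abstract describes — deriving features from each document automatically and estimating treatment impacts on each feature — can be illustrated with a minimal sketch. This is not the authors' actual pipeline; the feature extractor (word count and type-token ratio) and the `feature_impact` helper are hypothetical stand-ins for the richer text analytics the article develops.

```python
# Minimal sketch (assumed, not the article's implementation): compute
# simple automatic text features per essay, then compare treatment and
# control group means for one feature, with a Welch-style standard error.
from statistics import mean, stdev
import math

def extract_features(text):
    """Hypothetical feature extractor: word count and type-token ratio."""
    words = text.lower().split()
    return {
        "word_count": len(words),
        "type_token_ratio": len(set(words)) / len(words) if words else 0.0,
    }

def feature_impact(docs, feature):
    """Difference in group means for one feature, treatment minus control."""
    t_vals = [extract_features(d["text"])[feature] for d in docs if d["treated"]]
    c_vals = [extract_features(d["text"])[feature] for d in docs if not d["treated"]]
    diff = mean(t_vals) - mean(c_vals)
    se = math.sqrt(stdev(t_vals) ** 2 / len(t_vals) + stdev(c_vals) ** 2 / len(c_vals))
    return {"estimate": diff, "se": se}

# Toy corpus: each record pairs an essay with its treatment indicator.
docs = [
    {"treated": True, "text": "the dog runs fast because it is strong"},
    {"treated": True, "text": "plants grow because sunlight gives energy"},
    {"treated": False, "text": "the dog runs"},
    {"treated": False, "text": "plants grow"},
]
print(feature_impact(docs, "word_count"))
```

In the article's setting, many such feature-level contrasts would be run alongside the impact analysis on human-coded scores, giving a finer-grained picture of how treatment changed the writing itself.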
Descriptors: Scoring, Evaluation Methods, Writing Evaluation, Comparative Analysis, Computer Software, Learning Analytics, Artificial Intelligence, Persuasive Discourse, Grade 1, Grade 2, Elementary School Students, Intervention, Evaluators, Literacy Education, Essays, Writing Instruction
SAGE Publications. 2455 Teller Road, Thousand Oaks, CA 91320. Tel: 800-818-7243; Tel: 805-499-9774; Fax: 800-583-2665; e-mail: journals@sagepub.com; Web site: https://sagepub.com
Publication Type: Journal Articles; Reports - Research
Education Level: Early Childhood Education; Elementary Education; Grade 1; Primary Education; Grade 2
Audience: N/A
Language: English
Sponsor: Institute of Education Sciences (ED)
Authoring Institution: N/A
IES Funded: Yes
Grant or Contract Numbers: R305D220032
Author Affiliations: N/A