Showing all 8 results
Andrews-Todd, Jessica; Steinberg, Jonathan; Flor, Michael; Forsyth, Carolyn M. – Grantee Submission, 2022
Competency in skills associated with collaborative problem solving (CPS) is critical for many contexts, including school, the workplace, and the military. Innovative approaches for assessing individuals' CPS competency are necessary, as traditional assessment types such as multiple-choice items are not well suited for such a process-oriented…
Descriptors: Automation, Classification, Cooperative Learning, Problem Solving
Peer reviewed
Wan, Qian; Crossley, Scott; Allen, Laura; McNamara, Danielle – Grantee Submission, 2020
In this paper, we extracted content-based and structure-based features of text to predict human annotations for claims and nonclaims in argumentative essays. We compared Logistic Regression, Bernoulli Naive Bayes, Gaussian Naive Bayes, Linear Support Vector Classification, Random Forest, and Neural Networks to train classification models. Random…
Descriptors: Persuasive Discourse, Essays, Writing Evaluation, Natural Language Processing
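The entry above names six classifier families. As a rough illustration, a minimal comparison of those families on a toy claim/non-claim task might look like the sketch below (scikit-learn, with placeholder sentences; the paper's actual content- and structure-based features are not reproduced here).

```python
# Sketch: comparing the classifier families named above on a toy
# claim/non-claim task with scikit-learn. The sentences and labels are
# placeholders; the paper's content- and structure-based features are
# not reproduced here.
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.linear_model import LogisticRegression
from sklearn.naive_bayes import BernoulliNB, GaussianNB
from sklearn.svm import LinearSVC
from sklearn.ensemble import RandomForestClassifier
from sklearn.neural_network import MLPClassifier
from sklearn.model_selection import cross_val_score

sentences = ["School uniforms should be mandatory.",    # claim
             "The survey was given to 200 students.",   # non-claim
             "Homework improves long-term retention.",  # claim
             "Results are shown in Table 2."]           # non-claim
labels = [1, 0, 1, 0]

# Dense features so GaussianNB can be used alongside the others.
X = TfidfVectorizer().fit_transform(sentences).toarray()

models = {
    "LogisticRegression": LogisticRegression(max_iter=1000),
    "BernoulliNB": BernoulliNB(),
    "GaussianNB": GaussianNB(),
    "LinearSVC": LinearSVC(),
    "RandomForest": RandomForestClassifier(n_estimators=100),
    "NeuralNetwork": MLPClassifier(hidden_layer_sizes=(32,), max_iter=500),
}
for name, model in models.items():
    scores = cross_val_score(model, X, labels, cv=2)  # tiny cv for toy data
    print(f"{name}: mean accuracy {scores.mean():.2f}")
```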
Crossley, Scott A.; Kim, Minkyung; Allen, Laura K.; McNamara, Danielle S. – Grantee Submission, 2019
Summarization is an effective strategy to promote and enhance learning and deep comprehension of texts. However, summarization is seldom implemented by teachers in classrooms because the manual evaluation of students' summaries requires time and effort. This problem has led to the development of automated models of summarization quality. However,…
Descriptors: Automation, Writing Evaluation, Natural Language Processing, Artificial Intelligence
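One ingredient that automated models of summarization quality commonly use is lexical overlap between the summary and its source text. The sketch below illustrates that single feature with TF-IDF cosine similarity; it is a stand-in, not one of the models the paper evaluates.

```python
# Sketch: one crude feature often used in automated summary evaluation,
# lexical overlap between summary and source via TF-IDF cosine similarity.
# An illustration only, not a model evaluated in the paper.
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.metrics.pairwise import cosine_similarity

source = ("Photosynthesis converts light energy into chemical energy. "
          "Plants use carbon dioxide and water to produce glucose and oxygen.")
summary = "Plants turn light, water, and carbon dioxide into glucose and oxygen."

tfidf = TfidfVectorizer().fit([source, summary])
vecs = tfidf.transform([source, summary])
overlap = cosine_similarity(vecs[0], vecs[1])[0, 0]
print(f"TF-IDF overlap with source: {overlap:.2f}")
```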
Wilson, Joshua; Rodrigues, Jessica – Grantee Submission, 2020
The present study leveraged advances in automated essay scoring (AES) technology to explore a proof of concept for a writing screener using the "Project Essay Grade" (PEG) program. First, the study investigated the extent to which an AES-scored multi-prompt writing screener accurately classified students as at risk of failing a Common…
Descriptors: Writing Tests, Screening Tests, Classification, Accuracy
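Evaluating such a screener comes down to how well its risk flags match the criterion outcome, typically summarized as sensitivity and specificity. A minimal sketch with made-up labels:

```python
# Sketch: evaluating a writing screener's risk classifications against a
# pass/fail criterion measure. The labels below are made up for illustration.
from sklearn.metrics import confusion_matrix

actual_fail = [1, 1, 0, 0, 1, 0, 0, 0, 1, 0]  # 1 = failed the criterion test
flagged     = [1, 1, 0, 0, 0, 0, 1, 0, 1, 0]  # 1 = screener flagged at risk

tn, fp, fn, tp = confusion_matrix(actual_fail, flagged).ravel()
sensitivity = tp / (tp + fn)  # share of failing students the screener caught
specificity = tn / (tn + fp)  # share of passing students correctly cleared
print(f"sensitivity {sensitivity:.2f}, specificity {specificity:.2f}")
```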
Peer reviewed
Afrin, Tazin; Wang, Elaine; Litman, Diane; Matsumura, Lindsay C.; Correnti, Richard – Grantee Submission, 2020
Automated writing evaluation systems can improve students' writing insofar as students attend to the feedback provided and revise their essay drafts in ways aligned with such feedback. Existing research on revision of argumentative writing in such systems, however, has focused on the types of revisions students make (e.g., surface vs. content)…
Descriptors: Writing (Composition), Persuasive Discourse, Revision (Written Composition), Documentation
Peer reviewed
Dascalu, Mihai; Allen, Laura K.; McNamara, Danielle S.; Trausan-Matu, Stefan; Crossley, Scott A. – Grantee Submission, 2017
Dialogism provides the grounds for building a comprehensive model of discourse, one focused on the multiplicity of perspectives (i.e., voices). Dialogism can be present in any type of text, with voices emerging as themes or recurrent topics in the discourse. In this study, we examine the extent to which differences between…
Descriptors: Dialogs (Language), Protocol Analysis, Discourse Analysis, Automation
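If voices are treated as recurrent topics, one simple way to surface them is topic modeling over discourse turns. The sketch below uses LDA in scikit-learn purely as an illustration of topic recurrence; it is not the dialogism model used in the paper.

```python
# Sketch: recovering recurrent topics ("voices") from discourse turns with
# LDA. Illustrates topic recurrence only; not the paper's dialogism model.
from sklearn.feature_extraction.text import CountVectorizer
from sklearn.decomposition import LatentDirichletAllocation

turns = [
    "The experiment measures reaction time under stress.",
    "Stress changes how quickly people react.",
    "The essay argues that school should start later.",
    "Later start times help students sleep more.",
]
counts = CountVectorizer(stop_words="english").fit(turns)
X = counts.transform(turns)

lda = LatentDirichletAllocation(n_components=2, random_state=0).fit(X)
terms = counts.get_feature_names_out()
for i, topic in enumerate(lda.components_):
    top = [terms[j] for j in topic.argsort()[-4:]]
    print(f"voice {i}: {top}")
```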
Sano, Makoto; Baker, Doris Luft; Collazo, Marlen; Le, Nancy; Kamata, Akihito – Grantee Submission, 2020
Purpose: Explore how reliably different automated scoring (AS) models score the expressive language and depth of vocabulary knowledge of young second-grade Latino English learners. Design/methodology/approach: Analyze a total of 13,471 English utterances from 217 Latino English learners with random forest, end-to-end memory networks, long…
Descriptors: English Language Learners, Hispanic American Students, Elementary School Students, Grade 2
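Reliability in automated scoring studies of this kind is often reported as quadratically weighted kappa between model and human scores. A hypothetical sketch (the statistic is an assumption here, not taken from the paper):

```python
# Sketch: agreement between an automated scorer and a human rater, measured
# with quadratically weighted kappa, a common reliability statistic in
# automated scoring studies. Scores below are made up for illustration.
from sklearn.metrics import cohen_kappa_score

human = [2, 3, 1, 4, 2, 3, 3, 1, 4, 2]
model = [2, 3, 2, 4, 2, 2, 3, 1, 3, 2]

qwk = cohen_kappa_score(human, model, weights="quadratic")
print(f"quadratic weighted kappa: {qwk:.2f}")
```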
McNamara, Danielle S.; Crossley, Scott A.; Roscoe, Rod D.; Allen, Laura K.; Dai, Jianmin – Grantee Submission, 2015
This study evaluates the use of a hierarchical classification approach to the automated assessment of essays. Automated essay scoring (AES) generally relies on machine learning techniques that compute essay scores using a set of text variables. Unlike previous studies that rely on regression models, this study computes essay scores using a hierarchical…
Descriptors: Automation, Scoring, Essays, Persuasive Discourse
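A hierarchical approach of this kind can be pictured as a two-stage pipeline: a coarse quality band is predicted first, then a fine-grained score within that band. The sketch below shows that general shape with placeholder data; it is not the paper's actual model or text variables.

```python
# Sketch: a two-stage hierarchical essay scorer. Stage 1 predicts a coarse
# quality band; stage 2 predicts the fine-grained score within that band.
# Essays, bands, and scores are placeholders for illustration only.
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.ensemble import RandomForestClassifier

essays = ["the essay is short and repeats its single point",
          "a brief piece with little evidence or structure",
          "a developed argument with evidence examples and a clear conclusion",
          "a well organized essay weighing counterarguments with strong evidence"]
bands  = [0, 0, 1, 1]   # 0 = low band, 1 = high band
scores = [1, 2, 5, 6]   # fine-grained score within each band

vec = TfidfVectorizer()
X = vec.fit_transform(essays)

# Stage 1: band classifier trained on all essays.
band_clf = RandomForestClassifier(n_estimators=50, random_state=0).fit(X, bands)

# Stage 2: one score classifier per band, trained on that band's essays only.
score_clfs = {}
for b in (0, 1):
    idx = [i for i, band in enumerate(bands) if band == b]
    score_clfs[b] = RandomForestClassifier(n_estimators=50, random_state=0).fit(
        X[idx], [scores[i] for i in idx])

def hierarchical_score(text):
    x = vec.transform([text])
    band = band_clf.predict(x)[0]
    return score_clfs[band].predict(x)[0]

print(hierarchical_score("a clear argument supported by evidence and examples"))
```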