Publication Date: In 2025 (0); Since 2024 (2); Since 2021, last 5 years (5); Since 2016, last 10 years (8); Since 2006, last 20 years (10)
Source: Grantee Submission (10)
Publication Type: Reports - Research (8); Speeches/Meeting Papers (7); Reports - Evaluative (2); Journal Articles (1)
Education Level: Higher Education (2); Postsecondary Education (2); Junior High Schools (1); Middle Schools (1); Secondary Education (1)
Location: Mississippi (1)
Laws, Policies, & Programs
Assessments and Surveys
What Works Clearinghouse Rating
Andreea Dutulescu; Stefan Ruseti; Denis Iorga; Mihai Dascalu; Danielle S. McNamara – Grantee Submission, 2024
The process of generating challenging and appropriate distractors for multiple-choice questions is a complex and time-consuming task. Existing methods for automated generation either have limitations in proposing challenging distractors or fail to effectively filter out incorrect choices that closely resemble the correct answer, share synonymous…
Descriptors: Multiple Choice Tests, Artificial Intelligence, Attention, Natural Language Processing
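The filtering step the abstract alludes to can be pictured with a small sketch (my illustration, not the authors' method): candidate distractors that are nearly synonymous with the correct answer are dropped when they exceed a similarity threshold; plain TF-IDF cosine similarity stands in here for whatever semantic model is actually used.

```python
# Minimal sketch (not the authors' system): drop candidate distractors that are
# too similar to the correct answer, using TF-IDF cosine similarity as a stand-in
# for a semantic filter.
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.metrics.pairwise import cosine_similarity

def filter_distractors(correct_answer, candidates, max_similarity=0.7):
    """Keep only candidates whose similarity to the correct answer is below the threshold."""
    texts = [correct_answer] + candidates
    vectors = TfidfVectorizer().fit_transform(texts)
    sims = cosine_similarity(vectors[0:1], vectors[1:]).ravel()
    return [c for c, s in zip(candidates, sims) if s < max_similarity]

# Toy usage: a candidate that largely restates the key should score high and be dropped,
# while plausible-but-wrong options survive.
print(filter_distractors(
    "photosynthesis converts light energy into chemical energy",
    ["photosynthesis turns light energy into chemical energy",
     "cellular respiration releases energy from glucose",
     "mitosis divides a cell into two daughter cells"]))
```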
Philip I. Pavlik; Luke G. Eglington – Grantee Submission, 2023
This paper presents a tool for creating student models in logistic regression. Creating student models has typically been done by expert selection of the appropriate terms, beginning with models as simple as IRT or AFM but more recently with highly complex models like BestLR. While alternative methods exist to select the appropriate predictors for…
Descriptors: Students, Models, Regression (Statistics), Alternative Assessment
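For readers unfamiliar with this model family, a minimal AFM-style logistic regression student model looks roughly like the following (a toy sketch with invented data, not the paper's tool): correctness is predicted from student and skill intercepts plus a per-skill slope on practice opportunity.

```python
# Minimal sketch of an AFM-style logistic regression student model (illustration only):
# correct ~ student intercept + skill intercept + skill slope * opportunity count.
import pandas as pd
from sklearn.linear_model import LogisticRegression

# Toy practice log: one row per student-skill attempt.
attempts = pd.DataFrame({
    "student":     ["s1", "s1", "s1", "s2", "s2", "s2", "s2", "s1"],
    "skill":       ["add", "add", "sub", "add", "sub", "sub", "add", "sub"],
    "opportunity": [1, 2, 1, 1, 1, 2, 2, 2],   # prior tries on that skill
    "correct":     [0, 1, 0, 1, 0, 1, 1, 1],
})

# One-hot student and skill intercepts, plus a per-skill learning-rate term.
X = pd.get_dummies(attempts[["student", "skill"]])
for s in attempts["skill"].unique():
    X[f"opp_{s}"] = attempts["opportunity"] * (attempts["skill"] == s)

model = LogisticRegression(max_iter=1000).fit(X, attempts["correct"])
print(dict(zip(X.columns, model.coef_[0].round(2))))
```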
Sami Baral; Eamon Worden; Wen-Chiang Lim; Zhuang Luo; Christopher Santorelli; Ashish Gurung; Neil Heffernan – Grantee Submission, 2024
The effectiveness of feedback in enhancing learning outcomes is well documented within Educational Data Mining (EDM). Various prior studies have explored methodologies to enhance the effectiveness of feedback to students in various ways. Recent developments in Large Language Models (LLMs) have extended their utility in enhancing automated…
Descriptors: Automation, Scoring, Computer Assisted Testing, Natural Language Processing
Peter Organisciak; Selcuk Acar; Denis Dumas; Kelly Berthiaume – Grantee Submission, 2023
Automated scoring for divergent thinking (DT) seeks to overcome a key obstacle to creativity measurement: the effort, cost, and reliability of scoring open-ended tests. For a common test of DT, the Alternate Uses Task (AUT), the primary automated approach casts the problem as a semantic distance between a prompt and the resulting idea in a text…
Descriptors: Automation, Computer Assisted Testing, Scoring, Creative Thinking
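The semantic-distance approach can be sketched in a few lines (an assumed setup using an off-the-shelf sentence-embedding model, not the authors' scoring system): an Alternate Uses Task response is scored as more original the farther its embedding lies from the prompt's.

```python
# Minimal sketch of semantic-distance AUT scoring (a stand-in, not the authors' model):
# originality = 1 - cosine similarity between prompt and response embeddings.
from sentence_transformers import SentenceTransformer, util

model = SentenceTransformer("all-MiniLM-L6-v2")  # assumed off-the-shelf embedding model

def originality(prompt, response):
    """Higher values mean the response is semantically farther from the prompt."""
    emb = model.encode([prompt, response], convert_to_tensor=True)
    return 1.0 - util.cos_sim(emb[0], emb[1]).item()

# Toy usage with the classic "brick" prompt: a mundane use vs. a more remote one.
print(originality("brick", "build a wall"))
print(originality("brick", "grind it up as pigment for paint"))
```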
Botarleanu, Robert-Mihai; Dascalu, Mihai; Allen, Laura K.; Crossley, Scott Andrew; McNamara, Danielle S. – Grantee Submission, 2022
Automated scoring of student language is a complex task that requires systems to emulate complex and multi-faceted human evaluation criteria. Summary scoring brings an additional layer of complexity to automated scoring because it involves two texts of differing lengths that must be compared. In this study, we present our approach to automate…
Descriptors: Automation, Scoring, Documentation, Likert Scales
Botarleanu, Robert-Mihai; Dascalu, Mihai; Crossley, Scott Andrew; McNamara, Danielle S. – Grantee Submission, 2020
A key writing skill is the capability to clearly convey desired meaning using available linguistic knowledge. Consequently, writers must select from a large array of idioms, vocabulary terms that are semantically equivalent, and discourse features that simultaneously reflect content and allow readers to grasp meaning. In many cases, a simplified…
Descriptors: Natural Language Processing, Writing Skills, Difficulty Level, Reading Comprehension
Stewart, Angela E. B.; Vrzakova, Hana; Sun, Chen; Yonehiro, Jade; Stone, Cathlyn Adele; Duran, Nicholas D.; Shute, Valerie; D'Mello, Sidney K. – Grantee Submission, 2019
Collaborative problem solving (CPS) is a crucial 21st century skill; however, current technologies fall short of effectively supporting CPS processes, especially for remote, computer-enabled interactions. In order to develop next-generation computer-supported collaborative systems that enhance CPS processes and outcomes by monitoring and…
Descriptors: Problem Solving, Cooperative Learning, Language Usage, Speech Communication
Cai, Zhiqiang; Li, Haiying; Hu, Xiangen; Graesser, Art – Grantee Submission, 2016
This paper provides an alternative way of document representation by treating topic probabilities as a vector representation for words and representing a document as a combination of the word vectors. A comparison on summary data shows that this representation is more effective in document classification. [This paper was published in:…
Descriptors: Probability, Natural Language Processing, Models, Automation
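A minimal reconstruction of the representation described (my sketch, not the paper's code): fit a topic model, normalize the topic-word matrix column-wise so each word gets a topic-probability vector, and average those vectors over the words in a document.

```python
# Minimal sketch (illustration only): words as topic-probability vectors,
# documents as the average of their words' vectors.
import numpy as np
from sklearn.feature_extraction.text import CountVectorizer
from sklearn.decomposition import LatentDirichletAllocation

docs = ["students solve algebra problems",
        "the tutor gives feedback on essays",
        "essay feedback improves writing",
        "algebra practice problems for students"]

counts = CountVectorizer().fit(docs)
X = counts.transform(docs)
lda = LatentDirichletAllocation(n_components=2, random_state=0).fit(X)

# Roughly P(topic | word): normalize each vocabulary column of the topic-word matrix.
word_topic = lda.components_ / lda.components_.sum(axis=0)   # shape (topics, vocab)

def doc_vector(text):
    """Average the topic vectors of the in-vocabulary words in the text."""
    idx = [counts.vocabulary_[w] for w in counts.build_analyzer()(text)
           if w in counts.vocabulary_]
    return word_topic[:, idx].mean(axis=1)

print(doc_vector("students practice algebra"))
```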
Gorin, Joanna S.; O'Reilly, Tenaha; Sabatini, John; Song, Yi; Deane, Paul – Grantee Submission, 2014
Recent advances in cognitive science and psychometrics have expanded the possibilities for the next generation of literacy assessment as an integrated domain (Bennett, 2011a; Deane, Sabatini, & O'Reilly, 2011; Leighton & Gierl, 2011; Sabatini, Albro, & O'Reilly, 2012). In this paper, we discuss four key areas supporting innovations in…
Descriptors: Literacy Education, Evaluation Methods, Measurement Techniques, Student Evaluation
McNamara, Danielle S.; Crossley, Scott A.; Roscoe, Rod – Grantee Submission, 2013
The Writing Pal is an intelligent tutoring system that provides writing strategy training. A large part of its artificial intelligence resides in the natural language processing algorithms to assess essay quality and guide feedback to students. Because writing is often highly nuanced and subjective, the development of these algorithms must…
Descriptors: Intelligent Tutoring Systems, Natural Language Processing, Writing Instruction, Feedback (Response)