Publication Date
  In 2025: 0
  Since 2024: 3
  Since 2021 (last 5 years): 15
  Since 2016 (last 10 years): 20
  Since 2006 (last 20 years): 22
Descriptor
  Models: 22
  Natural Language Processing: 22
  Artificial Intelligence: 10
  Semantics: 10
  Reading Comprehension: 8
  Automation: 7
  Classification: 6
  Computational Linguistics: 6
  Intelligent Tutoring Systems: 5
  Scores: 5
  Scoring: 5
Source
  Grantee Submission: 22
Publication Type
  Reports - Research: 20
  Speeches/Meeting Papers: 14
  Journal Articles: 5
  Information Analyses: 1
  Reports - Descriptive: 1
  Reports - Evaluative: 1
Education Level
  Elementary Education: 2
  High Schools: 2
  Higher Education: 2
  Junior High Schools: 2
  Middle Schools: 2
  Postsecondary Education: 2
  Secondary Education: 2
  Grade 7: 1
  Grade 8: 1
  Grade 9: 1
Location
  Florida: 1
  Mississippi: 1
Assessments and Surveys
  Gates MacGinitie Reading Tests: 1
Andreea Dutulescu; Stefan Ruseti; Denis Iorga; Mihai Dascalu; Danielle S. McNamara – Grantee Submission, 2024
The process of generating challenging and appropriate distractors for multiple-choice questions is a complex and time-consuming task. Existing methods for automated generation either struggle to propose challenging distractors or fail to effectively filter out incorrect choices that closely resemble the correct answer, share synonymous…
Descriptors: Multiple Choice Tests, Artificial Intelligence, Attention, Natural Language Processing
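The Dutulescu et al. (2024) entry above turns on filtering out candidate distractors that sit too close to the correct answer in meaning. A minimal sketch of such similarity-based filtering, assuming a sentence-transformers model and an arbitrary threshold (both illustrative choices, not the paper's method):

```python
# Minimal sketch: reject candidate distractors whose embedding is too
# similar to the correct answer. Model name and threshold are assumptions
# for illustration, not the method from the paper above.
from sentence_transformers import SentenceTransformer, util

model = SentenceTransformer("all-MiniLM-L6-v2")

def filter_distractors(answer, candidates, max_sim=0.75):
    """Keep candidates whose cosine similarity to the answer is below max_sim."""
    ans_emb = model.encode(answer, convert_to_tensor=True)
    cand_embs = model.encode(candidates, convert_to_tensor=True)
    sims = util.cos_sim(ans_emb, cand_embs)[0]
    return [c for c, s in zip(candidates, sims) if s.item() < max_sim]

print(filter_distractors("photosynthesis",
                         ["cellular respiration", "light-driven sugar synthesis"]))
```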
Olney, Andrew M. – Grantee Submission, 2022
Recently proposed multi-angle question answering models promise to perform related tasks such as question generation. However, performance on related tasks has not been thoroughly studied. We investigate a leading model called Macaw on the task of multiple choice question generation and evaluate its performance on three angles that…
Descriptors: Test Construction, Multiple Choice Tests, Test Items, Models
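Macaw (Olney, 2022, above) is a T5-based model queried through slot-style "angles": the desired output slot is listed first, followed by the given slots and their values. A rough sketch of the question-generation angle, with model size and decoding settings as illustrative choices:

```python
# Sketch of probing Macaw's question-generation angle: ask for $question$
# given an answer and context. Decoding settings are arbitrary choices.
from transformers import AutoTokenizer, AutoModelForSeq2SeqLM

tok = AutoTokenizer.from_pretrained("allenai/macaw-large")
model = AutoModelForSeq2SeqLM.from_pretrained("allenai/macaw-large")

# Output slot ($question$) first, then the given slots with values.
angle = "$question$ ; $answer$ = gray ; $context$ = Clouds block sunlight."
ids = tok(angle, return_tensors="pt").input_ids
out = model.generate(ids, max_new_tokens=40)
print(tok.decode(out[0], skip_special_tokens=True))
```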
Dragos-Georgian Corlatescu; Micah Watanabe; Stefan Ruseti; Mihai Dascalu; Danielle S. McNamara – Grantee Submission, 2024
Modeling reading comprehension processes is a critical task for Learning Analytics, as accurate models of the reading process can be used to match students to texts, identify appropriate interventions, and predict learning outcomes. This paper introduces an improved version of the Automated Model of Comprehension, version 4.0 (AMoC v4.0). AMoC has its…
Descriptors: Computer Software, Artificial Intelligence, Learning Analytics, Natural Language Processing
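The AMoC entries in this list (v4.0 here, v3.0 below, and Corlatescu, Dascalu, & McNamara, 2021, further down) all describe a graph-based model of comprehension in which text concepts and their relations are activated as reading proceeds. As a toy illustration only (AMoC's actual pipeline is far richer, covering syntax, inference, and prior knowledge), a spreading-activation pass over a small concept graph might look like:

```python
# Toy sketch: concepts as nodes, relations as weighted edges, and a
# spreading-activation pass. Everything here is an illustrative assumption,
# not the AMoC implementation.
import networkx as nx

g = nx.Graph()
g.add_weighted_edges_from([
    ("reader", "text", 1.0),
    ("text", "concept", 0.8),
    ("concept", "prior_knowledge", 0.5),
])

def spread(graph, source, decay=0.6):
    """Propagate activation from source, attenuating by edge weight * decay."""
    activation = {source: 1.0}
    frontier = [source]
    while frontier:
        node = frontier.pop()
        for nbr in graph.neighbors(node):
            a = activation[node] * decay * graph[node][nbr]["weight"]
            if a > activation.get(nbr, 0.0):
                activation[nbr] = a
                frontier.append(nbr)
    return activation

print(spread(g, "text"))
```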
Dragos Corlatescu; Micah Watanabe; Stefan Ruseti; Mihai Dascalu; Danielle S. McNamara – Grantee Submission, 2023
Reading comprehension is essential for both knowledge acquisition and memory reinforcement. Automated modeling of the comprehension process provides insights into the efficacy of specific texts as learning tools. This paper introduces an improved version of the Automated Model of Comprehension, version 3.0 (AMoC v3.0). AMoC v3.0 is based on two…
Descriptors: Reading Comprehension, Models, Concept Mapping, Graphs
Sami Baral; Eamon Worden; Wen-Chiang Lim; Zhuang Luo; Christopher Santorelli; Ashish Gurung; Neil Heffernan – Grantee Submission, 2024
The effectiveness of feedback in enhancing learning outcomes is well documented within Educational Data Mining (EDM). Prior studies have explored various methodologies for making feedback to students more effective. Recent developments in Large Language Models (LLMs) have extended their utility in enhancing automated…
Descriptors: Automation, Scoring, Computer Assisted Testing, Natural Language Processing
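The Baral et al. (2024) entry above concerns LLM-assisted automated feedback. A hypothetical sketch of the general pattern, asking a model to rate an open response against a rubric; the model name, prompt, scale, and output format are all assumptions, not the study's protocol:

```python
# Hypothetical sketch of LLM-based feedback scoring. The prompt, model,
# and 1-5 scale are illustrative assumptions, not the paper's setup.
from openai import OpenAI

client = OpenAI()  # expects OPENAI_API_KEY in the environment

def score_response(question, answer):
    prompt = (f"Question: {question}\nStudent answer: {answer}\n"
              "Rate the answer 1-5 for correctness and give one sentence "
              "of feedback. Reply as: score|feedback")
    resp = client.chat.completions.create(
        model="gpt-4o-mini",
        messages=[{"role": "user", "content": prompt}],
    )
    # Naive parsing; a real system would validate the model's reply.
    score, feedback = resp.choices[0].message.content.split("|", 1)
    return int(score.strip()), feedback.strip()
```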
Andrew M. Olney – Grantee Submission, 2023
Multiple choice questions are traditionally expensive to produce. Recent advances in large language models (LLMs) have led to fine-tuned LLMs that generate questions competitive with human-authored questions. However, the relative capabilities of ChatGPT-family models have not yet been established for this task. We present a carefully controlled…
Descriptors: Test Construction, Multiple Choice Tests, Test Items, Algorithms
Peter Organisciak; Selcuk Acar; Denis Dumas; Kelly Berthiaume – Grantee Submission, 2023
Automated scoring for divergent thinking (DT) seeks to overcome a key obstacle to creativity measurement: the effort, cost, and reliability of scoring open-ended tests. For a common test of DT, the Alternate Uses Task (AUT), the primary automated approach casts the problem as a semantic distance between a prompt and the resulting idea in a text…
Descriptors: Automation, Computer Assisted Testing, Scoring, Creative Thinking
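The Organisciak et al. (2023) entry above describes the standard semantic-distance approach to scoring the Alternate Uses Task: originality is approximated by how far a response strays from the prompt in embedding space. A sketch under the assumption of a sentence-transformers model (the embedding choice is ours, not the paper's):

```python
# Semantic-distance AUT scoring in miniature: originality ~ 1 - cos(prompt, idea).
# The embedding model is an illustrative assumption.
from sentence_transformers import SentenceTransformer, util

model = SentenceTransformer("all-MiniLM-L6-v2")

def aut_semantic_distance(prompt, idea):
    """Higher distance = the idea strays further from the prompt object."""
    emb = model.encode([prompt, idea], convert_to_tensor=True)
    return 1.0 - util.cos_sim(emb[0], emb[1]).item()

print(aut_semantic_distance("brick", "grind it into pigment for paint"))
```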
Botarleanu, Robert-Mihai; Dascalu, Mihai; Allen, Laura K.; Crossley, Scott Andrew; McNamara, Danielle S. – Grantee Submission, 2022
Automated scoring of student language is a complex task that requires systems to emulate complex and multi-faceted human evaluation criteria. Summary scoring brings an additional layer of complexity to automated scoring because it involves two texts of differing lengths that must be compared. In this study, we present our approach to automate…
Descriptors: Automation, Scoring, Documentation, Likert Scales
Razvan Paroiu; Stefan Ruseti; Mihai Dascalu; Stefan Trausan-Matu; Danielle S. McNamara – Grantee Submission, 2023
The exponential growth of scientific publications increases the effort required to identify relevant articles. Moreover, study scale is a frequent barrier to research: most studies are small or medium-scale, lack statistical power, and do not generalize well. As such, we introduce an automated method that supports the…
Descriptors: Science Education, Educational Research, Scientific and Technical Information, Journal Articles
Botarleanu, Robert-Mihai; Dascalu, Mihai; Allen, Laura K.; Crossley, Scott Andrew; McNamara, Danielle S. – Grantee Submission, 2021
Text summarization is an effective reading comprehension strategy. However, summary evaluation is complex and must account for various factors including the summary and the reference text. This study examines a corpus of approximately 3,000 summaries based on 87 reference texts, with each summary manually scored on a 4-point Likert scale.…
Descriptors: Computer Assisted Testing, Scoring, Natural Language Processing, Computer Software
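Both Botarleanu et al. summary-scoring entries (2022 above and 2021 here) compare a summary against a longer reference text and predict a human Likert rating. One common shape for this task, sketched with made-up features and data (the features, model, and examples are our assumptions, not the papers' pipeline):

```python
# Illustrative summary-scoring sketch: embed summary and source, derive
# simple features, and fit a regressor to human Likert scores.
import numpy as np
from sentence_transformers import SentenceTransformer, util
from sklearn.linear_model import Ridge

model = SentenceTransformer("all-MiniLM-L6-v2")

def features(summary, source):
    emb = model.encode([summary, source], convert_to_tensor=True)
    sim = util.cos_sim(emb[0], emb[1]).item()          # semantic overlap
    len_ratio = len(summary.split()) / max(len(source.split()), 1)
    return [sim, len_ratio]

# Toy data; real corpora pair thousands of summaries with reference texts.
pairs = [
    ("Cells make energy.", "Mitochondria generate ATP, the cell's energy currency."),
    ("Plants are green.", "Photosynthesis in chloroplasts converts light into sugars."),
]
scores = [3, 2]  # human ratings on a 1-4 Likert scale

X = np.array([features(s, t) for s, t in pairs])
reg = Ridge().fit(X, np.array(scores))
```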
Botarleanu, Robert-Mihai; Dascalu, Mihai; Crossley, Scott Andrew; McNamara, Danielle S. – Grantee Submission, 2020
A key writing skill is the capability to clearly convey desired meaning using available linguistic knowledge. Consequently, writers must select from a large array of idioms, vocabulary terms that are semantically equivalent, and discourse features that simultaneously reflect content and allow readers to grasp meaning. In many cases, a simplified…
Descriptors: Natural Language Processing, Writing Skills, Difficulty Level, Reading Comprehension
Robert-Mihai Botarleanu; Micah Watanabe; Mihai Dascalu; Scott A. Crossley; Danielle S. McNamara – Grantee Submission, 2023
Age of Acquisition (AoA) scores approximate the age at which a language speaker fully understands a word's semantic meaning and represent a quantitative measure of the relative difficulty of words in a language. AoA word lists exist across various languages, with English having the most complete lists that capture the largest percentage of the…
Descriptors: Multilingualism, English (Second Language), Second Language Learning, Second Language Instruction
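The Botarleanu et al. (2023) entry above treats Age of Acquisition as a quantitative word-difficulty measure. A minimal sketch of how AoA norms get applied in practice; the tiny lookup table is made up for illustration, while real English norms (e.g., Kuperman et al.) cover tens of thousands of words:

```python
# Minimal sketch of AoA as a word-difficulty measure. The AoA values here
# are invented for illustration; use published norms in real work.
aoa = {"dog": 3.5, "purchase": 8.2, "ubiquitous": 13.9}

def mean_aoa(text, default=11.0):
    """Average AoA over a text's words; unknown words get a default estimate."""
    words = [w.strip(".,").lower() for w in text.split()]
    return sum(aoa.get(w, default) for w in words) / max(len(words), 1)

print(mean_aoa("The dog made a purchase"))
```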
Corlatescu, Dragos-Georgian; Dascalu, Mihai; McNamara, Danielle S. – Grantee Submission, 2021
Reading comprehension is key to knowledge acquisition and to reinforcing memory for previous information. While reading, a mental representation is constructed in the reader's mind. The mental model comprises the words in the text, the relations between the words, and inferences linking to concepts in prior knowledge. The automated model of…
Descriptors: Reading Comprehension, Memory, Inferences, Syntax
Nicula, Bogdan; Perret, Cecile A.; Dascalu, Mihai; McNamara, Danielle S. – Grantee Submission, 2020
Open-ended comprehension questions are a common type of assessment used to evaluate how well students understand one or multiple documents. Our aim is to use natural language processing (NLP) to infer the level and type of inferencing within readers' answers to comprehension questions using linguistic and semantic features within their responses.…
Descriptors: Natural Language Processing, Taxonomy, Responses, Semantics
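The Nicula et al. (2020) entry above frames inference detection as classifying answers by linguistic and semantic features. A bare-bones sketch of that framing; the labels and features here are placeholders, not the paper's taxonomy:

```python
# Sketch: classify student answers into inference levels from text features.
# Labels and training data are placeholders, not the paper's taxonomy.
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.linear_model import LogisticRegression
from sklearn.pipeline import make_pipeline

answers = ["The text says rain falls.", "Rain falls because warm air rises."]
labels = ["paraphrase", "inference"]  # placeholder inference levels

clf = make_pipeline(TfidfVectorizer(), LogisticRegression())
clf.fit(answers, labels)
print(clf.predict(["Warm air rising makes clouds form."]))
```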
Marilena Panaite; Mihai Dascalu; Amy Johnson; Renu Balyan; Jianmin Dai; Danielle S. McNamara; Stefan Trausan-Matu – Grantee Submission, 2018
Intelligent Tutoring Systems (ITSs) are aimed at promoting acquisition of knowledge and skills by providing relevant and appropriate feedback during students' practice activities. ITSs for literacy instruction commonly assess typed responses using Natural Language Processing (NLP) algorithms. One step in this direction often requires building a…
Descriptors: Intelligent Tutoring Systems, Artificial Intelligence, Algorithms, Decision Making