Showing 1 to 15 of 128 results
Peer reviewed
Direct link
Maria Goldshtein; Jaclyn Ocumpaugh; Andrew Potter; Rod D. Roscoe – Grantee Submission, 2024
As language technologies have become more sophisticated and prevalent, there have been increasing concerns about bias in natural language processing (NLP). Such work often focuses on the effects of bias rather than its sources. In contrast, this paper discusses how normative language assumptions and ideologies influence a range of automated language…
Descriptors: Language Attitudes, Computational Linguistics, Computer Software, Natural Language Processing
Peer reviewed
Arun-Balajiee Lekshmi-Narayanan; Priti Oli; Jeevan Chapagain; Mohammad Hassany; Rabin Banjade; Vasile Rus – Grantee Submission, 2024
Worked examples, which present explained code for solving typical programming problems, are among the most popular types of learning content in programming classes. Most approaches and tools for presenting these examples to students are based on line-by-line explanations of the example code. However, instructors rarely have time to provide…
Descriptors: Coding, Computer Science Education, Computational Linguistics, Artificial Intelligence
Peer reviewed
PDF on ERIC
Olney, Andrew M. – Grantee Submission, 2021
This paper explores a general approach to paraphrase generation using a pre-trained seq2seq model fine-tuned using a back-translated anatomy and physiology textbook. Human ratings indicate that the paraphrase model generally preserved meaning and grammaticality/fluency: 70% of meaning ratings were above 75, and 40% of paraphrases were considered…
Descriptors: Translation, Language Processing, Error Analysis (Language), Grammar
Peer reviewed
Raquel G. Alhama; Ruthe Foushee; Dan Byrne; Allyson Ettinger; Susan Goldin-Meadow; Afra Alishahi – Grantee Submission, 2023
Having heard "a pimwit," English speakers assume that "the pimwit" is also possible. This type of productivity is attributed to syntactic categories such as NOUN and DETERMINER, but the key question is how humans become endowed with these categories in the first place. We propose a novel approach that combines…
Descriptors: English, Nouns, Child Language, Native Language
Peer reviewed
PDF on ERIC
Jianing Zhou; Ziheng Zeng; Hongyu Gong; Suma Bhat – Grantee Submission, 2022
Idiomatic expressions (IEs) play an essential role in natural language. In this paper, we study the task of idiomatic sentence paraphrasing (ISP), which aims to paraphrase a sentence with an IE by replacing the IE with its literal paraphrase. The lack of large scale corpora with idiomatic-literal parallel sentences is a primary challenge for this…
Descriptors: Language Patterns, Sentences, Language Processing, Phrase Structure
Peer reviewed
PDF on ERIC
Zhongdi Wu; Eric Larson; Makoto Sano; Doris Baker; Nathan Gage; Akihito Kamata – Grantee Submission, 2023
In this investigation we propose new machine learning methods for automated scoring models that predict vocabulary acquisition in science and social studies for second-grade English language learners, based upon free-form spoken responses. We evaluate performance on an existing dataset and use transfer learning from a large pre-trained language…
Descriptors: Prediction, Vocabulary Development, English (Second Language), Second Language Learning
Peer reviewed
Priti Oli; Rabin Banjade; Jeevan Chapagain; Vasile Rus – Grantee Submission, 2023
This paper systematically explores how Large Language Models (LLMs) generate explanations of code examples of the type used in intro-to-programming courses. As we show, the nature of code explanations generated by LLMs varies considerably based on the wording of the prompt, the target code examples being explained, the programming language, the…
Descriptors: Computational Linguistics, Programming, Computer Science Education, Programming Languages
Ruseti, Stefan; Dascalu, Maria-Dorinela; Corlatescu, Dragos-Georgian; Dascalu, Mihai; Trausan-Matu, Stefan; McNamara, Danielle S. – Grantee Submission, 2021
Dialogism is a philosophical theory centered on the idea that life involves a dialogue among multiple voices in a continuous exchange and interaction. Considering human language, different ideas or points of view take the form of voices, which spread throughout any discourse and influence it. From a computational point of view, voices can be…
Descriptors: Dialogs (Language), Computational Linguistics, Semantics, Models
Li, Haiying; Graesser, Art C. – Grantee Submission, 2020
This study investigated the impact of conversational agent formality on the quality of summaries and formality of written summaries during the training session and on posttest in a trialog-based intelligent tutoring system (ITS). During training, participants learned summarization strategies with the guidance of conversational agents who spoke one…
Descriptors: Intelligent Tutoring Systems, Writing Instruction, Writing Skills, Language Styles
Botarleanu, Robert-Mihai; Dascalu, Mihai; Watanabe, Micah; McNamara, Danielle S.; Crossley, Scott Andrew – Grantee Submission, 2021
The ability to objectively quantify the complexity of a text can be a useful indicator of how likely learners at a given level are to comprehend it. Before creating more complex models for assessing text difficulty, we note that the basic building blocks of a text are its words; inherently, a text's overall difficulty is greatly influenced by the complexity of…
Descriptors: Multilingualism, Language Acquisition, Age, Models
Ruthe Foushee; Dan Byrne; Marisa Casillas; Susan Goldin-Meadow – Grantee Submission, 2022
Linguistic alignment--the contingent reuse of our interlocutors' language at all levels of linguistic structure--pervades human dialogue. Here, we design unique measures to capture the degree of linguistic alignment between interlocutors' linguistic representations at three levels of structure: lexical, syntactic, and semantic. We track these…
Descriptors: Semantics, Syntax, Vocabulary Skills, Models
Peer reviewed
PDF on ERIC
Priti Oli; Rabin Banjade; Arun Balajiee Lekshmi Narayanan; Peter Brusilovsky; Vasile Rus – Grantee Submission, 2023
Self-efficacy, or the belief in one's ability to accomplish a task or achieve a goal, can significantly influence the effectiveness of various instructional methods to induce learning gains. The importance of self-efficacy is particularly pronounced in complex subjects like Computer Science, where students with high self-efficacy are more likely…
Descriptors: Computer Science Education, College Students, Self Efficacy, Programming
Peer reviewed
Direct link
Andreea Dutulescu; Stefan Ruseti; Denis Iorga; Mihai Dascalu; Danielle S. McNamara – Grantee Submission, 2024
The process of generating challenging and appropriate distractors for multiple-choice questions is complex and time-consuming. Existing methods for automated generation have limitations in proposing challenging distractors, or they fail to effectively filter out incorrect choices that closely resemble the correct answer, share synonymous…
Descriptors: Multiple Choice Tests, Artificial Intelligence, Attention, Natural Language Processing
Peer reviewed
Yang Zhong; Mohamed Elaraby; Diane Litman; Ahmed Ashraf Butt; Muhsin Menekse – Grantee Submission, 2024
This paper introduces REFLECTSUMM, a novel summarization dataset specifically designed for summarizing students' reflective writing. The goal of REFLECTSUMM is to facilitate developing and evaluating novel summarization techniques tailored to real-world scenarios with little training data, with potential implications in the opinion summarization…
Descriptors: Documentation, Writing (Composition), Reflection, Metadata
Cioaca, Valentin Sergiu; Dascalu, Mihai; McNamara, Danielle S. – Grantee Submission, 2021
Numerous approaches have been introduced to automate the process of text summarization, but only a few can be easily adapted to multiple languages. This paper introduces a multilingual text processing pipeline integrated into the open-source "ReaderBench" framework, which can be retrofitted to cover more than 50 languages. While considering the…
Descriptors: Documentation, Computer Software, Open Source Technology, Algorithms