Showing 1 to 15 of 21 results
Peer reviewed
Direct link
Qiao, Chen; Hu, Xiao – IEEE Transactions on Learning Technologies, 2023
Free text answers to short questions can reflect students' mastery of concepts and their relationships relevant to learning objectives. However, automating the assessment of free text answers has been challenging due to the complexity of natural language. Existing studies often predict the scores of free text answers in a "black box"…
Descriptors: Computer Assisted Testing, Automation, Test Items, Semantics
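A generic baseline for the kind of free-text answer scoring this entry concerns is semantic similarity to a reference answer. The sketch below scores invented student answers by TF-IDF cosine similarity with scikit-learn; the reference answer, the sample answers, and the similarity approach are illustrative assumptions, not the article's model.

```python
# Sketch: score a short free-text answer by TF-IDF cosine similarity to a
# reference answer. Reference and answers are invented; this is a generic
# baseline, not the method proposed in the article above.
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.metrics.pairwise import cosine_similarity

reference = "Photosynthesis converts light energy into chemical energy stored in glucose."
answers = [
    "Plants turn light into chemical energy in the form of glucose.",
    "It is how plants drink water.",
]

vec = TfidfVectorizer().fit([reference] + answers)
ref_vec = vec.transform([reference])
for ans in answers:
    sim = cosine_similarity(ref_vec, vec.transform([ans]))[0, 0]  # 0..1
    print(f"{sim:.2f}  {ans}")
```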
Peer reviewed
Direct link
Lixiang Yan; Lele Sha; Linxuan Zhao; Yuheng Li; Roberto Martinez-Maldonado; Guanliang Chen; Xinyu Li; Yueqiao Jin; Dragan Gašević – British Journal of Educational Technology, 2024
Educational technology innovations leveraging large language models (LLMs) have shown the potential to automate the laborious process of generating and analysing textual content. While various innovations have been developed to automate a range of educational tasks (eg, question generation, feedback provision, and essay grading), there are…
Descriptors: Educational Technology, Artificial Intelligence, Natural Language Processing, Educational Innovation
Peer reviewed
Direct link
Botelho, Anthony; Baral, Sami; Erickson, John A.; Benachamardi, Priyanka; Heffernan, Neil T. – Journal of Computer Assisted Learning, 2023
Background: Teachers often rely on the use of open-ended questions to assess students' conceptual understanding of assigned content. Particularly in the context of mathematics, teachers use these types of questions to gain insight into the processes and strategies adopted by students in solving mathematical problems beyond what is possible through…
Descriptors: Natural Language Processing, Artificial Intelligence, Computer Assisted Testing, Mathematics Tests
Peer reviewed
Direct link
Carme Grimalt-Álvaro; Mireia Usart – Journal of Computing in Higher Education, 2024
Sentiment Analysis (SA), a technique based on applying artificial intelligence to analyze textual data in natural language, can help to characterize interactions between students and teachers and improve learning through timely, personalized feedback, but its use in education is still scarce. This systematic literature review explores how SA has…
Descriptors: Formative Evaluation, Higher Education, Artificial Intelligence, Natural Language Processing
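A minimal sketch of lexicon-based sentiment analysis applied to student comments, of the general kind surveyed in this review, is shown below using NLTK's VADER analyzer; the example comments and the choice of VADER are assumptions for illustration, not the tools used in the included studies.

```python
# Sketch: lexicon-based sentiment analysis of student feedback comments.
# Uses NLTK's VADER lexicon; the comments below are invented.
import nltk
from nltk.sentiment.vader import SentimentIntensityAnalyzer

nltk.download("vader_lexicon", quiet=True)

comments = [
    "The weekly quizzes really helped me keep up with the material.",
    "Feedback on assignments arrived too late to be useful.",
]

analyzer = SentimentIntensityAnalyzer()
for text in comments:
    scores = analyzer.polarity_scores(text)  # neg/neu/pos plus compound in [-1, 1]
    label = ("positive" if scores["compound"] >= 0.05
             else "negative" if scores["compound"] <= -0.05 else "neutral")
    print(f"{label:8s} {scores['compound']:+.2f}  {text}")
```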
Peer reviewed
Direct link
Gillani, Nabeel; Eynon, Rebecca; Chiabaut, Catherine; Finkel, Kelsey – Educational Technology & Society, 2023
Recent advances in Artificial Intelligence (AI) have sparked renewed interest in its potential to improve education. However, AI is a loose umbrella term that refers to a collection of methods, capabilities, and limitations--many of which are often not explicitly articulated by researchers, education technology companies, or other AI developers.…
Descriptors: Artificial Intelligence, Technology Uses in Education, Educational Technology, Educational Benefits
Peer reviewed
Direct link
Christopher Dann; Petrea Redmond; Melissa Fanshawe; Alice Brown; Seyum Getenet; Thanveer Shaik; Xiaohui Tao; Linda Galligan; Yan Li – Australasian Journal of Educational Technology, 2024
Making sense of student feedback and engagement is important for informing pedagogical decision-making and broader strategies related to student retention and success in higher education courses. Although learning analytics and other strategies are employed within courses to understand student engagement, the interpretation of data for larger data…
Descriptors: Artificial Intelligence, Learner Engagement, Feedback (Response), Decision Making
Peer reviewed
Direct link
Rybinski, Krzysztof; Kopciuszewska, Elzbieta – Assessment & Evaluation in Higher Education, 2021
This article presents the first-ever big data study of the student evaluation of teaching (SET) using artificial intelligence (AI). We train natural language processing (NLP) models on 1.6 million student evaluations from the US and the UK. We address two research questions: (1) are these models able to predict student ratings from the student…
Descriptors: Artificial Intelligence, Technology Uses in Education, Student Evaluation of Teacher Performance, Natural Language Processing
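Predicting numeric ratings from evaluation text, as this study does at scale, can be illustrated with a small regression sketch; the toy comments, ratings, and TF-IDF plus ridge regression setup are assumptions for illustration, not the NLP models the authors trained on the 1.6 million evaluations.

```python
# Sketch: predict a numeric course rating from free-text evaluation comments.
# Toy data and the TF-IDF + ridge regression pipeline are illustrative only.
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.linear_model import Ridge
from sklearn.pipeline import make_pipeline

comments = [
    "Engaging lectures and clear explanations",
    "Disorganised course, assessment criteria were unclear",
    "Helpful instructor, responded quickly to questions",
    "Too much content, little support when we struggled",
]
ratings = [5.0, 2.0, 4.5, 2.5]  # invented 1-5 ratings paired with the comments

model = make_pipeline(TfidfVectorizer(ngram_range=(1, 2)), Ridge(alpha=1.0))
model.fit(comments, ratings)

print(model.predict(["Clear structure and very supportive teaching staff"]))
```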
Peer reviewed
PDF on ERIC: Download full text
Charlotte N. Gunawardena; Yan Chen; Nick Flor; Damien Sánchez – Online Learning, 2023
Gunawardena et al.'s (1997) Interaction Analysis Model (IAM) is one of the most frequently employed frameworks to guide the qualitative analysis of social construction of knowledge online. However, qualitative analysis is time consuming, and precludes immediate feedback to revise online courses while being delivered. To expedite analysis with a…
Descriptors: Models, Learning Processes, Knowledge Level, Online Courses
Peer reviewed
PDF on ERIC: Download full text
Stone, Cathlyn; Donnelly, Patrick J.; Dale, Meghan; Capello, Sarah; Kelly, Sean; Godley, Amanda; D'Mello, Sidney K. – International Educational Data Mining Society, 2019
We examine the ability of supervised text classification models to identify several discourse properties from teachers' speech with an eye for providing teachers with meaningful automated feedback about the quality of their classroom discourse. We collected audio recordings from 28 teachers from 10 schools in 164 authentic classroom sessions,…
Descriptors: Classification, Classroom Communication, Audio Equipment, Feedback (Response)
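A minimal sketch of the supervised text-classification setup described above might look like the following; the teacher utterances, the binary "authentic question" property, and the linear model are assumptions for illustration, not the authors' trained models or coding scheme.

```python
# Sketch: classify teacher utterances for one discourse property
# (e.g., whether a question is "authentic"). Data and labels are invented.
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.linear_model import LogisticRegression
from sklearn.model_selection import cross_val_score
from sklearn.pipeline import make_pipeline

utterances = [
    "Why do you think the author chose that ending?",
    "What page is the vocabulary list on?",
    "How would you have solved this differently?",
    "Everyone open your books to chapter three.",
]
is_authentic_question = [1, 0, 1, 0]

clf = make_pipeline(TfidfVectorizer(), LogisticRegression(max_iter=1000))
scores = cross_val_score(clf, utterances, is_authentic_question, cv=2)
print("cross-validated accuracy:", scores.mean())
```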
Peer reviewed
PDF on ERIC: Download full text
Kopp, Kristopher J.; Johnson, Amy M.; Crossley, Scott A.; McNamara, Danielle S. – Grantee Submission, 2017
An NLP algorithm was developed to assess question quality to inform feedback on questions generated by students within iSTART (an intelligent tutoring system that teaches reading strategies). A corpus of 4575 questions was coded using a four-level taxonomy. NLP indices were calculated for each question and machine learning was used to predict…
Descriptors: Reading Comprehension, Reading Instruction, Intelligent Tutoring Systems, Reading Strategies
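The general pattern this abstract describes, computing NLP indices per question and training a classifier against a coded quality taxonomy, is sketched below; the particular features, example questions, quality codes, and random-forest model are assumptions for illustration, not the iSTART algorithm.

```python
# Sketch: hand-crafted NLP indices per student question plus a classifier
# over a four-level quality taxonomy. Features, data, and codes are invented.
import numpy as np
from sklearn.ensemble import RandomForestClassifier

def indices(question: str) -> list:
    tokens = question.split()
    return [
        len(tokens),                                        # question length
        sum(len(t) for t in tokens) / max(len(tokens), 1),  # mean word length
        float(question.lower().startswith(("why", "how"))), # deep-question cue
    ]

questions = [
    "What is photosynthesis?",
    "Why does the cell need energy from glucose?",
    "How would the process change without sunlight?",
    "Is this in the text?",
]
quality = [1, 3, 4, 1]  # invented codes on a 1-4 taxonomy

X = np.array([indices(q) for q in questions])
clf = RandomForestClassifier(n_estimators=50, random_state=0).fit(X, quality)
print(clf.predict(np.array([indices("Why is chlorophyll green?")])))
```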
Stefan Ruseti; Mihai Dascalu; Amy M. Johnson; Renu Balyan; Kristopher J. Kopp; Danielle S. McNamara – Grantee Submission, 2018
This study assesses the extent to which machine learning techniques can be used to predict question quality. An algorithm based on textual complexity indices was previously developed to assess question quality to provide feedback on questions generated by students within iSTART (an intelligent tutoring system that teaches reading strategies). In…
Descriptors: Questioning Techniques, Artificial Intelligence, Networks, Classification
Peer reviewed
Direct link
Nguyen, Huy; Xiong, Wenting; Litman, Diane – International Journal of Artificial Intelligence in Education, 2017
A peer-review system that automatically evaluates and provides formative feedback on free-text feedback comments of students was iteratively designed and evaluated in college and high-school classrooms. Classroom assignments required students to write paper drafts and submit them to a peer-review system. When student peers later submitted feedback…
Descriptors: Computer Uses in Education, Computer Mediated Communication, Feedback (Response), Peer Evaluation
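As a toy illustration of automatically screening peer feedback comments and prompting the reviewer, the sketch below flags comments that lack a location cue or a suggestion; the cue lists, the localization/suggestion framing, and the prompts are assumptions for illustration, not the published system's detection models.

```python
# Sketch: rule-based screening of peer-review comments for two cues that
# feedback-quality research often looks for: localization and suggestions.
# Cue word lists and example comments are invented.
LOCATION_CUES = ("page", "paragraph", "section", "line", "sentence", "intro")
SUGGESTION_CUES = ("you could", "you should", "consider", "try", "maybe add")

def screen(comment: str) -> list:
    text = comment.lower()
    prompts = []
    if not any(cue in text for cue in LOCATION_CUES):
        prompts.append("Point to where in the draft the issue occurs.")
    if not any(cue in text for cue in SUGGESTION_CUES):
        prompts.append("Suggest a concrete way to fix it.")
    return prompts

print(screen("The argument is weak."))
print(screen("In paragraph 2 the claim is unsupported; consider citing a source."))
```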
ElMessiry, Adel Magdi – ProQuest LLC, 2016
Complaining is a fundamental human characteristic that has prevailed throughout the ages. We normally complain about something that went wrong. Patient complaints are no exception; they focus on problems that occurred during the episode of care. The Institute of Medicine estimated that each year thousands of patients die due to medical errors. The…
Descriptors: Patients, Health Services, Medical Services, Hospitals
Rus, Vasile; Moldovan, Cristian; Niraula, Nobal; Graesser, Arthur C. – International Educational Data Mining Society, 2012
In this paper we address the important task of automated discovery of speech act categories in dialogue-based, multi-party educational games. Speech acts are important in dialogue-based educational systems because they help infer the student speaker's intentions (the task of speech act classification) which in turn is crucial to providing adequate…
Descriptors: Educational Games, Feedback (Response), Classification, Expertise
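Because the abstract frames this as automated discovery of speech act categories, a minimal unsupervised sketch is to cluster utterance vectors; the game-dialogue utterances, TF-IDF representation, and k-means setup below are illustrative assumptions, not the authors' method.

```python
# Sketch: unsupervised grouping of dialogue utterances into speech-act-like
# clusters via TF-IDF vectors and k-means. Utterances and k are invented.
from sklearn.cluster import KMeans
from sklearn.feature_extraction.text import TfidfVectorizer

utterances = [
    "Can you explain that step again?",
    "Great job, that answer is correct.",
    "I think the answer is twelve.",
    "Could you give me a hint?",
    "Nice work on that last problem.",
    "My answer is that force equals mass times acceleration.",
]

X = TfidfVectorizer().fit_transform(utterances)
labels = KMeans(n_clusters=3, n_init=10, random_state=0).fit_predict(X)
for cluster, text in sorted(zip(labels, utterances)):
    print(cluster, text)
```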
Peer reviewed
Direct link
D'Mello, Sidney K.; Dowell, Nia; Graesser, Arthur – Journal of Experimental Psychology: Applied, 2011
This study addresses the question of whether learning differs when students speak versus type their responses when interacting with intelligent tutoring systems with natural language dialogues. Theoretical bases exist for three contrasting hypotheses. The "speech facilitation" hypothesis predicts that spoken input will "increase" learning,…
Descriptors: Intelligent Tutoring Systems, Prior Learning, Natural Language Processing, Tutoring