Showing 1 to 15 of 28 results
Peer reviewed
Elisabeth Bauer; Michael Sailer; Frank Niklas; Samuel Greiff; Sven Sarbu-Rothsching; Jan M. Zottmann; Jan Kiesewetter; Matthias Stadler; Martin R. Fischer; Tina Seidel; Detlef Urhahne; Maximilian Sailer; Frank Fischer – Journal of Computer Assisted Learning, 2025
Background: Artificial intelligence, particularly natural language processing (NLP), enables automating the formative assessment of written task solutions to provide adaptive feedback automatically. A laboratory study found that, compared with static feedback (an expert solution), adaptive feedback automated through artificial neural networks…
Descriptors: Artificial Intelligence, Feedback (Response), Computer Simulation, Natural Language Processing
Peer reviewed
Steffen Steinert; Karina E. Avila; Stefan Ruzika; Jochen Kuhn; Stefan Küchemann – Smart Learning Environments, 2024
Effectively supporting students in mastering all facets of self-regulated learning is a central aim of teachers and educational researchers. Prior research has demonstrated that formative feedback is an effective way to support students during self-regulated learning. In this light, we propose the application of Large Language Models (LLMs) to…
Descriptors: Formative Evaluation, Feedback (Response), Natural Language Processing, Artificial Intelligence
Peer reviewed
Marrone, Rebecca; Cropley, David H.; Wang, Z. – Creativity Research Journal, 2023
Creativity is now accepted as a core 21st-century competency and is increasingly an explicit part of school curricula around the world. Therefore, the ability to assess creativity for both formative and summative purposes is vital. However, the "fitness-for-purpose" of creativity tests has recently come under scrutiny. Current creativity…
Descriptors: Automation, Evaluation Methods, Creative Thinking, Mathematics Education
Peer reviewed
Ariely, Moriah; Nazaretsky, Tanya; Alexandron, Giora – International Journal of Artificial Intelligence in Education, 2023
Machine learning algorithms that automatically score scientific explanations can be used to measure students' conceptual understanding, identify gaps in their reasoning, and provide them with timely and individualized feedback. This paper presents the results of a study that uses Hebrew NLP to automatically score student explanations in Biology…
Descriptors: Artificial Intelligence, Algorithms, Natural Language Processing, Hebrew
Peer reviewed
Moriah Ariely; Tanya Nazaretsky; Giora Alexandron – Journal of Research in Science Teaching, 2024
One of the core practices of science is constructing scientific explanations. However, numerous studies have shown that constructing scientific explanations poses significant challenges to students. Proper assessment of scientific explanations is costly and time-consuming, and teachers often do not have a clear definition of the educational goals…
Descriptors: Biology, Automation, Individualized Instruction, Science Instruction
Peer reviewed
Somers, Rick; Cunningham-Nelson, Samuel; Boles, Wageeh – Australasian Journal of Educational Technology, 2021
In this study, we applied natural language processing (NLP) techniques, within an educational environment, to evaluate their usefulness for automated assessment of students' conceptual understanding from their short answer responses. Assessing understanding provides insight into and feedback on students' conceptual understanding, which is often…
Descriptors: Natural Language Processing, Student Evaluation, Automation, Feedback (Response)
Peer reviewed
Keith Cochran; Clayton Cohn; Peter Hastings; Noriko Tomuro; Simon Hughes – International Journal of Artificial Intelligence in Education, 2024
To succeed in the information age, students need to learn to communicate their understanding of complex topics effectively. This is reflected in both educational standards and standardized tests. To improve their writing ability for highly structured domains like scientific explanations, students need feedback that accurately reflects the…
Descriptors: Science Process Skills, Scientific Literacy, Scientific Concepts, Concept Formation
Peer reviewed
Gombert, Sebastian; Di Mitri, Daniele; Karademir, Onur; Kubsch, Marcus; Kolbe, Hannah; Tautz, Simon; Grimm, Adrian; Bohm, Isabell; Neumann, Knut; Drachsler, Hendrik – Journal of Computer Assisted Learning, 2023
Background: Formative assessments are needed to enable monitoring how student knowledge develops throughout a unit. Constructed response items which require learners to formulate their own free-text responses are well suited for testing their active knowledge. However, assessing such constructed responses in an automated fashion is a complex task…
Descriptors: Coding, Energy, Scientific Concepts, Formative Evaluation
Peer reviewed
Carme Grimalt-Álvaro; Mireia Usart – Journal of Computing in Higher Education, 2024
Sentiment Analysis (SA), a technique based on applying artificial intelligence to analyze textual data in natural language, can help to characterize interactions between students and teachers and improve learning through timely, personalized feedback, but its use in education is still scarce. This systematic literature review explores how SA has…
Descriptors: Formative Evaluation, Higher Education, Artificial Intelligence, Natural Language Processing
Peer reviewed
Chahna Gonsalves – Journal of Learning Development in Higher Education, 2023
Multiple-choice quizzes (MCQs) are a popular form of assessment. A rapid shift to online assessment during the COVID-19 pandemic in 2020 drove the uptake of MCQs, yet limited invigilation and wide access to material on the internet allow students to solve the questions via internet search. ChatGPT, an artificial intelligence (AI) agent trained on…
Descriptors: Artificial Intelligence, Technology Uses in Education, Natural Language Processing, Multiple Choice Tests
Peer reviewed
Araz Zirar – Review of Education, 2023
Recent developments in language models, such as ChatGPT, have sparked debate. These tools can help, for example, dyslexic people, to write formal emails from a prompt and can be used by students to generate assessed work. Proponents argue that language models enhance the student experience and academic achievement. Those concerned argue that…
Descriptors: Artificial Intelligence, Technology Uses in Education, Natural Language Processing, Models
Öncel, Püren; Flynn, Lauren E.; Sonia, Allison N.; Barker, Kennis E.; Lindsay, Grace C.; McClure, Caleb M.; McNamara, Danielle S.; Allen, Laura K. – Grantee Submission, 2021
Automated Writing Evaluation systems have been developed to help students improve their writing skills through the automated delivery of both summative and formative feedback. These systems have demonstrated strong potential in a variety of educational contexts; however, they remain limited in their personalization and scope. The purpose of the…
Descriptors: Computer Assisted Instruction, Writing Evaluation, Formative Evaluation, Summative Evaluation
Peer reviewed
Lynette Hazelton; Jessica Nastal; Norbert Elliot; Jill Burstein; Daniel F. McCaffrey – Journal of Response to Writing, 2021
In writing studies research, automated writing evaluation technology is typically examined for a specific, often narrow purpose: to evaluate a particular writing improvement measure, to mine data for changes in writing performance, or to demonstrate the effectiveness of a single technology and accompanying validity arguments. This article adopts a…
Descriptors: Formative Evaluation, Writing Evaluation, Automation, Natural Language Processing
Peer reviewed
Vittorini, Pierpaolo; Menini, Stefano; Tonelli, Sara – International Journal of Artificial Intelligence in Education, 2021
Massive open online courses (MOOCs) provide hundreds of students with teaching materials, assessment tools, and collaborative instruments. The assessment activity, in particular, is demanding in terms of both time and effort; thus, the use of artificial intelligence can be useful to address and reduce the time and effort required. This paper…
Descriptors: Artificial Intelligence, Formative Evaluation, Summative Evaluation, Data