Elisabeth Bauer; Michael Sailer; Frank Niklas; Samuel Greiff; Sven Sarbu-Rothsching; Jan M. Zottmann; Jan Kiesewetter; Matthias Stadler; Martin R. Fischer; Tina Seidel; Detlef Urhahne; Maximilian Sailer; Frank Fischer – Journal of Computer Assisted Learning, 2025
Background: Artificial intelligence, particularly natural language processing (NLP), enables automating the formative assessment of written task solutions to provide adaptive feedback automatically. A laboratory study found that, compared with static feedback (an expert solution), adaptive feedback automated through artificial neural networks…
Descriptors: Artificial Intelligence, Feedback (Response), Computer Simulation, Natural Language Processing
Steffen Steinert; Karina E. Avila; Stefan Ruzika; Jochen Kuhn; Stefan Küchemann – Smart Learning Environments, 2024
Effectively supporting students in mastering all facets of self-regulated learning is a central aim of teachers and educational researchers. Prior research has demonstrated that formative feedback is an effective way to support students during self-regulated learning. In this light, we propose the application of Large Language Models (LLMs) to…
Descriptors: Formative Evaluation, Feedback (Response), Natural Language Processing, Artificial Intelligence
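The general approach proposed above — using an LLM to generate formative feedback — can be gestured at with a minimal prompt-construction sketch. The template, function name, and wording below are invented for illustration; no actual LLM API is called, and this is not the authors' implementation.

```python
# Hypothetical prompt-construction sketch for LLM-generated formative
# feedback. The template text is a made-up example, not from the paper.

FEEDBACK_PROMPT = (
    "You are a physics tutor. A student answered:\n"
    "{answer}\n"
    "The reference solution is:\n"
    "{reference}\n"
    "Give one sentence of formative feedback."
)

def build_prompt(answer: str, reference: str) -> str:
    """Fill the template; the result would be sent to an LLM."""
    return FEEDBACK_PROMPT.format(answer=answer, reference=reference)

prompt = build_prompt("v = s * t", "v = s / t")
print("reference solution" in prompt)  # True
```

In practice the filled prompt would be passed to a chat-completion endpoint and the returned sentence shown to the student.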
Saida Ulfa; Ence Surahman; Agus Wedi; Izzul Fatawi; Rex Bringula – Knowledge Management & E-Learning, 2025
Online assessment is one of the important factors in online learning today. An online summary assessment is an example of an open-ended question, offering the advantage of probing students' understanding of the learning materials. However, grading students' summary writings is challenging due to the time-consuming process of evaluating students'…
Descriptors: Knowledge Management, Automation, Documentation, Feedback (Response)
Moriah Ariely; Tanya Nazaretsky; Giora Alexandron – Journal of Research in Science Teaching, 2024
One of the core practices of science is constructing scientific explanations. However, numerous studies have shown that constructing scientific explanations poses significant challenges to students. Proper assessment of scientific explanations is costly and time-consuming, and teachers often do not have a clear definition of the educational goals…
Descriptors: Biology, Automation, Individualized Instruction, Science Instruction
Somers, Rick; Cunningham-Nelson, Samuel; Boles, Wageeh – Australasian Journal of Educational Technology, 2021
In this study, we applied natural language processing (NLP) techniques, within an educational environment, to evaluate their usefulness for automated assessment of students' conceptual understanding from their short answer responses. Assessing understanding provides insight into and feedback on students' conceptual understanding, which is often…
Descriptors: Natural Language Processing, Student Evaluation, Automation, Feedback (Response)
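A common baseline for the kind of automated short-answer assessment described above is lexical overlap with a reference answer. The sketch below is illustrative only (not the technique Somers et al. evaluated), assuming a simple Jaccard similarity over word tokens.

```python
# Illustrative baseline: score a short answer by lexical overlap with a
# reference answer. Real systems typically add stemming, synonyms, or
# trained models on top of signals like this.

def tokenize(text: str) -> set[str]:
    """Lowercase and split into a set of word tokens."""
    return set(text.lower().split())

def overlap_score(student_answer: str, reference: str) -> float:
    """Jaccard similarity between student and reference answers (0..1)."""
    s, r = tokenize(student_answer), tokenize(reference)
    if not s or not r:
        return 0.0
    return len(s & r) / len(s | r)

reference = "force equals mass times acceleration"
print(round(overlap_score("force is mass times acceleration", reference), 2))  # 0.67
```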
Keith Cochran; Clayton Cohn; Peter Hastings; Noriko Tomuro; Simon Hughes – International Journal of Artificial Intelligence in Education, 2024
To succeed in the information age, students need to learn to communicate their understanding of complex topics effectively. This is reflected in both educational standards and standardized tests. To improve their writing ability for highly structured domains like scientific explanations, students need feedback that accurately reflects the…
Descriptors: Science Process Skills, Scientific Literacy, Scientific Concepts, Concept Formation
Carme Grimalt-Álvaro; Mireia Usart – Journal of Computing in Higher Education, 2024
Sentiment Analysis (SA), a technique based on applying artificial intelligence to analyze textual data in natural language, can help to characterize interactions between students and teachers and improve learning through timely, personalized feedback, but its use in education is still scarce. This systematic literature review explores how SA has…
Descriptors: Formative Evaluation, Higher Education, Artificial Intelligence, Natural Language Processing
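The simplest form of the sentiment analysis surveyed above is lexicon-based: count positive and negative words and take the sign. The lexicon and thresholding below are invented for illustration; the studies in the review generally use trained models rather than hand-built word lists.

```python
# Minimal lexicon-based sentiment sketch. The word lists are made-up
# examples, not drawn from any cited study.

POSITIVE = {"clear", "good", "improved", "helpful", "strong"}
NEGATIVE = {"confusing", "weak", "missing", "unclear", "wrong"}

def sentiment(comment: str) -> int:
    """Return +1 (positive), -1 (negative), or 0 (neutral) by word counts."""
    words = comment.lower().split()
    score = sum(w in POSITIVE for w in words) - sum(w in NEGATIVE for w in words)
    return (score > 0) - (score < 0)

print(sentiment("your argument is clear and your evidence is strong"))  # 1
```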
Araz Zirar – Review of Education, 2023
Recent developments in language models, such as ChatGPT, have sparked debate. These tools can help, for example, dyslexic people, to write formal emails from a prompt and can be used by students to generate assessed work. Proponents argue that language models enhance the student experience and academic achievement. Those concerned argue that…
Descriptors: Artificial Intelligence, Technology Uses in Education, Natural Language Processing, Models
Vittorini, Pierpaolo; Menini, Stefano; Tonelli, Sara – International Journal of Artificial Intelligence in Education, 2021
Massive open online courses (MOOCs) provide hundreds of students with teaching materials, assessment tools, and collaborative instruments. The assessment activity, in particular, is demanding in terms of both time and effort; thus, the use of artificial intelligence can be useful to address and reduce the time and effort required. This paper…
Descriptors: Artificial Intelligence, Formative Evaluation, Summative Evaluation, Data
L. Hannah; E. E. Jang; M. Shah; V. Gupta – Language Assessment Quarterly, 2023
Machines have a long-demonstrated ability to find statistical relationships between qualities of texts and surface-level linguistic indicators of writing. More recently, unlocked by artificial intelligence, the potential of using machines to identify content-related writing trait criteria has been uncovered. This development is significant,…
Descriptors: Validity, Automation, Scoring, Writing Assignments
Zhang, H.; Magooda, A.; Litman, D.; Correnti, R.; Wang, E.; Matsumura, L. C.; Howe, E.; Quintana, R. – Grantee Submission, 2019
Writing a good essay typically involves students revising an initial paper draft after receiving feedback. We present eRevise, a web-based writing and revising environment that uses natural language processing features generated for rubric-based essay scoring to trigger formative feedback messages regarding students' use of evidence in…
Descriptors: Formative Evaluation, Essays, Writing (Composition), Revision (Written Composition)
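The triggering logic described above — NLP features selecting a rubric-based feedback message — can be sketched in the spirit of eRevise with a toy rule. The cue phrases, thresholds, and messages below are invented for illustration and are not the actual system's features or rubric.

```python
# Hypothetical sketch of rubric-triggered formative feedback: count
# evidence-related cue phrases in an essay and pick a message. All cues,
# thresholds, and messages here are illustrative inventions.

EVIDENCE_CUES = ("for example", "according to", "the text says", "the author states")

def evidence_feedback(essay: str) -> str:
    """Pick a feedback message based on how many evidence cues appear."""
    text = essay.lower()
    hits = sum(text.count(cue) for cue in EVIDENCE_CUES)
    if hits == 0:
        return "Try adding specific evidence from the text."
    if hits == 1:
        return "Good start -- add one more piece of evidence."
    return "Nice use of evidence; now explain how it supports your claim."

print(evidence_feedback("According to the text, rain forests are shrinking."))
# Good start -- add one more piece of evidence.
```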
Allen, Laura K.; Likens, Aaron D.; McNamara, Danielle S. – Grantee Submission, 2018
The assessment of writing proficiency generally includes analyses of the specific linguistic and rhetorical features contained in the singular essays produced by students. However, researchers have recently proposed that an individual's ability to flexibly adapt the linguistic properties of their writing might more closely capture writing skill.…
Descriptors: Writing Evaluation, Writing Tests, Computer Assisted Testing, Writing Skills
Crossley, Scott A.; Kyle, Kristopher; McNamara, Danielle S. – Grantee Submission, 2015
This study investigates the relative efficacy of using linguistic micro-features, the aggregation of such features, and a combination of micro-features and aggregated features in developing automatic essay scoring (AES) models. Although the use of aggregated features is widespread in AES systems (e.g., e-rater; Intellimetric), very little…
Descriptors: Essays, Scoring, Feedback (Response), Writing Evaluation
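The micro-feature vs. aggregated-feature distinction the study tests can be illustrated with toy features: individual per-essay counts versus a single composite. The features and the mean-based aggregation below are simplified examples, not the study's feature set or its AES models.

```python
# Illustrative contrast: micro-features (individual linguistic counts)
# vs. one aggregated feature (their mean). Toy features for illustration.

def micro_features(essay: str) -> dict[str, float]:
    """Compute simple per-essay linguistic micro-features."""
    words = essay.split()
    sentences = [s for s in essay.split(".") if s.strip()]
    return {
        "word_count": float(len(words)),
        "avg_word_len": sum(len(w) for w in words) / max(len(words), 1),
        "sentence_count": float(len(sentences)),
    }

def aggregated_feature(feats: dict[str, float]) -> float:
    """Aggregate micro-features into one composite value (here, the mean)."""
    return sum(feats.values()) / len(feats)

feats = micro_features("Essays vary. Features capture that variation.")
print(sorted(feats))  # ['avg_word_len', 'sentence_count', 'word_count']
```

An AES model would then be fit on either the micro-features, the aggregate, or both, which is the comparison the abstract describes.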
Nguyen, Huy; Xiong, Wenting; Litman, Diane – International Journal of Artificial Intelligence in Education, 2017
A peer-review system that automatically evaluates and provides formative feedback on free-text feedback comments of students was iteratively designed and evaluated in college and high-school classrooms. Classroom assignments required students to write paper drafts and submit them to a peer-review system. When student peers later submitted feedback…
Descriptors: Computer Uses in Education, Computer Mediated Communication, Feedback (Response), Peer Evaluation
Allen, Laura K.; Likens, Aaron D.; McNamara, Danielle S. – Grantee Submission, 2018
The assessment of argumentative writing generally includes analyses of the specific linguistic and rhetorical features contained in the individual essays produced by students. However, researchers have recently proposed that an individual's ability to flexibly adapt the linguistic properties of their writing may more accurately capture their…
Descriptors: Writing (Composition), Persuasive Discourse, Essays, Language Usage