Showing 1 to 15 of 28 results
Peer reviewed
Direct link
Qiao, Chen; Hu, Xiao – IEEE Transactions on Learning Technologies, 2023
Free text answers to short questions can reflect students' mastery of concepts and their relationships relevant to learning objectives. However, automating the assessment of free text answers has been challenging due to the complexity of natural language. Existing studies often predict the scores of free text answers in a "black box"…
Descriptors: Computer Assisted Testing, Automation, Test Items, Semantics
Peer reviewed
Direct link
Jussi S. Jauhiainen; Agustín Garagorry Guerra – Innovations in Education and Teaching International, 2025
The study highlights ChatGPT-4's potential in educational settings for the evaluation of university students' open-ended written examination responses. ChatGPT-4 evaluated 54 written responses, ranging from 24 to 256 words in English. It assessed each response using five criteria and assigned a grade on a six-point scale from fail to excellent,…
Descriptors: Artificial Intelligence, Technology Uses in Education, Student Evaluation, Writing Evaluation
Ying Fang; Rod D. Roscoe; Danielle S. McNamara – Grantee Submission, 2023
Artificial Intelligence (AI) based assessments are commonly used in a variety of settings including business, healthcare, policing, manufacturing, and education. In education, AI-based assessments undergird intelligent tutoring systems as well as many tools used to evaluate students and, in turn, guide learning and instruction. This chapter…
Descriptors: Artificial Intelligence, Computer Assisted Testing, Student Evaluation, Evaluation Methods
Peer reviewed
Direct link
Sami Baral; Eamon Worden; Wen-Chiang Lim; Zhuang Luo; Christopher Santorelli; Ashish Gurung; Neil Heffernan – Grantee Submission, 2024
The effectiveness of feedback in enhancing learning outcomes is well documented within Educational Data Mining (EDM). Prior research has explored various methodologies for enhancing the effectiveness of feedback to students. Recent developments in Large Language Models (LLMs) have extended their utility in enhancing automated…
Descriptors: Automation, Scoring, Computer Assisted Testing, Natural Language Processing
Peer reviewed
Direct link
Lae Lae Shwe; Sureena Matayong; Suntorn Witosurapot – Education and Information Technologies, 2024
Multiple Choice Questions (MCQs) are an important evaluation technique for both examinations and learning activities. However, the manual creation of questions is time-consuming and challenging for teachers. Hence, there is a notable demand for an Automatic Question Generation (AQG) system. Several systems have been created for this aim, but the…
Descriptors: Difficulty Level, Computer Assisted Testing, Adaptive Testing, Multiple Choice Tests
Peer reviewed
Direct link
Botelho, Anthony; Baral, Sami; Erickson, John A.; Benachamardi, Priyanka; Heffernan, Neil T. – Journal of Computer Assisted Learning, 2023
Background: Teachers often rely on the use of open-ended questions to assess students' conceptual understanding of assigned content. Particularly in the context of mathematics, teachers use these types of questions to gain insight into the processes and strategies adopted by students in solving mathematical problems beyond what is possible through…
Descriptors: Natural Language Processing, Artificial Intelligence, Computer Assisted Testing, Mathematics Tests
Peer reviewed
PDF on ERIC Download full text
Saida Ulfa; Ence Surahman; Agus Wedi; Izzul Fatawi; Rex Bringula – Knowledge Management & E-Learning, 2025
Online assessment is one of the important factors in online learning today. An online summary assessment is an example of an open-ended question, offering the advantage of probing students' understanding of the learning materials. However, grading students' written summaries is challenging due to the time-consuming process of evaluating students'…
Descriptors: Knowledge Management, Automation, Documentation, Feedback (Response)
Peer reviewed
Direct link
C. H., Dhawaleswar Rao; Saha, Sujan Kumar – IEEE Transactions on Learning Technologies, 2023
Multiple-choice questions (MCQs) play a significant role in educational assessment. Automatic MCQ generation has been an active research area for years, and many systems have been developed for it. Still, we could not find any system that generates accurate MCQs from school-level textbook contents that are useful in real examinations.…
Descriptors: Multiple Choice Tests, Computer Assisted Testing, Automation, Test Items
Peer reviewed
Direct link
Somers, Rick; Cunningham-Nelson, Samuel; Boles, Wageeh – Australasian Journal of Educational Technology, 2021
In this study, we applied natural language processing (NLP) techniques, within an educational environment, to evaluate their usefulness for automated assessment of students' conceptual understanding from their short answer responses. Assessing understanding provides insight into and feedback on students' conceptual understanding, which is often…
Descriptors: Natural Language Processing, Student Evaluation, Automation, Feedback (Response)
Binglin Chen – ProQuest LLC, 2022
Assessment is a key component of education. Routine grading of students' work, however, is time consuming. Automating the grading process allows instructors to spend more of their time helping their students learn and engaging their students with more open-ended, creative activities. One way to automate grading is through computer-based…
Descriptors: College Students, STEM Education, Student Evaluation, Grading
Peer reviewed
Direct link
Alexander Stanoyevitch – Discover Education, 2024
Online education, while not a new phenomenon, underwent a monumental shift during the COVID-19 pandemic, pushing educators and students alike into the uncharted waters of full-time digital learning. With this shift came renewed concerns about the integrity of online assessments. Amidst a landscape rapidly being reshaped by online exam/homework…
Descriptors: Computer Assisted Testing, Student Evaluation, Artificial Intelligence, Electronic Learning
Peer reviewed
PDF on ERIC Download full text
Lu, Chang; Cutumisu, Maria – International Educational Data Mining Society, 2021
Digitalization and automation of test administration, score reporting, and feedback provision have the potential to benefit large-scale and formative assessments. Many studies on automated essay scoring (AES) and feedback generation systems were published in the last decade, but few connected AES and feedback generation within a unified framework.…
Descriptors: Learning Processes, Automation, Computer Assisted Testing, Scoring
Peer reviewed
PDF on ERIC Download full text
Nejdet Karadag – Journal of Educational Technology and Online Learning, 2023
The purpose of this study is to examine the impact of artificial intelligence (AI) on online assessment in the context of opportunities and threats based on the literature. To this end, 19 articles related to the AI tool ChatGPT and online assessment were analysed through rapid literature review. In the content analysis, the themes of "AI's…
Descriptors: Artificial Intelligence, Computer Assisted Testing, Natural Language Processing, Grading
Guerrero, Tricia A.; Wiley, Jennifer – Grantee Submission, 2019
Teachers may wish to use open-ended learning activities and tests, but they are burdensome to assess compared to forced-choice instruments. At the same time, forced-choice assessments suffer from issues of guessing (when used as tests) and may not encourage valuable behaviors of construction and generation of understanding (when used as learning…
Descriptors: Computer Assisted Testing, Student Evaluation, Introductory Courses, Psychology
Peer reviewed
Direct link
Gerard, Libby; Kidron, Ady; Linn, Marcia C. – International Journal of Computer-Supported Collaborative Learning, 2019
This paper illustrates how the combination of teacher and computer guidance can strengthen collaborative revision and identifies opportunities for teacher guidance in a computer-supported collaborative learning environment. We took advantage of natural language processing tools embedded in an online, collaborative environment to automatically…
Descriptors: Computer Assisted Testing, Student Evaluation, Science Tests, Scoring