Showing 1 to 15 of 26 results
Peer reviewed
Qiao, Chen; Hu, Xiao – IEEE Transactions on Learning Technologies, 2023
Free text answers to short questions can reflect students' mastery of concepts and their relationships relevant to learning objectives. However, automating the assessment of free text answers has been challenging due to the complexity of natural language. Existing studies often predict the scores of free text answers in a "black box"…
Descriptors: Computer Assisted Testing, Automation, Test Items, Semantics
Ying Fang; Rod D. Roscoe; Danielle S. McNamara – Grantee Submission, 2023
Artificial Intelligence (AI) based assessments are commonly used in a variety of settings including business, healthcare, policing, manufacturing, and education. In education, AI-based assessments undergird intelligent tutoring systems as well as many tools used to evaluate students and, in turn, guide learning and instruction. This chapter…
Descriptors: Artificial Intelligence, Computer Assisted Testing, Student Evaluation, Evaluation Methods
Peer reviewed
Sami Baral; Eamon Worden; Wen-Chiang Lim; Zhuang Luo; Christopher Santorelli; Ashish Gurung; Neil Heffernan – Grantee Submission, 2024
The effectiveness of feedback in enhancing learning outcomes is well documented within Educational Data Mining (EDM). Prior research has explored various methodologies for making feedback to students more effective. Recent developments in Large Language Models (LLMs) have extended their utility in enhancing automated…
Descriptors: Automation, Scoring, Computer Assisted Testing, Natural Language Processing
Peer reviewed
Lae Lae Shwe; Sureena Matayong; Suntorn Witosurapot – Education and Information Technologies, 2024
Multiple Choice Questions (MCQs) are an important evaluation technique for both examinations and learning activities. However, the manual creation of questions is time-consuming and challenging for teachers. Hence, there is a notable demand for an Automatic Question Generation (AQG) system. Several systems have been created for this aim, but the…
Descriptors: Difficulty Level, Computer Assisted Testing, Adaptive Testing, Multiple Choice Tests
Peer reviewed
Botelho, Anthony; Baral, Sami; Erickson, John A.; Benachamardi, Priyanka; Heffernan, Neil T. – Journal of Computer Assisted Learning, 2023
Background: Teachers often rely on open-ended questions to assess students' conceptual understanding of assigned content. Particularly in the context of mathematics, teachers use these types of questions to gain insight into the processes and strategies adopted by students in solving mathematical problems beyond what is possible through…
Descriptors: Natural Language Processing, Artificial Intelligence, Computer Assisted Testing, Mathematics Tests
Peer reviewed
C. H., Dhawaleswar Rao; Saha, Sujan Kumar – IEEE Transactions on Learning Technologies, 2023
Multiple-choice questions (MCQs) play a significant role in educational assessment. Automatic MCQ generation has been an active research area for years, and many systems have been developed for it. Still, we could not find any system that generates accurate MCQs from school-level textbook content that are useful in real examinations.…
Descriptors: Multiple Choice Tests, Computer Assisted Testing, Automation, Test Items
Peer reviewed
Somers, Rick; Cunningham-Nelson, Samuel; Boles, Wageeh – Australasian Journal of Educational Technology, 2021
In this study, we applied natural language processing (NLP) techniques, within an educational environment, to evaluate their usefulness for automated assessment of students' conceptual understanding from their short answer responses. Assessing understanding provides insight into and feedback on students' conceptual understanding, which is often…
Descriptors: Natural Language Processing, Student Evaluation, Automation, Feedback (Response)
Binglin Chen – ProQuest LLC, 2022
Assessment is a key component of education. Routine grading of students' work, however, is time consuming. Automating the grading process allows instructors to spend more of their time helping their students learn and engaging their students with more open-ended, creative activities. One way to automate grading is through computer-based…
Descriptors: College Students, STEM Education, Student Evaluation, Grading
Peer reviewed
Alexander Stanoyevitch – Discover Education, 2024
Online education, while not a new phenomenon, underwent a monumental shift during the COVID-19 pandemic, pushing educators and students alike into the uncharted waters of full-time digital learning. With this shift came renewed concerns about the integrity of online assessments. Amidst a landscape rapidly being reshaped by online exam/homework…
Descriptors: Computer Assisted Testing, Student Evaluation, Artificial Intelligence, Electronic Learning
Peer reviewed
PDF on ERIC
Lu, Chang; Cutumisu, Maria – International Educational Data Mining Society, 2021
Digitalization and automation of test administration, score reporting, and feedback provision have the potential to benefit large-scale and formative assessments. Many studies on automated essay scoring (AES) and feedback generation systems were published in the last decade, but few connected AES and feedback generation within a unified framework.…
Descriptors: Learning Processes, Automation, Computer Assisted Testing, Scoring
Peer reviewed
PDF on ERIC
Nejdet Karadag – Journal of Educational Technology and Online Learning, 2023
The purpose of this study is to examine the impact of artificial intelligence (AI) on online assessment in terms of opportunities and threats, based on the literature. To this end, 19 articles related to the AI tool ChatGPT and online assessment were analysed through a rapid literature review. In the content analysis, the themes of "AI's…
Descriptors: Artificial Intelligence, Computer Assisted Testing, Natural Language Processing, Grading
Guerrero, Tricia A.; Wiley, Jennifer – Grantee Submission, 2019
Teachers may wish to use open-ended learning activities and tests, but they are burdensome to assess compared to forced-choice instruments. At the same time, forced-choice assessments suffer from issues of guessing (when used as tests) and may not encourage valuable behaviors of construction and generation of understanding (when used as learning…
Descriptors: Computer Assisted Testing, Student Evaluation, Introductory Courses, Psychology
Peer reviewed
Gerard, Libby; Kidron, Ady; Linn, Marcia C. – International Journal of Computer-Supported Collaborative Learning, 2019
This paper illustrates how the combination of teacher and computer guidance can strengthen collaborative revision and identifies opportunities for teacher guidance in a computer-supported collaborative learning environment. We took advantage of natural language processing tools embedded in an online, collaborative environment to automatically…
Descriptors: Computer Assisted Testing, Student Evaluation, Science Tests, Scoring
Peer reviewed
PDF on ERIC
Benli, Ibrahim; Ismailova, Rita – International Technology and Education Journal, 2018
In the 21st century, characterized as the Information Age, quick access to information and knowledge is vital to the development of individuals and societies. By applying technological innovations in the field of education, information societies can secure a lasting place in a globalizing world. Distance…
Descriptors: Foreign Countries, Distance Education, Evaluation Methods, Measurement Techniques
Peer reviewed
PDF on ERIC
Lang, David; Stenhaug, Ben; Kizilcec, Rene – Grantee Submission, 2019
This research evaluates the psychometric properties of short-answer response items under a variety of grading rules in the context of a mobile learning platform in Africa. This work has three main findings. First, we introduce the concept of a differential device function (DDF), a type of differential item function that stems from the device a…
Descriptors: Foreign Countries, Psychometrics, Test Items, Test Format