Showing 1 to 15 of 24 results
Peer reviewed
Mike Perkins; Jasper Roe; Darius Postma; James McGaughran; Don Hickerson – Journal of Academic Ethics, 2024
This study explores the capability of academic staff assisted by the Turnitin Artificial Intelligence (AI) detection tool to identify the use of AI-generated content in university assessments. Twenty-two experimental submissions were produced using OpenAI's ChatGPT tool, with prompting techniques used to reduce the likelihood of AI detectors…
Descriptors: Artificial Intelligence, Student Evaluation, Identification, Natural Language Processing
Peer reviewed
Samah AlKhuzaey; Floriana Grasso; Terry R. Payne; Valentina Tamma – International Journal of Artificial Intelligence in Education, 2024
Designing and constructing pedagogical tests that contain items (i.e. questions) which measure various types of skills for different levels of students equitably is a challenging task. Teachers and item writers alike need to ensure that the quality of assessment materials is consistent, if student evaluations are to be objective and effective.…
Descriptors: Test Items, Test Construction, Difficulty Level, Prediction
Peer reviewed | PDF on ERIC
Abdulkadir Kara; Eda Saka Simsek; Serkan Yildirim – Asian Journal of Distance Education, 2024
Evaluation is an essential component of the learning process when discerning learning situations. Assessing natural language responses, like short answers, takes time and effort. Artificial intelligence and natural language processing advancements have led to more studies on automatically grading short answers. In this review, we systematically…
Descriptors: Automation, Natural Language Processing, Artificial Intelligence, Grading
Peer reviewed
Shilan Shafiei – Language Testing in Asia, 2024
The present study aimed to develop an analytic assessment rubric for the consecutive interpreting course in the educational setting in the Iranian academic context. To this end, the general procedure of rubric development, including data preparation, selection, and refinement, was applied. The performance criteria were categorized into content,…
Descriptors: Scoring Rubrics, Translation, Language Processing, Second Languages
Peer reviewed
Margaret Bearman; Joanna Tai; Phillip Dawson; David Boud; Rola Ajjawi – Assessment & Evaluation in Higher Education, 2024
Generative artificial intelligence (AI) has rapidly increased capacity for producing textual, visual and auditory outputs, yet there are ongoing concerns regarding the quality of those outputs. There is an urgent need to develop students' evaluative judgement - the capability to judge the quality of work of self and others - in recognition of this…
Descriptors: Evaluative Thinking, Skill Development, Artificial Intelligence, Technology Uses in Education
Peer reviewed
Matthew Landers – Higher Education for the Future, 2025
This article presents a brief overview of the state-of-the-art in large language models (LLMs) like ChatGPT and discusses the difficulties that these technologies create for educators with regard to assessment. Making use of the 'arms race' metaphor, this article argues that there are no simple solutions to the 'AI problem'. Rather, this author…
Descriptors: Ethics, Cheating, Plagiarism, Artificial Intelligence
Peer reviewed
Sami Baral; Eamon Worden; Wen-Chiang Lim; Zhuang Luo; Christopher Santorelli; Ashish Gurung; Neil Heffernan – Grantee Submission, 2024
The effectiveness of feedback in enhancing learning outcomes is well documented within Educational Data Mining (EDM). Various prior studies have explored methodologies to enhance the effectiveness of feedback to students in various ways. Recent developments in Large Language Models (LLMs) have extended their utility in enhancing automated…
Descriptors: Automation, Scoring, Computer Assisted Testing, Natural Language Processing
Peer reviewed
Lae Lae Shwe; Sureena Matayong; Suntorn Witosurapot – Education and Information Technologies, 2024
Multiple Choice Questions (MCQs) are an important evaluation technique for both examinations and learning activities. However, the manual creation of questions is time-consuming and challenging for teachers. Hence, there is a notable demand for an Automatic Question Generation (AQG) system. Several systems have been created for this aim, but the…
Descriptors: Difficulty Level, Computer Assisted Testing, Adaptive Testing, Multiple Choice Tests
Peer reviewed
Alexandra Farazouli; Teresa Cerratto-Pargman; Klara Bolander-Laksov; Cormac McGrath – Assessment & Evaluation in Higher Education, 2024
AI chatbots have recently fuelled debate regarding education practices in higher education institutions worldwide. Focusing on Generative AI and ChatGPT in particular, our study examines how AI chatbots impact university teachers' assessment practices, exploring teachers' perceptions about how ChatGPT performs in response to home examination…
Descriptors: Artificial Intelligence, Natural Language Processing, Student Evaluation, Educational Change
Peer reviewed
Jiahui Luo – Assessment & Evaluation in Higher Education, 2024
This study offers a critical examination of university policies developed to address recent challenges presented by generative AI (GenAI) to higher education assessment. Drawing on Bacchi's 'What's the problem represented to be' (WPR) framework, we analysed the GenAI policies of 20 world-leading universities to explore what are considered problems…
Descriptors: Artificial Intelligence, Educational Policy, College Students, Student Evaluation
Peer reviewed | PDF on ERIC
Saida Ulfa; Ence Surahman; Agus Wedi; Izzul Fatawi; Rex Bringula – Knowledge Management & E-Learning, 2025
Online assessment is an important component of online learning today. An online summary assessment is an example of an open-ended question, offering the advantage of probing students' understanding of the learning materials. However, grading students' summary writing is challenging due to the time-consuming process of evaluating students'…
Descriptors: Knowledge Management, Automation, Documentation, Feedback (Response)
Peer reviewed
Dirk H. R. Spennemann; Jessica Biles; Lachlan Brown; Matthew F. Ireland; Laura Longmore; Clare L. Singh; Anthony Wallis; Catherine Ward – Interactive Technology and Smart Education, 2024
Purpose: The use of generative artificial intelligence (genAi) language models such as ChatGPT to write assignment text is well established. This paper aims to assess to what extent genAi can be used to obtain guidance on how to avoid detection when commissioning and submitting contract-written assignments and how workable the offered solutions…
Descriptors: Artificial Intelligence, Natural Language Processing, Technology Uses in Education, Cheating
Peer reviewed
Moriah Ariely; Tanya Nazaretsky; Giora Alexandron – Journal of Research in Science Teaching, 2024
One of the core practices of science is constructing scientific explanations. However, numerous studies have shown that constructing scientific explanations poses significant challenges to students. Proper assessment of scientific explanations is costly and time-consuming, and teachers often do not have a clear definition of the educational goals…
Descriptors: Biology, Automation, Individualized Instruction, Science Instruction
Peer reviewed
Héctor J. Pijeira-Díaz; Shashank Subramanya; Janneke van de Pol; Anique de Bruin – Journal of Computer Assisted Learning, 2024
Background: When learning causal relations, completing causal diagrams enhances students' comprehension judgements to some extent. To potentially boost this effect, advances in natural language processing (NLP) enable real-time formative feedback based on the automated assessment of students' diagrams, which can involve the correctness of both the…
Descriptors: Learning Analytics, Automation, Student Evaluation, Causal Models
Peer reviewed
Sebastian Gombert; Aron Fink; Tornike Giorgashvili; Ioana Jivet; Daniele Di Mitri; Jane Yau; Andreas Frey; Hendrik Drachsler – International Journal of Artificial Intelligence in Education, 2024
Various studies have empirically demonstrated the value of highly informative feedback for enhancing learner success. However, digital educational technology has yet to catch up, as automated feedback is often shallow. This paper presents a case study on implementing a pipeline that provides German-speaking university students enrolled in an…
Descriptors: Automation, Student Evaluation, Essays, Feedback (Response)