Showing 1 to 15 of 78 results
Peer reviewed
Direct link
Mike Perkins; Jasper Roe; Darius Postma; James McGaughran; Don Hickerson – Journal of Academic Ethics, 2024
This study explores the capability of academic staff, assisted by the Turnitin Artificial Intelligence (AI) detection tool, to identify the use of AI-generated content in university assessments. Twenty-two different experimental submissions were produced using OpenAI's ChatGPT tool, with prompting techniques used to reduce the likelihood of AI detectors…
Descriptors: Artificial Intelligence, Student Evaluation, Identification, Natural Language Processing
Peer reviewed
PDF on ERIC Download full text
Abdulkadir Kara; Eda Saka Simsek; Serkan Yildirim – Asian Journal of Distance Education, 2024
Evaluation is an essential component of the learning process for discerning learning situations. Assessing natural language responses, like short answers, takes time and effort. Advances in artificial intelligence and natural language processing have led to more studies on automatically grading short answers. In this review, we systematically…
Descriptors: Automation, Natural Language Processing, Artificial Intelligence, Grading
Peer reviewed
Direct link
Shilan Shafiei – Language Testing in Asia, 2024
The present study aimed to develop an analytic assessment rubric for the consecutive interpreting course in the educational setting in the Iranian academic context. To this end, the general procedure of rubric development, including data preparation, selection, and refinement, was applied. The performance criteria were categorized into content,…
Descriptors: Scoring Rubrics, Translation, Language Processing, Second Languages
Peer reviewed
Direct link
Sami Baral; Eamon Worden; Wen-Chiang Lim; Zhuang Luo; Christopher Santorelli; Ashish Gurung; Neil Heffernan – Grantee Submission, 2024
The effectiveness of feedback in enhancing learning outcomes is well documented within Educational Data Mining (EDM). Various prior studies have explored methodologies to enhance the effectiveness of feedback to students in various ways. Recent developments in Large Language Models (LLMs) have extended their utility in enhancing automated…
Descriptors: Automation, Scoring, Computer Assisted Testing, Natural Language Processing
Peer reviewed
Direct link
Lae Lae Shwe; Sureena Matayong; Suntorn Witosurapot – Education and Information Technologies, 2024
Multiple Choice Questions (MCQs) are an important evaluation technique for both examinations and learning activities. However, the manual creation of questions is time-consuming and challenging for teachers. Hence, there is a notable demand for an Automatic Question Generation (AQG) system. Several systems have been created for this aim, but the…
Descriptors: Difficulty Level, Computer Assisted Testing, Adaptive Testing, Multiple Choice Tests
Peer reviewed
Direct link
Alexandra Farazouli; Teresa Cerratto-Pargman; Klara Bolander-Laksov; Cormac McGrath – Assessment & Evaluation in Higher Education, 2024
AI chatbots have recently fuelled debate regarding education practices in higher education institutions worldwide. Focusing on Generative AI and ChatGPT in particular, our study examines how AI chatbots impact university teachers' assessment practices, exploring teachers' perceptions about how ChatGPT performs in response to home examination…
Descriptors: Artificial Intelligence, Natural Language Processing, Student Evaluation, Educational Change
Peer reviewed
Direct link
Jiahui Luo – Assessment & Evaluation in Higher Education, 2024
This study offers a critical examination of university policies developed to address recent challenges presented by generative AI (GenAI) to higher education assessment. Drawing on Bacchi's 'What's the problem represented to be' (WPR) framework, we analysed the GenAI policies of 20 world-leading universities to explore what are considered problems…
Descriptors: Artificial Intelligence, Educational Policy, College Students, Student Evaluation
Peer reviewed
Direct link
Botelho, Anthony; Baral, Sami; Erickson, John A.; Benachamardi, Priyanka; Heffernan, Neil T. – Journal of Computer Assisted Learning, 2023
Background: Teachers often rely on the use of open-ended questions to assess students' conceptual understanding of assigned content. Particularly in the context of mathematics, teachers use these types of questions to gain insight into the processes and strategies adopted by students in solving mathematical problems beyond what is possible through…
Descriptors: Natural Language Processing, Artificial Intelligence, Computer Assisted Testing, Mathematics Tests
Peer reviewed
PDF on ERIC Download full text
Saida Ulfa; Ence Surahman; Agus Wedi; Izzul Fatawi; Rex Bringula – Knowledge Management & E-Learning, 2025
Online assessment is one of the important factors in online learning today. An online summary assessment is an example of an open-ended question, offering the advantage of probing students' understanding of the learning materials. However, grading students' summary writings is challenging due to the time-consuming process of evaluating students'…
Descriptors: Knowledge Management, Automation, Documentation, Feedback (Response)
Peer reviewed
Direct link
Dirk H. R. Spennemann; Jessica Biles; Lachlan Brown; Matthew F. Ireland; Laura Longmore; Clare L. Singh; Anthony Wallis; Catherine Ward – Interactive Technology and Smart Education, 2024
Purpose: The use of generative artificial intelligence (genAi) language models such as ChatGPT to write assignment text is well established. This paper aims to assess to what extent genAi can be used to obtain guidance on how to avoid detection when commissioning and submitting contract-written assignments and how workable the offered solutions…
Descriptors: Artificial Intelligence, Natural Language Processing, Technology Uses in Education, Cheating
Peer reviewed
Direct link
Moriah Ariely; Tanya Nazaretsky; Giora Alexandron – Journal of Research in Science Teaching, 2024
One of the core practices of science is constructing scientific explanations. However, numerous studies have shown that constructing scientific explanations poses significant challenges to students. Proper assessment of scientific explanations is costly and time-consuming, and teachers often do not have a clear definition of the educational goals…
Descriptors: Biology, Automation, Individualized Instruction, Science Instruction
Peer reviewed
Direct link
Emerson, Andrew; Min, Wookhee; Azevedo, Roger; Lester, James – British Journal of Educational Technology, 2023
Game-based learning environments hold significant promise for facilitating learning experiences that are both effective and engaging. To support individualised learning and support proactive scaffolding when students are struggling, game-based learning environments should be able to accurately predict student knowledge at early points in students'…
Descriptors: Game Based Learning, Natural Language Processing, Prediction, Student Evaluation
Peer reviewed
Direct link
C. H., Dhawaleswar Rao; Saha, Sujan Kumar – IEEE Transactions on Learning Technologies, 2023
Multiple-choice questions (MCQs) play a significant role in educational assessment. Automatic MCQ generation has been an active research area for years, and many systems have been developed for MCQ generation. Still, we could not find any system that generates accurate MCQs from school-level textbook contents that are useful in real examinations.…
Descriptors: Multiple Choice Tests, Computer Assisted Testing, Automation, Test Items
Peer reviewed
Direct link
Somers, Rick; Cunningham-Nelson, Samuel; Boles, Wageeh – Australasian Journal of Educational Technology, 2021
In this study, we applied natural language processing (NLP) techniques, within an educational environment, to evaluate their usefulness for automated assessment of students' conceptual understanding from their short answer responses. Assessing understanding provides insight into and feedback on students' conceptual understanding, which is often…
Descriptors: Natural Language Processing, Student Evaluation, Automation, Feedback (Response)
Peer reviewed
Direct link
Héctor J. Pijeira-Díaz; Shashank Subramanya; Janneke van de Pol; Anique de Bruin – Journal of Computer Assisted Learning, 2024
Background: When learning causal relations, completing causal diagrams enhances students' comprehension judgements to some extent. To potentially boost this effect, advances in natural language processing (NLP) enable real-time formative feedback based on the automated assessment of students' diagrams, which can involve the correctness of both the…
Descriptors: Learning Analytics, Automation, Student Evaluation, Causal Models