Publication Date
In 2025: 2
Since 2024: 22
Since 2021 (last 5 years): 44
Since 2016 (last 10 years): 54
Since 2006 (last 20 years): 66
Descriptor
Natural Language Processing: 67
Student Evaluation: 67
Artificial Intelligence: 38
Computer Assisted Testing: 27
Feedback (Response): 24
Foreign Countries: 24
Technology Uses in Education: 20
Automation: 17
College Students: 16
Evaluation Methods: 16
Models: 13
Author
Allen, Laura K.: 4
Danielle S. McNamara: 2
Eshuis, Jannes: 2
Giesbers, Bas: 2
Jordan, Sally: 2
Koper, Rob: 2
McNamara, Danielle S.: 2
Rod D. Roscoe: 2
Waterink, Wim: 2
Ying Fang: 2
van Bruggen, Jan: 2
Audience
Administrators: 1
Researchers: 1
Students: 1
Teachers: 1
Location
Netherlands: 5
Spain: 5
United Kingdom: 4
Australia: 3
Turkey: 3
Asia: 2
Brazil: 2
China: 2
Egypt: 2
Finland: 2
Germany: 2
Assessments and Surveys
Gates MacGinitie Reading Tests: 2
Test of English as a Foreign…: 2
Michigan Test of English…: 1
Program for International…: 1
Test of English for…: 1
Mike Perkins; Jasper Roe; Darius Postma; James McGaughran; Don Hickerson – Journal of Academic Ethics, 2024
This study explores the capability of academic staff assisted by the Turnitin Artificial Intelligence (AI) detection tool to identify the use of AI-generated content in university assessments. Twenty-two different experimental submissions were produced using OpenAI's ChatGPT tool, with prompting techniques used to reduce the likelihood of AI detectors…
Descriptors: Artificial Intelligence, Student Evaluation, Identification, Natural Language Processing
Samah AlKhuzaey; Floriana Grasso; Terry R. Payne; Valentina Tamma – International Journal of Artificial Intelligence in Education, 2024
Designing and constructing pedagogical tests that contain items (i.e., questions) which measure various types of skills for different levels of students equitably is a challenging task. Teachers and item writers alike need to ensure that the quality of assessment materials is consistent if student evaluations are to be objective and effective.…
Descriptors: Test Items, Test Construction, Difficulty Level, Prediction
Abdulkadir Kara; Eda Saka Simsek; Serkan Yildirim – Asian Journal of Distance Education, 2024
Evaluation is an essential component of the learning process when discerning learning situations. Assessing natural language responses, like short answers, takes time and effort. Artificial intelligence and natural language processing advancements have led to more studies on automatically grading short answers. In this review, we systematically…
Descriptors: Automation, Natural Language Processing, Artificial Intelligence, Grading
Sharma, Harsh; Mathur, Rohan; Chintala, Tejas; Dhanalakshmi, Samiappan; Senthil, Ramalingam – Education and Information Technologies, 2023
Examination assessments undertaken by educational institutions are pivotal, as they are among the fundamental steps in determining students' understanding and achievements for a distinct subject or course. Questions must be framed on the topics to meet the learning objectives and assess the student's capability in a particular subject. The…
Descriptors: Taxonomy, Student Evaluation, Test Items, Questioning Techniques
Qiao, Chen; Hu, Xiao – IEEE Transactions on Learning Technologies, 2023
Free text answers to short questions can reflect students' mastery of concepts and their relationships relevant to learning objectives. However, automating the assessment of free text answers has been challenging due to the complexity of natural language. Existing studies often predict the scores of free text answers in a "black box"…
Descriptors: Computer Assisted Testing, Automation, Test Items, Semantics
Margaret Bearman; Joanna Tai; Phillip Dawson; David Boud; Rola Ajjawi – Assessment & Evaluation in Higher Education, 2024
Generative artificial intelligence (AI) has rapidly increased capacity for producing textual, visual and auditory outputs, yet there are ongoing concerns regarding the quality of those outputs. There is an urgent need to develop students' evaluative judgement - the capability to judge the quality of work of self and others - in recognition of this…
Descriptors: Evaluative Thinking, Skill Development, Artificial Intelligence, Technology Uses in Education
Ying Fang; Rod D. Roscoe; Danielle S. McNamara – Grantee Submission, 2023
Artificial Intelligence (AI) based assessments are commonly used in a variety of settings including business, healthcare, policing, manufacturing, and education. In education, AI-based assessments undergird intelligent tutoring systems as well as many tools used to evaluate students and, in turn, guide learning and instruction. This chapter…
Descriptors: Artificial Intelligence, Computer Assisted Testing, Student Evaluation, Evaluation Methods
Matthew Landers – Higher Education for the Future, 2025
This article presents a brief overview of the state-of-the-art in large language models (LLMs) like ChatGPT and discusses the difficulties that these technologies create for educators with regard to assessment. Making use of the 'arms race' metaphor, this article argues that there are no simple solutions to the 'AI problem'. Rather, this author…
Descriptors: Ethics, Cheating, Plagiarism, Artificial Intelligence
Fan Ouyang; Tuan Anh Dinh; Weiqi Xu – Journal for STEM Education Research, 2023
Artificial intelligence (AI), as an emerging technology, has been widely used in STEM education to promote educational assessment. Although AI-driven educational assessment has the potential to assess students' learning automatically and reduce the workload of instructors, there is still a lack of review work holistically examining the field…
Descriptors: Educational Assessment, Artificial Intelligence, STEM Education, Academic Achievement
Sami Baral; Eamon Worden; Wen-Chiang Lim; Zhuang Luo; Christopher Santorelli; Ashish Gurung; Neil Heffernan – Grantee Submission, 2024
The effectiveness of feedback in enhancing learning outcomes is well documented within Educational Data Mining (EDM). Various prior studies have explored methodologies to enhance the effectiveness of feedback to students in various ways. Recent developments in Large Language Models (LLMs) have extended their utility in enhancing automated…
Descriptors: Automation, Scoring, Computer Assisted Testing, Natural Language Processing
Lae Lae Shwe; Sureena Matayong; Suntorn Witosurapot – Education and Information Technologies, 2024
Multiple Choice Questions (MCQs) are an important evaluation technique for both examinations and learning activities. However, the manual creation of questions is time-consuming and challenging for teachers. Hence, there is a notable demand for an Automatic Question Generation (AQG) system. Several systems have been created for this aim, but the…
Descriptors: Difficulty Level, Computer Assisted Testing, Adaptive Testing, Multiple Choice Tests
Alexandra Farazouli; Teresa Cerratto-Pargman; Klara Bolander-Laksov; Cormac McGrath – Assessment & Evaluation in Higher Education, 2024
AI chatbots have recently fuelled debate regarding education practices in higher education institutions worldwide. Focusing on Generative AI and ChatGPT in particular, our study examines how AI chatbots impact university teachers' assessment practices, exploring teachers' perceptions about how ChatGPT performs in response to home examination…
Descriptors: Artificial Intelligence, Natural Language Processing, Student Evaluation, Educational Change
Jiahui Luo – Assessment & Evaluation in Higher Education, 2024
This study offers a critical examination of university policies developed to address recent challenges presented by generative AI (GenAI) to higher education assessment. Drawing on Bacchi's 'What's the problem represented to be' (WPR) framework, we analysed the GenAI policies of 20 world-leading universities to explore what are considered problems…
Descriptors: Artificial Intelligence, Educational Policy, College Students, Student Evaluation
Botelho, Anthony; Baral, Sami; Erickson, John A.; Benachamardi, Priyanka; Heffernan, Neil T. – Journal of Computer Assisted Learning, 2023
Background: Teachers often rely on the use of open-ended questions to assess students' conceptual understanding of assigned content. Particularly in the context of mathematics, teachers use these types of questions to gain insight into the processes and strategies adopted by students in solving mathematical problems beyond what is possible through…
Descriptors: Natural Language Processing, Artificial Intelligence, Computer Assisted Testing, Mathematics Tests
Saida Ulfa; Ence Surahman; Agus Wedi; Izzul Fatawi; Rex Bringula – Knowledge Management & E-Learning, 2025
Online assessment is one of the important factors in online learning today. An online summary assessment is an example of an open-ended question, offering the advantage of probing students' understanding of the learning materials. However, grading students' summary writings is challenging due to the time-consuming process of evaluating students'…
Descriptors: Knowledge Management, Automation, Documentation, Feedback (Response)