Publication Date
  In 2025: 28
  Since 2024: 79
  Since 2021 (last 5 years): 149
  Since 2016 (last 10 years): 170
  Since 2006 (last 20 years): 193

Descriptor
  Artificial Intelligence: 210
  Computer Assisted Testing: 210
  Foreign Countries: 62
  Computer Software: 51
  Scoring: 45
  Educational Technology: 44
  Automation: 41
  Technology Uses in Education: 40
  Evaluation Methods: 35
  Student Evaluation: 34
  Feedback (Response): 33
Location
  Turkey: 8
  China: 7
  Australia: 6
  Spain: 6
  Indonesia: 5
  Japan: 5
  South Africa: 5
  Taiwan: 5
  United Kingdom: 5
  Canada: 4
  Europe: 4
Laws, Policies, & Programs
  No Child Left Behind Act 2001: 1
Jinshui Wang; Shuguang Chen; Zhengyi Tang; Pengchen Lin; Yupeng Wang – Education and Information Technologies, 2025
Mastering SQL programming skills is fundamental in computer science education, and Online Judging Systems (OJS) play a critical role in automatically assessing SQL code, improving the accuracy and efficiency of evaluation. However, these systems are vulnerable to manipulation by students who can submit "cheating codes" that pass the…
Descriptors: Programming, Computer Science Education, Cheating, Computer Assisted Testing
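
The abstract is cut off before the defense is described, but a common way an online judge resists hard-coded "cheating" queries is to re-run each submission against freshly randomized database instances. Below is a minimal sketch using Python's built-in sqlite3 module; the schema, table name, and randomization scheme are invented for illustration and are not the authors' system.

```python
import random
import sqlite3

def grade_sql(student_sql: str, reference_sql: str, n_trials: int = 5) -> bool:
    """Run a submission against several randomized database instances.

    A "cheating" query that hard-codes the expected rows of one fixed,
    public test instance fails as soon as the underlying data changes.
    """
    for _ in range(n_trials):
        conn = sqlite3.connect(":memory:")
        cur = conn.cursor()
        # Fresh instance with randomized contents (the schema stays fixed).
        cur.execute("CREATE TABLE employees (id INTEGER, salary INTEGER)")
        rows = [(i, random.randint(30_000, 120_000)) for i in range(20)]
        cur.executemany("INSERT INTO employees VALUES (?, ?)", rows)
        try:
            got = cur.execute(student_sql).fetchall()
        except sqlite3.Error:
            return False  # invalid submission
        want = cur.execute(reference_sql).fetchall()
        conn.close()
        if sorted(got) != sorted(want):
            return False
    return True

# An honest query passes every trial; a hard-coded constant result would not.
print(grade_sql("SELECT MAX(salary) FROM employees",
                "SELECT MAX(salary) FROM employees"))
```
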
Chen, Jennifer J.; Perez, ChareMone' – Childhood Education, 2023
Assessment holds the key to unlocking, for the teacher, a child's past (what the child already knows), present (what the child is learning), and future (what the child still needs to learn) to inform teaching. Despite the benefits of assessment for informing teaching practice and enhancing student learning, it remains one of the most challenging and time-consuming tasks…
Descriptors: Evaluation Methods, Individualized Instruction, Artificial Intelligence, Computer Assisted Testing

Priti Oli; Rabin Banjade; Jeevan Chapagain; Vasile Rus – Grantee Submission, 2024
Assessing students' answers, and in particular natural language answers, is a crucial challenge in the field of education. Advances in transformer-based models, such as Large Language Models (LLMs), have led to significant progress in various natural language tasks. Nevertheless, amidst the growing trend of evaluating LLMs across diverse tasks,…
Descriptors: Student Evaluation, Computer Assisted Testing, Artificial Intelligence, Comprehension
Ishaya Gambo; Faith-Jane Abegunde; Omobola Gambo; Roseline Oluwaseun Ogundokun; Akinbowale Natheniel Babatunde; Cheng-Chi Lee – Education and Information Technologies, 2025
The current educational system relies heavily on manual grading, posing challenges such as delayed feedback and grading inaccuracies. Automated grading tools (AGTs) offer solutions but come with limitations. To address this, the authors introduce "GRAD-AI", an advanced AGT that combines automation with teacher involvement for precise grading,…
Descriptors: Automation, Grading, Artificial Intelligence, Computer Assisted Testing
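
The abstract names the combination of automation and teacher involvement but not its mechanism. A common human-in-the-loop design routes low-confidence machine grades to the teacher; the sketch below assumes that pattern, and the scoring heuristic, threshold, and names are hypothetical stand-ins rather than GRAD-AI's actual pipeline.

```python
from dataclasses import dataclass

@dataclass
class Submission:
    student_id: str
    answer: str

def model_score(answer: str) -> tuple[float, float]:
    """Stand-in for a trained grader returning (score, confidence).

    A toy heuristic here; a real AGT would return calibrated model output.
    """
    score = min(len(answer.split()) / 50, 1.0)   # placeholder scoring rule
    confidence = 0.9 if len(answer) > 80 else 0.4
    return score, confidence

def route(submissions, threshold=0.8):
    """Auto-release confident grades; queue the rest for teacher review."""
    auto, review = [], []
    for s in submissions:
        score, conf = model_score(s.answer)
        (auto if conf >= threshold else review).append((s.student_id, score))
    return auto, review

auto, review = route([Submission("s1", "A long, detailed answer. " * 10),
                      Submission("s2", "Short answer.")])
print(len(auto), "auto-graded;", len(review), "sent to the teacher")
```
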
Ingrisone, Soo Jeong; Ingrisone, James N. – Educational Measurement: Issues and Practice, 2023
There has been growing interest in machine learning (ML) approaches for detecting test collusion as an alternative to traditional methods. Clustering analysis, an unsupervised learning technique, appears especially promising for detecting group collusion. In this study, the effectiveness of hierarchical agglomerative clustering…
Descriptors: Identification, Cooperation, Computer Assisted Testing, Artificial Intelligence
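
To make the clustering approach concrete (with toy data, not the study's settings): hierarchical agglomerative clustering can surface a group of examinees whose item-response vectors are suspiciously close. The data, distance threshold, and cluster-size cutoff below are all invented.

```python
import numpy as np
from collections import Counter
from sklearn.cluster import AgglomerativeClustering

rng = np.random.default_rng(0)

# Rows are examinees, columns are scored item responses (1 = correct).
honest = rng.integers(0, 2, size=(40, 30))            # independent examinees
shared = rng.integers(0, 2, size=30)                  # one shared answer key
colluders = np.array([np.where(rng.random(30) < 0.05, 1 - shared, shared)
                      for _ in range(5)])             # near-copies of the key
X = np.vstack([honest, colluders]).astype(float)

# Average-linkage clustering with a tight distance cutoff: only examinees
# with unusually similar response vectors end up grouped together.
labels = AgglomerativeClustering(n_clusters=None, distance_threshold=3.0,
                                 linkage="average").fit_predict(X)

sizes = Counter(labels)
suspects = [c for c, n in sizes.items() if n >= 3]    # compact groups only
print("suspect cluster labels:", suspects)
print("colluders' labels:", labels[-5:])
```
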
Peter Baldwin; Victoria Yaneva; Kai North; Le An Ha; Yiyun Zhou; Alex J. Mechaber; Brian E. Clauser – Journal of Educational Measurement, 2025
Recent developments in the use of large-language models have led to substantial improvements in the accuracy of content-based automated scoring of free-text responses. The reported accuracy levels suggest that automated systems could have widespread applicability in assessment. However, before they are used in operational testing, other aspects of…
Descriptors: Artificial Intelligence, Scoring, Computational Linguistics, Accuracy
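
Accuracy in this literature is usually reported as agreement between machine and human scores, most often via quadratic weighted kappa. A minimal sketch with scikit-learn, using made-up ratings on a 0-4 scale:

```python
from sklearn.metrics import cohen_kappa_score

# Hypothetical ratings: human scores vs. an automated scorer, 0-4 scale.
human = [4, 3, 3, 2, 4, 1, 0, 2, 3, 4]
machine = [4, 3, 2, 2, 4, 1, 1, 2, 3, 3]

# Quadratic weighted kappa is the customary agreement statistic in
# automated-scoring research: 1.0 is perfect agreement, 0 is chance level.
qwk = cohen_kappa_score(human, machine, weights="quadratic")
print(f"QWK = {qwk:.3f}")
```
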

Andreea Dutulescu; Stefan Ruseti; Mihai Dascalu; Danielle S. McNamara – Grantee Submission, 2024
Assessing the difficulty of reading comprehension questions is crucial to educational methodologies and language understanding technologies. Traditional methods of assessing question difficulty frequently rely on human judgments or shallow metrics, often failing to capture the intricate cognitive demands of answering a question. This…
Descriptors: Difficulty Level, Reading Tests, Test Items, Reading Comprehension
Yang Zhen; Xiaoyan Zhu – Educational and Psychological Measurement, 2024
The pervasive issue of cheating on educational tests has emerged as a paramount concern in education, prompting scholars to explore diverse methodologies for identifying potential transgressors. While machine learning models have been extensively investigated for this purpose, the untapped potential of TabNet, an intricate deep…
Descriptors: Artificial Intelligence, Models, Cheating, Identification
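
For readers unfamiliar with TabNet, the general recipe (not the authors' pipeline) is to feed tabular examinee features to the attention-based classifier from the third-party pytorch-tabnet package. The features and labels below are synthetic placeholders.

```python
import numpy as np
from pytorch_tabnet.tab_model import TabNetClassifier

rng = np.random.default_rng(0)
# Synthetic tabular features per examinee (imagine response-time statistics,
# answer-similarity scores, unusual score gains) plus a cheating label.
X = rng.normal(size=(1000, 8)).astype(np.float32)
y = (X[:, 0] + 0.5 * X[:, 3] > 1.2).astype(np.int64)

clf = TabNetClassifier()                   # attention-based tabular model
clf.fit(X[:800], y[:800],
        eval_set=[(X[800:], y[800:])],
        max_epochs=50, patience=10)
probs = clf.predict_proba(X[800:])[:, 1]   # per-examinee cheating risk
print(probs[:5])
```
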
Jian Zhao; Elaine Chapman; Peyman G. P. Sabet – Education Research and Perspectives, 2024
The launch of ChatGPT and the rapid proliferation of generative AI (GenAI) have brought transformative changes to education, particularly in the field of assessment. This has prompted a fundamental rethinking of traditional assessment practices, presenting both opportunities and challenges in evaluating student learning. While numerous studies…
Descriptors: Literature Reviews, Artificial Intelligence, Evaluation Methods, Student Evaluation
Jyoti Prakash Meher; Rajib Mall – IEEE Transactions on Education, 2025
Contribution: This article proposes a novel method for diagnosing a learner's cognitive proficiency with deep neural networks (DNNs), based on her answers to a series of questions. The resulting predictions can be used for adaptive assistance. Background: A learner often spends considerable time attempting questions on the concepts…
Descriptors: Cognitive Ability, Assistive Technology, Adaptive Testing, Computer Assisted Testing
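
A stripped-down version of the idea, assuming nothing about the article's actual architecture: a small feed-forward network maps a learner's vector of question responses to a proficiency level. The layer sizes, level definition, and training data below are invented.

```python
import torch
import torch.nn as nn

N_QUESTIONS, N_LEVELS = 20, 3    # invented sizes

model = nn.Sequential(           # a small feed-forward DNN
    nn.Linear(N_QUESTIONS, 64), nn.ReLU(),
    nn.Linear(64, 32), nn.ReLU(),
    nn.Linear(32, N_LEVELS),     # logits over proficiency levels
)

# Synthetic training data: binary right/wrong answer patterns, with the
# proficiency level derived (crudely) from the number of correct answers.
responses = torch.randint(0, 2, (256, N_QUESTIONS)).float()
levels = (responses.sum(dim=1) // 7).clamp(max=N_LEVELS - 1).long()

opt = torch.optim.Adam(model.parameters(), lr=1e-3)
loss_fn = nn.CrossEntropyLoss()
for _ in range(200):             # brief illustrative training loop
    opt.zero_grad()
    loss = loss_fn(model(responses), levels)
    loss.backward()
    opt.step()

# Diagnose a new learner from their answer pattern alone.
new_learner = torch.randint(0, 2, (1, N_QUESTIONS)).float()
print("predicted level:", model(new_learner).argmax(dim=1).item())
```
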
Salvatore G. Garofalo; Stephen J. Farenga – Science & Education, 2025
The purpose of this study was to gauge science teachers' attitudes toward artificial intelligence (AI) use in the science classroom at the start of generative AI chatbot popularity (March 2023). The lens of distributed cognition afforded an opportunity to gather thoughts, opinions, and perceptions from 24 secondary science educators as well…
Descriptors: Secondary School Teachers, Science Teachers, Teacher Attitudes, Artificial Intelligence
Tay McEdwards; Greta R. Underhill – Online Journal of Distance Learning Administration, 2025
Online learning has steadily increased since well before the COVID-19 pandemic (Seaman et al., 2018), but research has yet to explore online students' perceptions of online exam proctoring methods. The purpose of this exploratory study was to understand the perceptions of fully online students regarding types of proctoring at a large state…
Descriptors: Supervision, Computer Assisted Testing, Electronic Learning, Student Attitudes
Archana Praveen Kumar; Ashalatha Nayak; Manjula Shenoy K.; Chaitanya; Kaustav Ghosh – International Journal of Artificial Intelligence in Education, 2024
Multiple Choice Questions (MCQs) are a popular assessment method because they enable automated evaluation, flexible administration, and use with large groups. Despite these benefits, the manual construction of MCQs is challenging, time-consuming, and error-prone, because each MCQ comprises a question called the "stem", a…
Descriptors: Multiple Choice Tests, Test Construction, Test Items, Semantics
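
The truncated sentence is describing standard MCQ anatomy: a stem plus one correct option (the key) and several distractors. A minimal data-structure sketch, with a hypothetical example question:

```python
import random
from dataclasses import dataclass

@dataclass
class MCQ:
    stem: str               # the question text
    key: str                # the single correct option
    distractors: list[str]  # plausible but incorrect options

    def options(self, rng: random.Random) -> list[str]:
        """Shuffled answer options for presentation."""
        opts = [self.key, *self.distractors]
        rng.shuffle(opts)
        return opts

q = MCQ(stem="Which SQL clause filters grouped rows?",
        key="HAVING",
        distractors=["WHERE", "ORDER BY", "LIMIT"])
print(q.options(random.Random(0)))
```

Automatic MCQ generation then amounts to producing each of these parts, with distractor selection usually the hardest step.
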
Putnikovic, Marko; Jovanovic, Jelena – IEEE Transactions on Learning Technologies, 2023
Automatic grading of short answers is an important task in computer-assisted assessment (CAA). Recently, embeddings, as semantic-rich textual representations, have been increasingly used to represent short answers and predict the grade. Despite the recent trend of applying embeddings in automatic short answer grading (ASAG), there are no…
Descriptors: Automation, Computer Assisted Testing, Grading, Natural Language Processing
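
The basic embedding-based ASAG recipe the article surveys can be sketched in a few lines: embed the reference and student answers, then use their similarity as the grading signal. The checkpoint name and answers below are illustrative choices, not drawn from the article.

```python
from sentence_transformers import SentenceTransformer, util

# A common public checkpoint, chosen only for illustration.
model = SentenceTransformer("all-MiniLM-L6-v2")

reference = "Photosynthesis converts light energy into chemical energy."
students = [
    "Plants turn sunlight into chemical energy they can store.",
    "It is the process where roots absorb water from the soil.",
]

ref_emb = model.encode(reference)
stu_embs = model.encode(students)

# Cosine similarity between embeddings acts as the grade predictor; in
# practice a regressor is usually trained on top rather than used raw.
for text, sim in zip(students, util.cos_sim(ref_emb, stu_embs)[0]):
    print(f"{sim.item():.2f}  {text}")
```
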
Tan, Hongye; Wang, Chong; Duan, Qinglong; Lu, Yu; Zhang, Hu; Li, Ru – Interactive Learning Environments, 2023
Automatic short answer grading (ASAG) is a challenging task that aims to predict a score for a given student response. Previous work on ASAG mainly uses non-neural or neural methods. However, the former depends on handcrafted features and is limited by inflexibility and high cost, while the latter ignores global word co-occurrence in a corpus and…
Descriptors: Automation, Grading, Computer Assisted Testing, Graphs
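
To make "global word co-occurrence" concrete: graph-based ASAG models typically build a corpus-level graph whose edge weights count how often two words share a sliding window anywhere in the corpus. A minimal sketch, with an invented toy corpus and window size:

```python
from collections import defaultdict
from itertools import combinations

corpus = [
    "gravity pulls objects toward earth",
    "objects fall because gravity pulls them",
    "plants need light to grow",
]

# Global co-occurrence: count word pairs that share a sliding window
# anywhere in the corpus (the graph signal a GNN-based grader consumes).
WINDOW = 3
cooc = defaultdict(int)
for doc in corpus:
    words = doc.split()
    for i in range(len(words)):
        for a, b in combinations(sorted(set(words[i:i + WINDOW])), 2):
            cooc[(a, b)] += 1

print(cooc[("gravity", "pulls")])  # edge weight between two words
```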