Showing 1 to 15 of 82 results
Peer reviewed
Direct link
Derwin Suhartono; Muhammad Rizki Nur Majiid; Renaldy Fredyan – Education and Information Technologies, 2024
Exam evaluations are essential to assessing students' knowledge and progress in a subject or course. To meet learning objectives and assess student performance, questions must be themed. Automatic Question Generation (AQG) is our novel approach to this problem. A comprehensive process for autonomously generating Bahasa Indonesia text questions is…
Descriptors: Foreign Countries, Computational Linguistics, Computer Software, Questioning Techniques
Peer reviewed
PDF on ERIC Download full text
Kassie A. Cigliana; Tom Gray; George Gower – Research in Learning Technology, 2024
An objective structured clinical examination (OSCE) has been recognised as a reliable but workload-intensive assessment method across health sciences studies. Though a variety of digital marking tools have been employed to improve marking and feedback provision for OSCEs, many of these require specialist software or maintenance. This pilot study…
Descriptors: Grading, Feedback (Response), Evaluation, Computer Software
Peer reviewed
Direct link
Christophe O. Soulage; Fabien Van Coppenolle; Fitsum Guebre-Egziabher – Advances in Physiology Education, 2024
Artificial intelligence (AI) has gained massive interest with the public release of the conversational AI "ChatGPT," but it also has become a matter of concern for academia as it can easily be misused. We performed a quantitative evaluation of the performance of ChatGPT on a medical physiology university examination. Forty-one answers…
Descriptors: Medical Students, Medical Education, Artificial Intelligence, Computer Software
Peer reviewed
PDF on ERIC Download full text
Abdul Haris Rosyidi; Yurizka Melia Sari; Dini Kinati Fardah; Masriyah Masriyah – Journal of Education and Learning (EduLearn), 2024
Mathematics education is looking for innovative methods to foster problem-solving skills in students. This research develops a problem-solving assessment using GeoGebra Classroom, a versatile interactive mathematics software, to revolutionize mathematics formative assessment and improve students' problem-solving skills. This study adopted the…
Descriptors: Mathematics Education, Mathematics Instruction, Teaching Methods, Computer Uses in Education
Peer reviewed
Direct link
Kyung-Mi O. – Language Testing in Asia, 2024
This study examines the efficacy of artificial intelligence (AI) in creating parallel test items compared to human-made ones. Two test forms were developed: one consisting of 20 existing human-made items and another with 20 new items generated with ChatGPT assistance. Expert reviews confirmed the content parallelism of the two test forms.…
Descriptors: Comparative Analysis, Artificial Intelligence, Computer Software, Test Items
Peer reviewed
Direct link
Roger Young; Emily Courtney; Alexander Kah; Mariah Wilkerson; Yi-Hsin Chen – Teaching of Psychology, 2025
Background: Multiple-choice item (MCI) assessments are burdensome for instructors to develop. Artificial intelligence (AI, e.g., ChatGPT) can streamline the process without sacrificing quality. The quality of AI-generated MCIs and human experts is comparable. However, whether the quality of AI-generated MCIs is equally good across various domain-…
Descriptors: Item Response Theory, Multiple Choice Tests, Psychology, Textbooks
Peer reviewed
Direct link
Ha Nguyen; Jake Hayward – Journal of Science Education and Technology, 2025
High-quality science assessments are multi-dimensional. They promote disciplinary practices, core ideas, cross-cutting concepts, and science sense-making. In this paper, we investigate the feasibility of using generative artificial intelligence (GenAI), specifically multimodal large language models (MLLMs), to annotate and provide improvement…
Descriptors: Science Tests, Criticism, Artificial Intelligence, Technology Uses in Education
Peer reviewed
PDF on ERIC Download full text
Yurtcu, Meltem; Güzeller, Cem Oktay – Participatory Educational Research, 2021
The items that are suitable for everyone's own ability level with the support of computer programs instead of paper and pencil tests may help students to reach more accurate results. Computer adaptive tests (CAT), which are developed based on certain assumptions in this direction, are to create an optimum test for every person taking the exam. It…
Descriptors: Bibliometrics, Computer Assisted Testing, Computer Software, Test Construction
Peer reviewed
PDF on ERIC Download full text
Harun Bayer; Fazilet Gül Ince Araci; Gülsah Gürkan – International Journal of Technology in Education and Science, 2024
The rapid advancement of artificial intelligence technologies, their pervasive use in every field, and the growing understanding of the benefits they bring have led actors in the education sector to pursue research in this field. In particular, the use of artificial intelligence tools has become more prevalent in the education sector due to the…
Descriptors: Artificial Intelligence, Computer Software, Computational Linguistics, Technology Uses in Education
Peer reviewed
Direct link
Rao, Dhawaleswar; Saha, Sujan Kumar – IEEE Transactions on Learning Technologies, 2020
Automatic multiple choice question (MCQ) generation from a text is a popular research area. MCQs are widely accepted for large-scale assessment in various domains and applications. However, manual generation of MCQs is expensive and time-consuming. Therefore, researchers have been attracted toward automatic MCQ generation since the late 1990s.…
Descriptors: Multiple Choice Tests, Test Construction, Automation, Computer Software
Peer reviewed
Direct link
Selcuk Acar; Denis Dumas; Peter Organisciak; Kelly Berthiaume – Grantee Submission, 2024
Creativity is highly valued in both education and the workforce, but assessing and developing creativity can be difficult without psychometrically robust and affordable tools. The open-ended nature of creativity assessments has made them difficult to score, expensive, often imprecise, and therefore impractical for school- or district-wide use. To…
Descriptors: Thinking Skills, Elementary School Students, Artificial Intelligence, Measurement Techniques
Peer reviewed
Direct link
Abdullah Al Fraidan – International Journal of Distance Education Technologies, 2025
This study explores vocabulary assessment practices in Saudi Arabia's hybrid EFL ecosystem, leveraging platforms like Blackboard and Google Forms. The focus is on identifying prevalent test formats and evaluating their alignment with modern pedagogical goals. To classify vocabulary assessment formats in hybridized EFL contexts and recommend the…
Descriptors: Vocabulary Development, English (Second Language), Second Language Learning, Second Language Instruction
Peer reviewed
Direct link
Zhang, Lishan; VanLehn, Kurt – Interactive Learning Environments, 2021
Despite their drawback, multiple-choice questions are an enduring feature in instruction because they can be answered more rapidly than open response questions and they are easily scored. However, it can be difficult to generate good incorrect choices (called "distractors"). We designed an algorithm to generate distractors from a…
Descriptors: Semantics, Networks, Multiple Choice Tests, Teaching Methods
Peer reviewed
PDF on ERIC Download full text
Tack, Anaïs; Piech, Chris – International Educational Data Mining Society, 2022
How can we test whether state-of-the-art generative models, such as Blender and GPT-3, are good AI teachers, capable of replying to a student in an educational dialogue? Designing an AI teacher test is challenging: although evaluation methods are much-needed, there is no off-the-shelf solution to measuring pedagogical ability. This paper reports…
Descriptors: Artificial Intelligence, Dialogs (Language), Bayesian Statistics, Decision Making
Peer reviewed
PDF on ERIC Download full text
Mohammed, Aisha; Dawood, Abdul Kareem Shareef; Alghazali, Tawfeeq; Kadhim, Qasim Khlaif; Sabti, Ahmed Abdulateef; Sabit, Shaker Holh – International Journal of Language Testing, 2023
Cognitive diagnostic models (CDMs) have received much interest within the field of language testing over the last decade due to their great potential to provide diagnostic feedback to all stakeholders and ultimately improve language teaching and learning. A large number of studies have demonstrated the application of CDMs on advanced large-scale…
Descriptors: Reading Comprehension, Reading Tests, Language Tests, English (Second Language)