Lei Guo; Wenjie Zhou; Xiao Li – Journal of Educational and Behavioral Statistics, 2024
The testlet design is very popular in educational and psychological assessments. This article proposes a new cognitive diagnosis model, the multiple-choice cognitive diagnostic testlet (MC-CDT) model, for tests using testlets consisting of MC items. The MC-CDT model uses examinees' original responses to MC items instead of dichotomously scored…
Descriptors: Multiple Choice Tests, Diagnostic Tests, Accuracy, Computer Software
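The excerpt does not give the MC-CDT item response function, so as background only, here is the classical DINA model that cognitive diagnosis models of this kind build on; this is standard textbook notation, not the authors' MC-CDT formulation, which, per the abstract, additionally works with examinees' original option choices and testlet structure rather than dichotomous scores:

    \eta_{ij} = \prod_{k=1}^{K} \alpha_{ik}^{\,q_{jk}}, \qquad
    P(X_{ij} = 1 \mid \boldsymbol{\alpha}_i) = (1 - s_j)^{\eta_{ij}} \, g_j^{\,1 - \eta_{ij}}

where \alpha_i is examinee i's binary attribute-mastery vector, q_{jk} indicates whether item j requires attribute k, and s_j and g_j are item j's slip and guessing parameters.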
Valentina Albano; Donatella Firmani; Luigi Laura; Jerin George Mathew; Anna Lucia Paoletti; Irene Torrente – Journal of Learning Analytics, 2023
Multiple-choice questions (MCQs) are widely used in educational assessments and professional certification exams. Managing large repositories of MCQs, however, poses several challenges due to the high volume of questions and the need to maintain their quality and relevance over time. One of these challenges is the presence of questions that…
Descriptors: Natural Language Processing, Multiple Choice Tests, Test Items, Item Analysis
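The excerpt cuts off before naming the specific quality problem or method, so the following is only a generic, hedged sketch of one common way to screen a large MCQ repository: flagging near-duplicate stems with TF-IDF cosine similarity. The toy questions and the threshold are assumptions, not the paper's pipeline.

    # Generic sketch: flag near-duplicate MCQ stems with TF-IDF cosine similarity.
    # Not the method of the cited paper; toy data and threshold are assumptions.
    from sklearn.feature_extraction.text import TfidfVectorizer
    from sklearn.metrics.pairwise import cosine_similarity

    questions = [
        "Which organ produces insulin?",
        "Insulin is produced by which organ?",
        "Which bone is the longest in the human body?",
    ]

    vectors = TfidfVectorizer().fit_transform(questions)
    sims = cosine_similarity(vectors)

    THRESHOLD = 0.4  # assumed, deliberately low for this toy example
    for i in range(len(questions)):
        for j in range(i + 1, len(questions)):
            if sims[i, j] >= THRESHOLD:
                print(f"Possible duplicates: {i} and {j} (similarity {sims[i, j]:.2f})")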
Kunal Sareen – Innovations in Education and Teaching International, 2024
This study examines the proficiency of ChatGPT, an AI language model, in answering questions on the Situational Judgement Test (SJT), a widely used assessment tool for evaluating the fundamental competencies of medical graduates in the UK. A total of 252 SJT questions from the "Oxford Assess and Progress: Situational Judgement" Test…
Descriptors: Ethics, Decision Making, Artificial Intelligence, Computer Software
Roger Young; Emily Courtney; Alexander Kah; Mariah Wilkerson; Yi-Hsin Chen – Teaching of Psychology, 2025
Background: Multiple-choice item (MCI) assessments are burdensome for instructors to develop. Artificial intelligence (AI, e.g., ChatGPT) can streamline the process without sacrificing quality. The quality of AI-generated MCIs is comparable to that of MCIs written by human experts. However, whether the quality of AI-generated MCIs is equally good across various domain-…
Descriptors: Item Response Theory, Multiple Choice Tests, Psychology, Textbooks
Marli Crabtree; Kenneth L. Thompson; Ellen M. Robertson – HAPS Educator, 2024
Research has suggested that changing one's answer on multiple-choice examinations is more likely to lead to positive academic outcomes. This study aimed to further understand the relationship between changing answer selections and item attributes, student performance, and time within a population of 158 first-year medical students enrolled in a…
Descriptors: Anatomy, Science Tests, Medical Students, Medical Education
Kyeng Gea Lee; Mark J. Lee; Soo Jung Lee – International Journal of Technology in Education and Science, 2024
Online assessment is an essential part of online education and, if conducted properly, has been found to effectively gauge student learning. Generally, text-based questions have been the cornerstone of online assessment. Recently, however, the emergence of generative artificial intelligence has added a significant challenge to the integrity of…
Descriptors: Artificial Intelligence, Computer Software, Biology, Science Instruction
Emery-Wetherell, Meaghan; Wang, Ruoyao – Assessment & Evaluation in Higher Education, 2023
Over four semesters of a large introductory statistics course the authors found students were engaging in contract cheating on Chegg.com during multiple choice examinations. In this paper we describe our methodology for identifying, addressing and eventually eliminating cheating. We successfully identified 23 out of 25 students using a combination…
Descriptors: Computer Assisted Testing, Multiple Choice Tests, Cheating, Identification
Harun Bayer; Fazilet Gül Ince Araci; Gülsah Gürkan – International Journal of Technology in Education and Science, 2024
The rapid advancement of artificial intelligence technologies, their pervasive use in every field, and the growing understanding of the benefits they bring have led actors in the education sector to pursue research in this area. In particular, the use of artificial intelligence tools has become more prevalent in education due to the…
Descriptors: Artificial Intelligence, Computer Software, Computational Linguistics, Technology Uses in Education
Rao, Dhawaleswar; Saha, Sujan Kumar – IEEE Transactions on Learning Technologies, 2020
Automatic multiple choice question (MCQ) generation from a text is a popular research area. MCQs are widely accepted for large-scale assessment in various domains and applications. However, manual generation of MCQs is expensive and time-consuming. Therefore, researchers have been attracted to automatic MCQ generation since the late 90s.…
Descriptors: Multiple Choice Tests, Test Construction, Automation, Computer Software
Snow, Stephen; Wilde, Adriana; Denny, Paul; schraefel, m. c. – British Journal of Educational Technology, 2019
Peer learning that engages students in multiple choice question (MCQ) formulation promotes higher task engagement and deeper learning than simply answering MCQs in summative assessment. Yet presently, the literature detailing deployments of student-authored MCQ software is biased towards accounts from Science, Technology, Engineering, Maths and…
Descriptors: Student Developed Materials, Multiple Choice Tests, Computer Software, Cooperative Learning
Paaßen, Benjamin; Dywel, Malwina; Fleckenstein, Melanie; Pinkwart, Niels – International Educational Data Mining Society, 2022
Item response theory (IRT) is a popular method to infer student abilities and item difficulties from observed test responses. However, IRT struggles with two challenges: How to map items to skills if multiple skills are present? And how to infer the ability of new students that have not been part of the training data? Inspired by recent advances…
Descriptors: Item Response Theory, Test Items, Item Analysis, Inferences
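For context, the IRT model the abstract refers to is, in its common two-parameter logistic form (textbook notation, not the paper's extension):

    P(X_{ij} = 1 \mid \theta_i) = \frac{1}{1 + \exp\{-a_j(\theta_i - b_j)\}}

where \theta_i is student i's latent ability, b_j is item j's difficulty, and a_j its discrimination; the cited work asks how to extend such a model to items requiring multiple skills and to students not present in the training data.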
Zhang, Lishan; VanLehn, Kurt – Interactive Learning Environments, 2021
Despite their drawbacks, multiple-choice questions are an enduring feature of instruction because they can be answered more rapidly than open-response questions and are easily scored. However, it can be difficult to generate good incorrect choices (called "distractors"). We designed an algorithm to generate distractors from a…
Descriptors: Semantics, Networks, Multiple Choice Tests, Teaching Methods
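The excerpt stops before describing the algorithm, so the snippet below is only a hedged illustration of the general idea of mining distractors from a semantic network: rank concepts related to the correct answer and keep the best-scoring ones. The toy network, scores, and function name are invented for illustration and are not Zhang and VanLehn's method.

    # Hedged sketch: pick distractors as the concepts most related to the
    # correct answer in a small semantic network (toy data, assumed scores).

    # Toy semantic network: concept -> {related concept: relatedness score}
    SEMANTIC_NET = {
        "mitochondrion": {"chloroplast": 0.8, "ribosome": 0.6, "nucleus": 0.5, "cell wall": 0.3},
    }

    def generate_distractors(answer: str, top_k: int = 3) -> list[str]:
        """Return the top_k concepts most related to the correct answer."""
        neighbors = SEMANTIC_NET.get(answer, {})
        ranked = sorted(neighbors, key=neighbors.get, reverse=True)
        return ranked[:top_k]

    print(generate_distractors("mitochondrion"))
    # -> ['chloroplast', 'ribosome', 'nucleus']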
Qiao Wang; Ralph L. Rose; Ayaka Sugawara; Naho Orita – Vocabulary Learning and Instruction, 2025
VocQGen is an automated tool designed to generate multiple-choice cloze (MCC) questions for vocabulary assessment in second language learning contexts. It leverages several natural language processing (NLP) tools and OpenAI's GPT-4 model to produce MCC items quickly from user-specified word lists. To evaluate its effectiveness, we used the first…
Descriptors: Vocabulary Skills, Artificial Intelligence, Computer Software, Multiple Choice Tests
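As a hedged illustration of what a multiple-choice cloze (MCC) item looks like once generated, the sketch below assembles one by blanking the target word in a carrier sentence and shuffling it with distractors. VocQGen's actual pipeline uses NLP tools and GPT-4 to choose sentences and distractors; the sentence, words, and helper name here are assumptions for illustration only.

    # Hedged sketch of the final assembly step of an MCC item; not VocQGen itself.
    import random

    def build_mcc_item(sentence: str, target: str, distractors: list[str]) -> dict:
        """Blank the target word and mix it with distractors as answer options."""
        stem = sentence.replace(target, "____", 1)
        options = [target] + distractors
        random.shuffle(options)
        return {"stem": stem, "options": options, "answer": target}

    item = build_mcc_item(
        "The committee will convene next Monday to review the budget.",
        "convene",
        ["abolish", "fluctuate", "disperse"],
    )
    print(item["stem"])
    print(item["options"], "answer:", item["answer"])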
Moro, Sérgio; Martins, António; Ramos, Pedro; Esmerado, Joaquim; Costa, Joana Martinho; Almeida, Daniela – Computers in the Schools, 2020
Many university programs include Microsoft Excel courses given their value as a scientific and technical tool. However, evaluating what is effectively learned by students is a challenging task. Considering multiple-choice written exams are a standard evaluation format, this study aimed to uncover the features influencing students' success in…
Descriptors: Multiple Choice Tests, Test Items, Spreadsheets, Computer Software
Teneqexhi, Romeo; Kuneshka, Loreta; Naço, Adrian – International Association for Development of the Information Society, 2018
Organizing exams or competitions with multiple choice questions and assessing them with technology is today common practice in many educational institutions around the world. Such exams or tests are, as a rule, taken by answering questions on a so-called answer sheet form. On this form, each student or participant in the exam is obliged to…
Descriptors: Foreign Countries, Competition, Multiple Choice Tests, Computer Assisted Testing