Archana Praveen Kumar; Ashalatha Nayak; Manjula Shenoy K.; Chaitanya; Kaustav Ghosh – International Journal of Artificial Intelligence in Education, 2024
Multiple Choice Questions (MCQs) are a popular assessment method because they enable automated evaluation, flexible administration, and use with large groups. Despite these benefits, the manual construction of MCQs is challenging, time-consuming, and error-prone. This is because each MCQ comprises a question called the "stem", a…
Descriptors: Multiple Choice Tests, Test Construction, Test Items, Semantics
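The entry above names the "stem" but is truncated before listing the remaining MCQ components. A minimal sketch of the standard structure, assuming the conventional terms "key" (correct answer) and "distractors" (incorrect options), which the truncated abstract does not itself confirm:

```python
from dataclasses import dataclass, field

@dataclass
class MCQ:
    """One multiple-choice question: a stem, the correct answer (key),
    and incorrect alternatives (distractors). Names beyond "stem" are
    illustrative; the abstract above is cut off before naming them."""
    stem: str
    key: str
    distractors: list = field(default_factory=list)

    def options(self):
        # All answer choices, alphabetized so the key's position is not fixed.
        return sorted([self.key] + self.distractors)

q = MCQ(stem="Which organelle produces most of a cell's ATP?",
        key="mitochondrion",
        distractors=["chloroplast", "ribosome", "nucleus"])
print(q.options())
```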
Zhang, Lishan; VanLehn, Kurt – Interactive Learning Environments, 2021
Despite their drawbacks, multiple-choice questions remain an enduring feature of instruction because they can be answered more rapidly than open-response questions and are easily scored. However, it can be difficult to generate good incorrect choices (called "distractors"). We designed an algorithm to generate distractors from a…
Descriptors: Semantics, Networks, Multiple Choice Tests, Teaching Methods
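The abstract above is truncated before describing the algorithm, but the idea of drawing distractors from a semantic network can be sketched as follows. This is a hypothetical illustration, not the authors' actual method: the network, the two-hop candidate pool, and the ranking rule are all assumptions.

```python
# Toy semantic network: concept -> set of directly related concepts.
SEMANTIC_NETWORK = {
    "mitochondrion": {"chloroplast", "ribosome", "nucleus"},
    "chloroplast": {"mitochondrion", "vacuole"},
    "ribosome": {"mitochondrion", "nucleus"},
    "nucleus": {"mitochondrion", "ribosome"},
    "vacuole": {"chloroplast"},
}

def generate_distractors(answer, network, n=3):
    """Pick concepts near the correct answer as plausible distractors."""
    neighbors = network.get(answer, set())
    # Widen the pool with neighbors-of-neighbors.
    second_hop = set()
    for concept in neighbors:
        second_hop |= network.get(concept, set())
    candidates = (neighbors | second_hop) - {answer}
    # Rank direct neighbors first (closer = more plausible), then alphabetize
    # for a deterministic result.
    ranked = sorted(candidates, key=lambda c: (c not in neighbors, c))
    return ranked[:n]

print(generate_distractors("mitochondrion", SEMANTIC_NETWORK))
```

The design choice here is that semantically *close* concepts make *plausible* distractors, which is the stated motivation in the entry above; a real system would rank by a learned similarity measure rather than hop count.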
Aldabe, Itziar; Maritxalar, Montse – IEEE Transactions on Learning Technologies, 2014
The work we present in this paper aims to help teachers create multiple-choice science tests. We focus on a scientific vocabulary-learning scenario taking place in a Basque-language educational environment. In this particular scenario, we explore the option of automatically generating Multiple-Choice Questions (MCQ) by means of Natural Language…
Descriptors: Science Tests, Test Construction, Computer Assisted Testing, Multiple Choice Tests
Papasalouros, Andreas; Kotis, Konstantinos; Kanaris, Konstantinos – Interactive Learning Environments, 2011
The aim of this article is to present an approach for generating tests automatically. Although other methods have already been reported in the literature, the proposed approach is based on ontologies representing both domain and multimedia knowledge. The article also reports on a prototype implementation of this approach, which…
Descriptors: Semantics, Natural Language Processing, Test Construction, Educational Technology
Lai, Ah-Fur; Chen, Deng-Jyi; Chen, Shu-Ling – Journal of Educational Multimedia and Hypermedia, 2008
IRT (Item Response Theory) has been studied and applied in computer-based testing for decades. However, almost all existing studies focus exclusively on test questions presented in text-based (or static text/graphic) form. In this paper, we present our study on test questions using both…
Descriptors: Elementary School Students, Semantics, Difficulty Level, Item Response Theory
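The entry above applies Item Response Theory to computer-based test questions. The standard three-parameter logistic (3PL) item response function, which is background to any IRT study rather than this paper's specific contribution, can be sketched as:

```python
import math

def p_correct_3pl(theta, a, b, c):
    """3PL IRT model: probability that an examinee with ability theta
    answers an item correctly.
    a = discrimination, b = difficulty, c = guessing (lower asymptote)."""
    return c + (1.0 - c) / (1.0 + math.exp(-a * (theta - b)))

# When ability equals item difficulty and there is no guessing,
# the probability of a correct response is exactly 0.5.
print(p_correct_3pl(theta=0.0, a=1.0, b=0.0, c=0.0))  # 0.5
```

Setting c = 0 recovers the 2PL model, and additionally fixing a = 1 recovers the 1PL (Rasch) model, so this one function covers the common IRT variants.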
Federico, Pat-Anthony; Liggett, Nina L. – 1989
Seventy-five subjects (Naval F-14 and E-2C crew members) were administered computer-based and paper-based tests of threat-parameter knowledge represented as a semantic network in order to determine the relative reliabilities and validities of these two assessment modes. Estimates of internal consistencies, equivalences, and discriminant validities…
Descriptors: Comparative Analysis, Computer Assisted Testing, Knowledge Level, Military Personnel