Publication Date
In 2025 | 0
Since 2024 | 3
Since 2021 (last 5 years) | 5
Since 2016 (last 10 years) | 5
Since 2006 (last 20 years) | 5
Descriptor
Algorithms | 5
Test Format | 5
Student Evaluation | 3
Automation | 2
Comparative Testing | 2
Item Response Theory | 2
Multiple Choice Tests | 2
Test Items | 2
Test Reliability | 2
Ability Identification | 1
Adaptive Testing | 1
Source
Journal of Educational and Behavioral Statistics | 2
Education and Information Technologies | 1
International Journal of Artificial Intelligence in Education | 1
Journal of Biological Education | 1
Author
Chiu, Chia-Yi | 1
Daniela S.M. Pereira | 1
Filipe Manuel Vidal Falcão | 1
José Miguel Pêgo | 1
Köhn, Hans Friedrich | 1
Larissa Kirschner | 1
Lo, Stanley M. | 1
Luping Niu | 1
Patrício Costa | 1
Seung W. Choi | 1
Sung, Rou-Jia | 1
Publication Type
Journal Articles | 5
Reports - Research | 5
Education Level
Higher Education | 2
Postsecondary Education | 2
Location
Portugal | 1
Ulrike Padó; Yunus Eryilmaz; Larissa Kirschner – International Journal of Artificial Intelligence in Education, 2024
Short-Answer Grading (SAG) is a time-consuming task for teachers that automated SAG models have long promised to make easier. However, there are three challenges to their broad-scale adoption: a technical challenge regarding the need for high-quality models, which is exacerbated for languages with fewer resources than English; a usability…
Descriptors: Grading, Automation, Test Format, Computer Assisted Testing
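To make the automated SAG idea in the entry above concrete, the sketch below shows one common baseline approach: grading a free-text answer by embedding similarity to a reference answer. This is an illustration only, not the model evaluated in the article; it assumes the open-source sentence-transformers library, and the multilingual model name, German example, and acceptance threshold are arbitrary choices echoing the article's point about languages with fewer resources than English.

```python
# Illustrative similarity-based short-answer grading baseline (not the paper's model).
# Assumes sentence-transformers is installed; model name and threshold are arbitrary.
from sentence_transformers import SentenceTransformer, util

model = SentenceTransformer("paraphrase-multilingual-MiniLM-L12-v2")

def grade_short_answer(student_answer: str, reference_answer: str,
                       threshold: float = 0.7) -> tuple[float, bool]:
    """Score a student answer by cosine similarity to a reference answer."""
    embeddings = model.encode([student_answer, reference_answer])
    similarity = float(util.cos_sim(embeddings[0], embeddings[1]))
    return similarity, similarity >= threshold

# Hypothetical German-language item, reflecting the lower-resource-language concern.
score, accepted = grade_short_answer(
    "Die Zellmembran kontrolliert, welche Stoffe in die Zelle gelangen.",
    "Die Membran reguliert den Stofftransport in die Zelle hinein und hinaus.",
)
print(f"similarity={score:.2f}, accepted={accepted}")
```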
Filipe Manuel Vidal Falcão; Daniela S.M. Pereira; José Miguel Pêgo; Patrício Costa – Education and Information Technologies, 2024
Progress tests (PTs) are a popular type of longitudinal assessment used for evaluating clinical knowledge retention and lifelong learning in health professions education. Most PTs consist of multiple-choice questions (MCQs) whose development is costly and time-consuming. Automatic Item Generation (AIG) generates test items through algorithms,…
Descriptors: Automation, Test Items, Progress Monitoring, Medical Education
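Since the abstract above describes AIG only at a high level, here is a minimal sketch of the common template-based flavour of the technique, in which an item model is expanded over variable lists into many MCQ variants. It is not the authors' generator; the stem template, variables, options, and answer key are invented for illustration.

```python
# Minimal template-based Automatic Item Generation sketch (illustrative only).
import itertools
import random

# Hypothetical item model (stem template) and its variable lists.
TEMPLATE = ("A patient presents with {symptom}. Which initial investigation is "
            "most appropriate to evaluate suspected {condition}?")
VARIABLES = {
    "symptom": ["acute chest pain", "sudden-onset dyspnoea"],
    "condition": ["myocardial infarction", "pulmonary embolism"],
}

# Illustrative answer key and shared distractor pool.
ANSWER_KEY = {
    "myocardial infarction": "12-lead ECG",
    "pulmonary embolism": "CT pulmonary angiography",
}
DISTRACTORS = ["abdominal ultrasound", "lumbar puncture", "spirometry"]

def generate_items():
    """Expand the template over all variable combinations into concrete MCQs."""
    for symptom, condition in itertools.product(VARIABLES["symptom"],
                                                VARIABLES["condition"]):
        stem = TEMPLATE.format(symptom=symptom, condition=condition)
        options = [ANSWER_KEY[condition], *random.sample(DISTRACTORS, 3)]
        random.shuffle(options)
        yield {"stem": stem, "options": options, "key": ANSWER_KEY[condition]}

for item in generate_items():
    print(item["stem"], item["options"], sep="\n  ")
```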
Wim J. van der Linden; Luping Niu; Seung W. Choi – Journal of Educational and Behavioral Statistics, 2024
A test battery with two different levels of adaptation is presented: a within-subtest level for the selection of the items in the subtests and a between-subtest level to move from one subtest to the next. The battery runs on a two-level model consisting of a regular response model for each of the subtests extended with a second level for the joint…
Descriptors: Adaptive Testing, Test Construction, Test Format, Test Reliability
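Read generically, a two-level battery of this kind can be sketched as a regular response model within each subtest plus a joint distribution that links the subtest abilities. The 2PL form and multivariate normal second level below are assumptions made for illustration, not the authors' exact specification.

```latex
% Generic two-level sketch: within-subtest 2PL response model for examinee j,
% with the subtest abilities linked by a joint second-level distribution.
\begin{align}
  \Pr(U_{ijs} = 1 \mid \theta_{js})
    &= \frac{\exp\{a_{is}(\theta_{js} - b_{is})\}}
            {1 + \exp\{a_{is}(\theta_{js} - b_{is})\}}
    && \text{(level 1: item $i$ in subtest $s$)} \\
  (\theta_{j1},\dots,\theta_{jS})^{\top}
    &\sim \mathcal{N}(\boldsymbol{\mu}, \boldsymbol{\Sigma})
    && \text{(level 2: joint distribution across subtests)}
\end{align}
```

Under this reading, the second-level covariance is what would let responses from completed subtests sharpen the starting ability estimate for the next subtest, corresponding to the between-subtest level of adaptation described in the abstract.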
Wang, Yu; Chiu, Chia-Yi; Köhn, Hans Friedrich – Journal of Educational and Behavioral Statistics, 2023
The multiple-choice (MC) item format has been widely used in educational assessments across diverse content domains. MC items purportedly allow for collecting richer diagnostic information. The effectiveness and economy of administering MC items may have further contributed to their popularity beyond educational assessment. The MC item format…
Descriptors: Multiple Choice Tests, Nonparametric Statistics, Test Format, Educational Assessment
Sung, Rou-Jia; Swarat, Su L.; Lo, Stanley M. – Journal of Biological Education, 2022
Exams constitute the predominant form of summative assessment in undergraduate biology education, with the assumption that exam performance should reflect student conceptual understanding. Previous work highlights multiple examples in which students can answer exam problems correctly without the corresponding conceptual understanding. This…
Descriptors: Biology, Problem Solving, Undergraduate Students, Scientific Concepts