Publication Date
In 2025: 0
Since 2024: 0
Since 2021 (last 5 years): 2
Since 2016 (last 10 years): 2
Since 2006 (last 20 years): 2
Descriptor
Artificial Intelligence: 2
Models: 2
Natural Language Processing: 2
Science Tests: 2
Test Items: 2
Algorithms: 1
Anatomy: 1
Coding: 1
College Science: 1
Elementary School Students: 1
Energy: 1
Author
Bohm, Isabell: 1
Di Mitri, Daniele: 1
Drachsler, Hendrik: 1
Gombert, Sebastian: 1
Grimm, Adrian: 1
Karademir, Onur: 1
Kolbe, Hannah: 1
Kubsch, Marcus: 1
Neumann, Knut: 1
Olney, Andrew M.: 1
Tautz, Simon: 1
Publication Type
Reports - Research: 2
Journal Articles: 1
Speeches/Meeting Papers: 1
Location
Germany: 1
Andrew M. Olney – Grantee Submission, 2023
Multiple choice questions are traditionally expensive to produce. Recent advances in large language models (LLMs) have led to fine-tuned LLMs that generate questions competitive with human-authored ones. However, the relative capabilities of ChatGPT-family models have not yet been established for this task. We present a carefully controlled…
Descriptors: Test Construction, Multiple Choice Tests, Test Items, Algorithms
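As a loose illustration of the question-generation setup this abstract describes, the sketch below prompts a ChatGPT-family model for one multiple choice item. It assumes the openai Python package (version 1.0 or later) and an OPENAI_API_KEY in the environment; the model name, prompt wording, and output format are illustrative guesses, not the paper's actual protocol.

```python
# Minimal sketch of LLM-based multiple choice question generation.
# Assumptions (not from the paper): openai >= 1.0, gpt-4o-mini as the
# ChatGPT-family model, and this particular prompt/output format.
from openai import OpenAI

PROMPT = (
    "Read the passage and write one multiple choice question about it. "
    "Give a question stem, four options labeled A-D, and a final line "
    "'Answer: <letter>'.\n\nPassage:\n{passage}"
)

def generate_mcq(passage: str, model: str = "gpt-4o-mini") -> str:
    """Ask a ChatGPT-family model to draft one question for the passage."""
    client = OpenAI()  # reads OPENAI_API_KEY from the environment
    response = client.chat.completions.create(
        model=model,
        messages=[{"role": "user", "content": PROMPT.format(passage=passage)}],
    )
    return response.choices[0].message.content

if __name__ == "__main__":
    print(generate_mcq("Mitochondria are the site of cellular respiration."))
```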
Gombert, Sebastian; Di Mitri, Daniele; Karademir, Onur; Kubsch, Marcus; Kolbe, Hannah; Tautz, Simon; Grimm, Adrian; Bohm, Isabell; Neumann, Knut; Drachsler, Hendrik – Journal of Computer Assisted Learning, 2023
Background: Formative assessments are needed to monitor how student knowledge develops throughout a unit. Constructed response items, which require learners to formulate their own free-text responses, are well suited to testing their active knowledge. However, assessing such constructed responses in an automated fashion is a complex task…
Descriptors: Coding, Energy, Scientific Concepts, Formative Evaluation
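The abstract flags automated assessment of constructed responses as the hard step; the authors train NLP models to code responses. As a hedged stand-in for that pipeline, the sketch below scores a free-text answer against a reference answer by sentence-embedding similarity. It assumes the sentence-transformers package; the model name and the 0.6 threshold are illustrative choices, not the authors' method.

```python
# Similarity-based scoring of a constructed (free-text) response.
# This is a generic stand-in: the paper's actual approach uses trained
# coding models, which this sketch does not reproduce.
from sentence_transformers import SentenceTransformer, util

model = SentenceTransformer("all-MiniLM-L6-v2")  # assumed model choice

def score_response(student_answer: str, reference_answer: str) -> float:
    """Cosine similarity between the two answers, roughly in [-1, 1]."""
    student_vec, reference_vec = model.encode([student_answer, reference_answer])
    return float(util.cos_sim(student_vec, reference_vec))

similarity = score_response(
    "Energy is conserved; it only changes form.",
    "Energy cannot be created or destroyed, only transformed.",
)
# 0.6 is an arbitrary demo threshold, not a validated cut score.
print(round(similarity, 2), "correct" if similarity > 0.6 else "needs review")
```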