ERIC Number: ED630037
Record Type: Non-Journal
Publication Date: 2023-Jul-07
Pages: 19
Abstractor: As Provided
ISBN: N/A
ISSN: N/A
EISSN: N/A
Available Date: N/A
Generating Multiple Choice Questions from a Textbook: LLMs Match Human Performance on Most Metrics
Grantee Submission, Paper presented at AIEDLLM1: Empowering Education with LLMs (Tokyo, Japan, Jul 7, 2023).
Multiple choice questions are traditionally expensive to produce. Recent advances in large language models (LLMs) have led to fine-tuned LLMs that generate questions competitive with human-authored questions. However, the relative capabilities of ChatGPT-family models have not yet been established for this task. We present a carefully controlled human evaluation of three conditions: a fine-tuned, augmented version of Macaw; instruction-tuned Bing Chat with zero-shot prompting; and human-authored questions from a college science textbook. Our results indicate that on six of the seven measures tested, neither LLM's performance was significantly different from human performance. Analysis of LLM errors further suggests that Macaw and Bing Chat have different failure modes for this task: Macaw tends to repeat answer options, whereas Bing Chat tends not to include the specified answer among the answer options. For Macaw, removing error items from the analysis results in performance on par with humans for all metrics; for Bing Chat, removing error items improves performance but does not reach human-level performance. [This paper was published in the "CEUR Workshop Proceedings," 2023.]
Publication Type: Speeches/Meeting Papers; Reports - Research
Education Level: Higher Education; Postsecondary Education
Audience: N/A
Language: English
Sponsor: Institute of Education Sciences (ED); National Science Foundation (NSF)
Authoring Institution: N/A
IES Funded: Yes
Grant or Contract Numbers: R305A190448; 1918751; 1934745
Author Affiliations: N/A