Publication Date
In 2025: 1
Since 2024: 1
Since 2021 (last 5 years): 5
Since 2016 (last 10 years): 6
Since 2006 (last 20 years): 10
Author
Andrew M. Olney: 1
Ayaka Sugawara: 1
Binglin Chen: 1
Calders, Toon: 1
Conati, Cristina: 1
Futagi, Yoko: 1
Gierl, Mark J.: 1
Gütl, Christian: 1
Hemat, Ramin: 1
Höfler, Margit: 1
Huang, Anna Y. Q.: 1
Publication Type
Journal Articles: 7
Reports - Research: 6
Reports - Evaluative: 2
Collected Works - Proceedings: 1
Dissertations/Theses -…: 1
Numerical/Quantitative Data: 1
Speeches/Meeting Papers: 1
Tests/Questionnaires: 1
Education Level
Higher Education: 10
Postsecondary Education: 7
Location
Australia: 1
Japan: 1
Netherlands: 1
South Korea: 1
Assessments and Surveys
Graduate Record Examinations: 1
Andrew M. Olney – Grantee Submission, 2023
Multiple choice questions are traditionally expensive to produce. Recent advances in large language models (LLMs) have led to fine-tuned LLMs that generate questions competitive with human-authored questions. However, the relative capabilities of ChatGPT-family models have not yet been established for this task. We present a carefully-controlled…
Descriptors: Test Construction, Multiple Choice Tests, Test Items, Algorithms
Binglin Chen – ProQuest LLC, 2022
Assessment is a key component of education. Routine grading of students' work, however, is time consuming. Automating the grading process allows instructors to spend more of their time helping their students learn and engaging their students with more open-ended, creative activities. One way to automate grading is through computer-based…
Descriptors: College Students, STEM Education, Student Evaluation, Grading
Shin, Jinnie; Gierl, Mark J. – International Journal of Testing, 2022
Over the last five years, tremendous strides have been made in advancing the AIG methodology required to produce items in diverse content areas. However, the one content area where enormous problems remain unsolved is language arts, generally, and reading comprehension, more specifically. While reading comprehension test items can be created using…
Descriptors: Reading Comprehension, Test Construction, Test Items, Natural Language Processing
Rao, Dhawaleswar; Saha, Sujan Kumar – IEEE Transactions on Learning Technologies, 2020
Automatic multiple choice question (MCQ) generation from a text is a popular research area. MCQs are widely accepted for large-scale assessment in various domains and applications. However, manual generation of MCQs is expensive and time-consuming. Therefore, researchers have been attracted toward automatic MCQ generation since the late 1990s.…
Descriptors: Multiple Choice Tests, Test Construction, Automation, Computer Software
Lu, Owen H. T.; Huang, Anna Y. Q.; Tsai, Danny C. L.; Yang, Stephen J. H. – Educational Technology & Society, 2021
Human-guided machine learning can improve computing intelligence, and it can accurately assist humans in various tasks. In education research, artificial intelligence (AI) is applicable in many situations, such as predicting students' learning paths and strategies. In this study, we explore the benefits of repetitive practice of short-answer…
Descriptors: Test Items, Artificial Intelligence, Test Construction, Student Evaluation
Qiao Wang; Ralph L. Rose; Ayaka Sugawara; Naho Orita – Vocabulary Learning and Instruction, 2025
VocQGen is an automated tool designed to generate multiple-choice cloze (MCC) questions for vocabulary assessment in second language learning contexts. It leverages several natural language processing (NLP) tools and OpenAI's GPT-4 model to produce MCC items quickly from user-specified word lists. To evaluate its effectiveness, we used the first…
Descriptors: Vocabulary Skills, Artificial Intelligence, Computer Software, Multiple Choice Tests
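The VocQGen entry above describes a pipeline that turns a user-specified word list into multiple-choice cloze (MCC) items using NLP tools and OpenAI's GPT-4. The snippet below is a minimal sketch of that general kind of workflow, not the VocQGen implementation itself; the prompt wording, the sample word list, and the generate_mcc_item helper are illustrative assumptions.

```python
# Illustrative sketch only: one possible way to draft MCC items from a word
# list with GPT-4, not the VocQGen tool described in the entry above.
from openai import OpenAI

client = OpenAI()  # assumes OPENAI_API_KEY is set in the environment

PROMPT_TEMPLATE = (
    "Write one multiple-choice cloze question testing the word '{word}'.\n"
    "Return a sentence with the target word replaced by a blank, the correct "
    "answer, and three plausible distractors of the same part of speech."
)

def generate_mcc_item(word: str) -> str:
    """Ask GPT-4 for a single multiple-choice cloze item for one target word."""
    response = client.chat.completions.create(
        model="gpt-4",
        messages=[{"role": "user", "content": PROMPT_TEMPLATE.format(word=word)}],
    )
    return response.choices[0].message.content

if __name__ == "__main__":
    for target in ["abundant", "negotiate", "fragile"]:  # hypothetical word list
        print(generate_mcc_item(target))
```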
Gütl, Christian; Lankmayr, Klaus; Weinhofer, Joachim; Höfler, Margit – Electronic Journal of e-Learning, 2011
Research in the automated creation of test items for assessment purposes has become increasingly important in recent years. Automatic question creation makes it possible to support personalized and self-directed learning activities by preparing appropriate and individualized test items quite easily, with relatively little effort or even fully…
Descriptors: Test Items, Semantics, Multilingualism, Language Processing
Jordan, Sally; Mitchell, Tom – British Journal of Educational Technology, 2009
A natural language based system has been used to author and mark short-answer free-text assessment tasks. Students attempt the questions online and are given tailored and relatively detailed feedback on incorrect and incomplete responses, and have the opportunity to repeat the task immediately so as to learn from the feedback provided. The answer…
Descriptors: Feedback (Response), Test Items, Natural Language Processing, Teaching Methods
Sheehan, Kathleen M.; Kostin, Irene; Futagi, Yoko; Hemat, Ramin; Zuckerman, Daniel – ETS Research Report Series, 2006
This paper describes the development, implementation, and evaluation of an automated system for predicting the acceptability status of candidate reading-comprehension stimuli extracted from a database of journal and magazine articles. The system uses a combination of classification and regression techniques to predict the probability that a given…
Descriptors: Automation, Prediction, Reading Comprehension, Classification
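The ETS report above describes combining classification and regression techniques to predict the probability that a candidate reading-comprehension passage is acceptable. The sketch below shows only the general idea of scoring passages with a probabilistic text classifier; the features, training examples, and model choice are placeholders, not the system the report evaluates.

```python
# Illustrative sketch only: a minimal acceptability classifier for candidate
# passages, not the ETS system described in the entry above.
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.linear_model import LogisticRegression
from sklearn.pipeline import make_pipeline

# Placeholder training data: passages labeled 1 (acceptable) or 0 (rejected).
passages = [
    "The migration patterns of monarch butterflies span several generations.",
    "Click here to subscribe and win amazing prizes today!!!",
]
labels = [1, 0]

# Bag-of-words features plus logistic regression give a probability estimate.
model = make_pipeline(TfidfVectorizer(), LogisticRegression())
model.fit(passages, labels)

candidate = "Volcanic soils support unusually diverse plant communities."
probability_acceptable = model.predict_proba([candidate])[0][1]
print(f"P(acceptable) = {probability_acceptable:.2f}")
```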
Pechenizkiy, Mykola; Calders, Toon; Conati, Cristina; Ventura, Sebastian; Romero, Cristobal; Stamper, John – International Working Group on Educational Data Mining, 2011
The 4th International Conference on Educational Data Mining (EDM 2011) brings together researchers from computer science, education, psychology, psychometrics, and statistics to analyze large datasets to answer educational research questions. The conference, held in Eindhoven, The Netherlands, July 6-9, 2011, follows the three previous editions…
Descriptors: Academic Achievement, Logical Thinking, Profiles, Tutoring