Po-Chun Huang; Ying-Hong Chan; Ching-Yu Yang; Hung-Yuan Chen; Yao-Chung Fan – IEEE Transactions on Learning Technologies, 2024
The question generation (QG) task plays a crucial role in adaptive learning. While significant advances in QG performance have been reported, existing QG studies remain far from practical use. One point that needs strengthening is the generation of question groups, which remains unaddressed. For forming a question group, intrafactors…
Descriptors: Automation, Test Items, Computer Assisted Testing, Test Construction
A Method for Generating Course Test Questions Based on Natural Language Processing and Deep Learning
Hei-Chia Wang; Yu-Hung Chiang; I-Fan Chen – Education and Information Technologies, 2024
Assessment is viewed as an important means of understanding learners' performance in the learning process. A good assessment method is based on high-quality examination questions. However, manually generating high-quality examination questions is time-consuming for teachers, and it is not easy for students to obtain question banks. To solve…
Descriptors: Natural Language Processing, Test Construction, Test Items, Models
Samah AlKhuzaey; Floriana Grasso; Terry R. Payne; Valentina Tamma – International Journal of Artificial Intelligence in Education, 2024
Designing and constructing pedagogical tests that contain items (i.e. questions) which measure various types of skills for different levels of students equitably is a challenging task. Teachers and item writers alike need to ensure that the quality of assessment materials is consistent, if student evaluations are to be objective and effective.…
Descriptors: Test Items, Test Construction, Difficulty Level, Prediction
Sharma, Harsh; Mathur, Rohan; Chintala, Tejas; Dhanalakshmi, Samiappan; Senthil, Ramalingam – Education and Information Technologies, 2023
Examination assessments undertaken by educational institutions are pivotal since it is one of the fundamental steps to determining students' understanding and achievements for a distinct subject or course. Questions must be framed on the topics to meet the learning objectives and assess the student's capability in a particular subject. The…
Descriptors: Taxonomy, Student Evaluation, Test Items, Questioning Techniques
Semere Kiros Bitew; Amir Hadifar; Lucas Sterckx; Johannes Deleu; Chris Develder; Thomas Demeester – IEEE Transactions on Learning Technologies, 2024
Multiple-choice questions (MCQs) are widely used in digital learning systems, as they allow for automating the assessment process. However, owing to the increased digital literacy of students and the advent of social media platforms, MCQ tests are widely shared online, and teachers are continuously challenged to create new questions, which is an…
Descriptors: Multiple Choice Tests, Computer Assisted Testing, Test Construction, Test Items
Qiao, Chen; Hu, Xiao – IEEE Transactions on Learning Technologies, 2023
Free text answers to short questions can reflect students' mastery of concepts and their relationships relevant to learning objectives. However, automating the assessment of free text answers has been challenging due to the complexity of natural language. Existing studies often predict the scores of free text answers in a "black box"…
Descriptors: Computer Assisted Testing, Automation, Test Items, Semantics
Lae Lae Shwe; Sureena Matayong; Suntorn Witosurapot – Education and Information Technologies, 2024
Multiple Choice Questions (MCQs) are an important evaluation technique for both examinations and learning activities. However, the manual creation of questions is time-consuming and challenging for teachers. Hence, there is a notable demand for an Automatic Question Generation (AQG) system. Several systems have been created for this aim, but the…
Descriptors: Difficulty Level, Computer Assisted Testing, Adaptive Testing, Multiple Choice Tests
Baldwin, Peter; Yaneva, Victoria; Mee, Janet; Clauser, Brian E.; Ha, Le An – Journal of Educational Measurement, 2021
In this article, it is shown how item text can be represented by (a) 113 features quantifying the text's linguistic characteristics, (b) 16 measures of the extent to which an information-retrieval-based automatic question-answering system finds an item challenging, and (c) through dense word representations (word embeddings). Using a random…
Descriptors: Natural Language Processing, Prediction, Item Response Theory, Reaction Time
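As a toy illustration of representation (a) above, the sketch below computes a few surface linguistic features of an item's text. The feature names and set are illustrative assumptions, not the 113 features used in the study:

```python
def item_features(item_text):
    """Compute a few surface linguistic features of an exam item.

    Illustrative only: the study combines 113 linguistic features with
    question-answering difficulty measures and word embeddings.
    """
    # Tokenize crudely: split on whitespace, strip punctuation, lowercase.
    words = [w.strip(".,;:?!\"'").lower() for w in item_text.split()]
    words = [w for w in words if w]
    return {
        "word_count": len(words),
        "avg_word_length": sum(len(w) for w in words) / len(words),
        "type_token_ratio": len(set(words)) / len(words),
    }

features = item_features("Name the capital of France.")
```

Feature dictionaries like this would then feed a regression or tree-based model to predict item difficulty or response time.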
Becker, Kirk A.; Kao, Shu-chuan – Journal of Applied Testing Technology, 2022
Natural Language Processing (NLP) offers methods for understanding and quantifying the similarity between written documents. Within the testing industry these methods have been used for automatic item generation, automated scoring of text and speech, modeling item characteristics, automatic question answering, machine translation, and automated…
Descriptors: Item Banks, Natural Language Processing, Computer Assisted Testing, Scoring
Valentina Albano; Donatella Firmani; Luigi Laura; Jerin George Mathew; Anna Lucia Paoletti; Irene Torrente – Journal of Learning Analytics, 2023
Multiple-choice questions (MCQs) are widely used in educational assessments and professional certification exams. Managing large repositories of MCQs, however, poses several challenges due to the high volume of questions and the need to maintain their quality and relevance over time. One of these challenges is the presence of questions that…
Descriptors: Natural Language Processing, Multiple Choice Tests, Test Items, Item Analysis
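One common baseline for flagging near-duplicate questions in a large MCQ repository is cosine similarity over bag-of-words vectors. The sketch below is a generic illustration of that idea, not the specific method used in the study; the threshold value is an assumption:

```python
import math
from collections import Counter

def bag_of_words(text):
    """Term-frequency vector over crudely normalized tokens."""
    return Counter(w.strip(".,;:?!").lower() for w in text.split())

def cosine_similarity(a, b):
    """Cosine similarity between two bag-of-words Counters."""
    dot = sum(a[w] * b[w] for w in a)
    norm = (math.sqrt(sum(v * v for v in a.values()))
            * math.sqrt(sum(v * v for v in b.values())))
    return dot / norm if norm else 0.0

def find_near_duplicates(questions, threshold=0.8):
    """Return index pairs of questions more similar than `threshold`."""
    vecs = [bag_of_words(q) for q in questions]
    return [(i, j)
            for i in range(len(vecs))
            for j in range(i + 1, len(vecs))
            if cosine_similarity(vecs[i], vecs[j]) > threshold]
```

In practice, embedding-based similarity catches paraphrases that share few surface tokens, which this bag-of-words baseline misses.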
Urrutia, Felipe; Araya, Roberto – Journal of Educational Computing Research, 2024
Written answers to open-ended questions can have a higher long-term effect on learning than multiple-choice questions. However, it is critical that teachers review the answers immediately and ask students to redo those that are incoherent. This can be a difficult and time-consuming task for teachers. A possible solution is to automate the detection…
Descriptors: Elementary School Students, Grade 4, Elementary School Mathematics, Mathematics Tests
Mead, Alan D.; Zhou, Chenxuan – Journal of Applied Testing Technology, 2022
This study fit a Naïve Bayesian classifier to the words of exam items to predict the Bloom's taxonomy level of the items. We addressed five research questions, showing that reasonably good prediction of Bloom's level was possible, but accuracy varies across levels. In our study, performance for Level 2 was poor (Level 2 items were misclassified…
Descriptors: Artificial Intelligence, Prediction, Taxonomy, Natural Language Processing
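The core idea, fitting a Naïve Bayes classifier to the words of exam items to predict a label, can be sketched as below. This is a minimal multinomial Naïve Bayes with Laplace smoothing; the training items, labels, and class names are invented for illustration:

```python
import math
from collections import Counter, defaultdict

class NaiveBayesBloom:
    """Multinomial Naive Bayes over item words, with Laplace smoothing."""

    def __init__(self):
        self.class_word_counts = defaultdict(Counter)
        self.class_doc_counts = Counter()
        self.vocab = set()

    def fit(self, items, labels):
        for text, label in zip(items, labels):
            words = text.lower().split()
            self.class_word_counts[label].update(words)
            self.class_doc_counts[label] += 1
            self.vocab.update(words)
        return self

    def predict(self, text):
        words = text.lower().split()
        total_docs = sum(self.class_doc_counts.values())
        best_label, best_logp = None, float("-inf")
        for label in self.class_doc_counts:
            # Log prior plus smoothed log likelihood of each word.
            logp = math.log(self.class_doc_counts[label] / total_docs)
            counts = self.class_word_counts[label]
            denom = sum(counts.values()) + len(self.vocab)
            for w in words:
                logp += math.log((counts[w] + 1) / denom)
            if logp > best_logp:
                best_label, best_logp = label, logp
        return best_label

# Hypothetical training data: items labeled with Bloom-style levels.
clf = NaiveBayesBloom().fit(
    ["define the term osmosis",
     "list the steps of mitosis",
     "apply the formula to compute force",
     "solve the problem using the method"],
    ["remember", "remember", "apply", "apply"],
)
```

Verb cues ("define", "apply", "solve") carry most of the signal here, which matches the intuition behind word-based Bloom classification; the study's finding that accuracy varies by level suggests such cues are weaker for some levels than others.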
C. H., Dhawaleswar Rao; Saha, Sujan Kumar – IEEE Transactions on Learning Technologies, 2023
Multiple-choice questions (MCQs) play a significant role in educational assessment. Automatic MCQ generation has been an active research area for years, and many systems have been developed for MCQ generation. Still, we could not find any system that generates accurate MCQs from school-level textbook content that are useful in real examinations.…
Descriptors: Multiple Choice Tests, Computer Assisted Testing, Automation, Test Items
Cole, Brian S.; Lima-Walton, Elia; Brunnert, Kim; Vesey, Winona Burt; Raha, Kaushik – Journal of Applied Testing Technology, 2020
Automatic item generation can rapidly generate large volumes of exam items, but this creates challenges for assembly of exams which aim to include syntactically diverse items. First, we demonstrate a diminishing marginal syntactic return for automatic item generation using a saturation detection approach. This analysis can help users of automatic…
Descriptors: Artificial Intelligence, Automation, Test Construction, Test Items
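The notion of diminishing marginal syntactic return can be sketched as follows: as more items are generated, fewer of them introduce a previously unseen syntactic pattern, and generation has "saturated" once the rate of new patterns falls below a cutoff. The template function, window size, and threshold below are all simplifying assumptions, not the study's saturation-detection method:

```python
import re

def template(item):
    # Mask numbers so items differing only in numeric values share a
    # pattern -- a crude stand-in for a real syntactic representation.
    return re.sub(r"\d+", "#", item.lower())

def saturation_index(items, window=4, min_new_rate=0.5):
    """Index at which the fraction of the last `window` items that
    introduced a previously unseen template drops below `min_new_rate`,
    i.e. further generation yields diminishing syntactic novelty."""
    seen, new_flags = set(), []
    for i, item in enumerate(items):
        t = template(item)
        new_flags.append(t not in seen)
        seen.add(t)
        if i + 1 >= window:
            rate = sum(new_flags[-window:]) / window
            if rate < min_new_rate:
                return i
    return None  # novelty never dropped below the cutoff
```

A test assembler could use the items generated before the saturation index and discard the rest, since later items mostly repeat earlier syntactic patterns.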
Gombert, Sebastian; Di Mitri, Daniele; Karademir, Onur; Kubsch, Marcus; Kolbe, Hannah; Tautz, Simon; Grimm, Adrian; Bohm, Isabell; Neumann, Knut; Drachsler, Hendrik – Journal of Computer Assisted Learning, 2023
Background: Formative assessments are needed to monitor how student knowledge develops throughout a unit. Constructed response items, which require learners to formulate their own free-text responses, are well suited for testing their active knowledge. However, assessing such constructed responses in an automated fashion is a complex task…
Descriptors: Coding, Energy, Scientific Concepts, Formative Evaluation