Sharma, Harsh; Mathur, Rohan; Chintala, Tejas; Dhanalakshmi, Samiappan; Senthil, Ramalingam – Education and Information Technologies, 2023
Examination assessments undertaken by educational institutions are pivotal, since they are a fundamental step in determining students' understanding and achievement in a given subject or course. Questions must be framed on the relevant topics to meet the learning objectives and assess the student's capability in a particular subject. The…
Descriptors: Taxonomy, Student Evaluation, Test Items, Questioning Techniques
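The abstract above breaks off, but its descriptors (Taxonomy, Test Items, Questioning Techniques) point at taxonomy-aligned question framing. Purely as an illustration of that idea, and not the authors' actual method, here is a minimal Python sketch that assigns an exam question to a Bloom's-taxonomy level by matching cue verbs; the cue lists and the fallback label are invented for the example.

# Minimal sketch: map an exam question to a Bloom's taxonomy level by
# cue-verb matching. Cue lists and the "unclassified" fallback are
# illustrative assumptions, not the paper's method.
BLOOM_CUES = {
    "remember":   {"define", "list", "name", "recall", "state"},
    "understand": {"explain", "summarize", "describe", "classify"},
    "apply":      {"solve", "use", "demonstrate", "compute"},
    "analyze":    {"compare", "contrast", "differentiate", "examine"},
    "evaluate":   {"justify", "critique", "assess", "argue"},
    "create":     {"design", "construct", "formulate", "propose"},
}

def classify_question(question: str) -> str:
    """Return the Bloom level whose cue verbs best match the question."""
    words = {w.strip(".,?!:;").lower() for w in question.split()}
    scores = {level: len(words & cues) for level, cues in BLOOM_CUES.items()}
    best = max(scores, key=scores.get)
    return best if scores[best] > 0 else "unclassified"

print(classify_question("Compare and contrast supervised and unsupervised learning."))
# -> analyze

A real system would need part-of-speech tagging and a trained classifier; the point here is only that verb cues are the usual first-pass signal for taxonomy levels.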
Semere Kiros Bitew; Amir Hadifar; Lucas Sterckx; Johannes Deleu; Chris Develder; Thomas Demeester – IEEE Transactions on Learning Technologies, 2024
Multiple-choice questions (MCQs) are widely used in digital learning systems, as they allow for automating the assessment process. However, owing to the increased digital literacy of students and the advent of social media platforms, MCQ tests are widely shared online, and teachers are continuously challenged to create new questions, which is an…
Descriptors: Multiple Choice Tests, Computer Assisted Testing, Test Construction, Test Items
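The entry above concerns the constant demand for fresh MCQs. One common building block in this literature is reusing material from an existing question bank; the sketch below, a toy assumption rather than the system Bitew et al. describe, retrieves distractor candidates for a new stem by lexical similarity to banked stems.

import re

def tokens(s: str) -> set:
    return set(re.findall(r"[a-z']+", s.lower()))

def jaccard(a: set, b: set) -> float:
    return len(a & b) / len(a | b) if a | b else 0.0

# Toy question bank; stems and distractors are invented examples.
question_bank = [
    {"stem": "Which organ pumps blood through the body?",
     "distractors": ["liver", "lung", "kidney"]},
    {"stem": "Which organ filters waste from the blood?",
     "distractors": ["heart", "stomach", "spleen"]},
]

def suggest_distractors(new_stem: str, bank: list, k: int = 1) -> list:
    """Pool the distractors of the k stems most similar to new_stem."""
    t = tokens(new_stem)
    ranked = sorted(bank, key=lambda q: jaccard(t, tokens(q["stem"])), reverse=True)
    return [d for q in ranked[:k] for d in q["distractors"]]

print(suggest_distractors("Which organ pumps oxygenated blood?", question_bank))
# -> ['liver', 'lung', 'kidney']

In practice, retrieval would use learned embeddings rather than Jaccard overlap, but the retrieve-then-reuse shape is the same.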
Qiao, Chen; Hu, Xiao – IEEE Transactions on Learning Technologies, 2023
Free text answers to short questions can reflect students' mastery of concepts and their relationships relevant to learning objectives. However, automating the assessment of free text answers has been challenging due to the complexity of natural language. Existing studies often predict the scores of free text answers in a "black box"…
Descriptors: Computer Assisted Testing, Automation, Test Items, Semantics
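Qiao and Hu's abstract contrasts "black box" score prediction with approaches that expose concepts and their relationships. As a hedged illustration of the interpretable alternative, not their model, the sketch below scores a short answer against a rubric of expected concepts and concept pairs and reports exactly which ones were matched; the rubric is invented.

import re

# Invented rubric: concepts the answer should mention, plus concept pairs
# standing in for the relations between them.
rubric = {
    "concepts": {"photosynthesis", "chlorophyll", "sunlight", "glucose"},
    "relations": [("sunlight", "glucose"), ("chlorophyll", "sunlight")],
}

def score_answer(answer: str, rubric: dict) -> dict:
    words = set(re.findall(r"[a-z']+", answer.lower()))
    hit_concepts = rubric["concepts"] & words
    hit_relations = [r for r in rubric["relations"] if set(r) <= words]
    total = len(rubric["concepts"]) + len(rubric["relations"])
    score = (len(hit_concepts) + len(hit_relations)) / total
    # Returning the matched pieces keeps the score inspectable.
    return {"score": round(score, 2),
            "concepts_found": sorted(hit_concepts),
            "relations_found": hit_relations}

print(score_answer("Chlorophyll absorbs sunlight to make glucose.", rubric))
# -> {'score': 0.83, 'concepts_found': [...], 'relations_found': [...]}

Word-set matching is far too crude for real grading (it ignores negation and paraphrase); the point is only the interpretable output shape.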
Baldwin, Peter; Yaneva, Victoria; Mee, Janet; Clauser, Brian E.; Ha, Le An – Journal of Educational Measurement, 2021
In this article, it is shown how item text can be represented by (a) 113 features quantifying the text's linguistic characteristics, (b) 16 measures of the extent to which an information-retrieval-based automatic question-answering system finds an item challenging, and (c) dense word representations (word embeddings). Using a random…
Descriptors: Natural Language Processing, Prediction, Item Response Theory, Reaction Time
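The Baldwin et al. abstract is concrete about representation: 113 linguistic features, 16 question-answering-system measures, and word embeddings feed a model (the text is cut off at "Using a random…", so the learner is not named here). A minimal sketch of the first ingredient, using five toy surface features in place of their 113, might look like this; the feature set is an assumption for illustration only.

import re

def item_features(text: str) -> list:
    """Map item text to a small numeric feature vector for a downstream regressor."""
    words = re.findall(r"[A-Za-z']+", text)
    sentences = [s for s in re.split(r"[.!?]+", text) if s.strip()]
    n_words = len(words) or 1
    return [
        len(words),                                    # item length in words
        sum(len(w) for w in words) / n_words,          # mean word length
        len({w.lower() for w in words}) / n_words,     # type-token ratio
        len(sentences),                                # sentence count
        sum(w[0].isupper() for w in words),            # capitalized tokens
    ]

print(item_features("Which enzyme catalyzes the hydrolysis of starch? Explain briefly."))

Each item's vector would then be paired with an observed difficulty or response-time label and handed to a regression model.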
Cole, Brian S.; Lima-Walton, Elia; Brunnert, Kim; Vesey, Winona Burt; Raha, Kaushik – Journal of Applied Testing Technology, 2020
Automatic item generation can rapidly generate large volumes of exam items, but this creates challenges for the assembly of exams that aim to include syntactically diverse items. First, we demonstrate a diminishing marginal syntactic return for automatic item generation using a saturation detection approach. This analysis can help users of automatic…
Descriptors: Artificial Intelligence, Automation, Test Construction, Test Items
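Cole et al.'s abstract names a concrete mechanism: saturation detection over the syntactic diversity of generated items. A hedged sketch of that idea follows; the "syntactic signature" here (length bucket plus punctuation shape) is a crude stand-in, since the abstract does not specify the paper's actual syntactic representation.

def signature(item: str) -> tuple:
    # Coarse proxy for syntax: sentence-length bucket plus punctuation pattern.
    punct = "".join(c for c in item if c in ",;:?()")
    return (len(item.split()) // 5, punct)

def saturated(items: list, window: int = 3) -> bool:
    """True once `window` consecutive items add no new signature."""
    seen, streak = set(), 0
    for item in items:
        sig = signature(item)
        streak = streak + 1 if sig in seen else 0
        seen.add(sig)
        if streak >= window:
            return True
    return False

bank = ["What is X?", "What is Y?", "What is Z?", "What is W?", "What is Q?"]
print(saturated(bank))  # -> True: one template adds no new syntactic shapes

Diminishing marginal return falls out naturally: once most new items map to already-seen signatures, further generation adds volume but not syntactic diversity.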

Sami Baral; Li Lucy; Ryan Knight; Alice Ng; Luca Soldaini; Neil T. Heffernan; Kyle Lo – Grantee Submission, 2024
In real-world settings, vision language models (VLMs) should robustly handle naturalistic, noisy visual content as well as domain-specific language and concepts. For example, K-12 educators using digital learning platforms may need to examine and provide feedback across many images of students' math work. To assess the potential of VLMs to support…
Descriptors: Visual Learning, Visual Perception, Natural Language Processing, Freehand Drawing
Gutl, Christian; Lankmayr, Klaus; Weinhofer, Joachim; Hofler, Margit – Electronic Journal of e-Learning, 2011
Research in the automated creation of test items for assessment purposes has become increasingly important in recent years. Automatic question creation makes it possible to support personalized and self-directed learning activities by preparing appropriate, individualized test items easily, with relatively little effort, or even fully…
Descriptors: Test Items, Semantics, Multilingualism, Language Processing
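Gutl et al. describe fully or semi-automatic item creation from source material. The classic minimal form of this is gap-fill (cloze) generation; the sketch below blanks out one content word of a sentence to produce a stem and key. The stopword list and longest-word heuristic are illustrative assumptions, not the system the paper describes.

STOPWORDS = {"the", "a", "an", "of", "in", "to", "and", "is", "are", "by", "most"}

def make_cloze(sentence: str) -> tuple:
    """Blank one content word of a sentence, returning (stem, answer key)."""
    words = sentence.rstrip(".").split()
    # Heuristic: the longest non-stopword is treated as the key concept.
    key = max((w for w in words if w.lower() not in STOPWORDS), key=len)
    stem = " ".join("_____" if w == key else w for w in words)
    return stem + ".", key

stem, answer = make_cloze("Mitochondria generate most of the cell's chemical energy.")
print(stem)    # -> _____ generate most of the cell's chemical energy.
print(answer)  # -> Mitochondria

Production systems select the key term with term-frequency or semantic salience rather than word length, and typically generate distractors for the blank as well.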
Jordan, Sally; Mitchell, Tom – British Journal of Educational Technology, 2009
A natural language based system has been used to author and mark short-answer free-text assessment tasks. Students attempt the questions online and are given tailored and relatively detailed feedback on incorrect and incomplete responses, and have the opportunity to repeat the task immediately so as to learn from the feedback provided. The answer…
Descriptors: Feedback (Response), Test Items, Natural Language Processing, Teaching Methods
Burstein, Jill C.; Kaplan, Randy M. – 1995
There is considerable interest at Educational Testing Service (ETS) in including performance-based, natural language constructed-response items on standardized tests. Such items can be developed, but the projected time and costs required to have them scored by human graders would be prohibitive. In order for ETS to include these types of…
Descriptors: Computer Assisted Testing, Constructed Response, Cost Effectiveness, Hypothesis Testing