Publication Date
| Date range | Count |
| --- | --- |
| In 2026 | 0 |
| Since 2025 | 1 |
| Since 2022 (last 5 years) | 7 |
| Since 2017 (last 10 years) | 10 |
| Since 2007 (last 20 years) | 16 |
Author
| Author | Count |
| --- | --- |
| Deane, Paul | 2 |
| Papasalouros, Andreas | 2 |
| Aldabe, Itziar | 1 |
| Badia, Toni | 1 |
| Becker, Kirk A. | 1 |
| Bejar, Isaac I. | 1 |
| Brent A. Stevenor | 1 |
| C. H., Dhawaleswar Rao | 1 |
| Charles Anyanwu | 1 |
| Chatzigiannakou, Maria | 1 |
| Denis Dumas | 1 |
Publication Type
| Type | Count |
| --- | --- |
| Journal Articles | 13 |
| Reports - Research | 12 |
| Reports - Descriptive | 3 |
| Reports - Evaluative | 3 |
| Speeches/Meeting Papers | 2 |
| Dissertations/Theses -… | 1 |
Education Level
| Level | Count |
| --- | --- |
| Secondary Education | 4 |
| Higher Education | 2 |
| Junior High Schools | 2 |
| Middle Schools | 2 |
| Elementary Education | 1 |
| Grade 5 | 1 |
| Grade 7 | 1 |
| Grade 8 | 1 |
| Postsecondary Education | 1 |
Location
| Location | Count |
| --- | --- |
| Alabama | 1 |
| Arizona | 1 |
| Arkansas | 1 |
| Australia | 1 |
| California | 1 |
| China | 1 |
| Connecticut | 1 |
| Georgia | 1 |
| Idaho | 1 |
| Illinois | 1 |
| India | 1 |
Assessments and Surveys
| Assessment | Count |
| --- | --- |
| Remote Associates Test | 1 |
Brent A. Stevenor; Nadine LeBarron McBride; Charles Anyanwu – Journal of Applied Testing Technology, 2025
Enemy items are pairs of test items that should not be presented to a candidate on the same test. Identifying enemies is essential for personnel assessment, as enemy pairs weaken the measurement precision and validity of a test. In this research, we examined the effectiveness of lexical and semantic natural language processing techniques for identifying enemy…
Descriptors: Test Items, Natural Language Processing, Occupational Tests, Test Construction
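The abstract above names lexical and semantic similarity as screening signals for enemy items. As a rough illustration of the lexical side only (not the authors' pipeline), the sketch below flags item pairs whose word-set overlap exceeds a threshold; the example items and the 0.5 cutoff are invented, and a semantic pass would typically compare embeddings rather than raw tokens.

```python
# Toy lexical screen for enemy items: flag pairs with high word overlap.
# Items and the threshold are illustrative, not from the study.
import re
from itertools import combinations

def tokens(text):
    return set(re.findall(r"[a-z']+", text.lower()))

def jaccard(a, b):
    """Word-set overlap: |A ∩ B| / |A ∪ B|."""
    ta, tb = tokens(a), tokens(b)
    return len(ta & tb) / len(ta | tb) if ta | tb else 0.0

items = {
    "Q1": "Which gas do plants absorb during photosynthesis?",
    "Q2": "During photosynthesis, which gas do plants take in?",
    "Q3": "What is the capital of France?",
}

THRESHOLD = 0.5  # illustrative cutoff for sending a pair to human review
for (id_a, text_a), (id_b, text_b) in combinations(items.items(), 2):
    score = jaccard(text_a, text_b)
    if score >= THRESHOLD:
        print(f"Possible enemy pair {id_a}/{id_b}: overlap {score:.2f}")
```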
Qiao, Chen; Hu, Xiao – IEEE Transactions on Learning Technologies, 2023
Free text answers to short questions can reflect students' mastery of concepts and their relationships relevant to learning objectives. However, automating the assessment of free text answers has been challenging due to the complexity of natural language. Existing studies often predict the scores of free text answers in a "black box"…
Descriptors: Computer Assisted Testing, Automation, Test Items, Semantics
Becker, Kirk A.; Kao, Shu-chuan – Journal of Applied Testing Technology, 2022
Natural Language Processing (NLP) offers methods for understanding and quantifying the similarity between written documents. Within the testing industry these methods have been used for automatic item generation, automated scoring of text and speech, modeling item characteristics, automatic question answering, machine translation, and automated…
Descriptors: Item Banks, Natural Language Processing, Computer Assisted Testing, Scoring
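Becker and Kao survey NLP methods for quantifying similarity between written documents in testing applications. A common generic recipe, offered here only as a sketch and not as their method, is to vectorize item texts with TF-IDF and compute pairwise cosine similarity; the three item texts below are invented.

```python
# Pairwise item-bank similarity via TF-IDF vectors and cosine similarity.
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.metrics.pairwise import cosine_similarity

bank = [
    "Identify the main idea of the passage.",
    "Which sentence best states the passage's main idea?",
    "Solve for x in the equation 2x + 3 = 11.",
]

vectors = TfidfVectorizer().fit_transform(bank)  # one row per item
similarity = cosine_similarity(vectors)          # item-by-item matrix

for i in range(len(bank)):
    for j in range(i + 1, len(bank)):
        print(f"item {i} vs item {j}: similarity {similarity[i, j]:.2f}")
```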
C. H., Dhawaleswar Rao; Saha, Sujan Kumar – IEEE Transactions on Learning Technologies, 2023
Multiple-choice questions (MCQs) play a significant role in educational assessment. Automatic MCQ generation has been an active research area for years, and many systems have been developed for MCQ generation. Still, we could not find a system that generates, from school-level textbook content, accurate MCQs that are useful in real examinations.…
Descriptors: Multiple Choice Tests, Computer Assisted Testing, Automation, Test Items
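The entry above concerns generating MCQs from textbook content. Purely to make the generic gap-fill pipeline concrete (select a sentence, blank a key term, attach distractors), here is a toy sketch; the sentence, answer, and distractor pool are hand-written, whereas real systems select them automatically with NLP.

```python
# Toy gap-fill MCQ: blank the answer term and shuffle it among distractors.
import random

sentence = "The mitochondrion is the powerhouse of the cell."
answer = "mitochondrion"
distractor_pool = ["ribosome", "nucleus", "chloroplast", "vacuole"]

stem = sentence.replace(answer, "_____")
options = random.sample(distractor_pool, 3) + [answer]
random.shuffle(options)

print(stem)
for label, option in zip("ABCD", options):
    print(f"  {label}. {option}")
print("Key:", "ABCD"[options.index(answer)])
```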
Peter Organisciak; Selcuk Acar; Denis Dumas; Kelly Berthiaume – Grantee Submission, 2023
Automated scoring for divergent thinking (DT) seeks to overcome a key obstacle to creativity measurement: the effort, cost, and reliability of scoring open-ended tests. For a common test of DT, the Alternate Uses Task (AUT), the primary automated approach casts the problem as a semantic distance between a prompt and the resulting idea in a text…
Descriptors: Automation, Computer Assisted Testing, Scoring, Creative Thinking
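Organisciak et al. describe the common automated approach to Alternate Uses Task scoring as a semantic distance between the prompt and the response. The sketch below shows that idea in its simplest form, one minus cosine similarity; the three-dimensional vectors are toy stand-ins for real word or sentence embeddings, and this is not the authors' model.

```python
# Semantic distance as 1 - cosine similarity between prompt and response vectors.
import math

def cosine(u, v):
    dot = sum(a * b for a, b in zip(u, v))
    norm = math.sqrt(sum(a * a for a in u)) * math.sqrt(sum(b * b for b in v))
    return dot / norm if norm else 0.0

def semantic_distance(prompt_vec, response_vec):
    """Greater distance is read as a more original (less typical) use."""
    return 1.0 - cosine(prompt_vec, response_vec)

# Toy vectors standing in for embeddings of the prompt and two responses.
prompt_vec  = [0.9, 0.1, 0.0]  # e.g. "brick"
common_use  = [0.8, 0.2, 0.1]  # e.g. "build a wall"
unusual_use = [0.1, 0.2, 0.9]  # e.g. "grind into pigment"

print("common use  :", round(semantic_distance(prompt_vec, common_use), 3))
print("unusual use :", round(semantic_distance(prompt_vec, unusual_use), 3))
```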
Reima Al-Jarf – Online Submission, 2024
Expressions of impossibility refer to events that can never or rarely happen, tasks that are difficult or impossible to perform, people or things that are of no use and things that are impossible to find. This study explores the similarities and differences between English and Arabic expressions of impossibility, and the difficulties that…
Descriptors: English (Second Language), Second Language Learning, Arabic, Translation
Takshak Desai – ProQuest LLC, 2021
Reading comprehension can be analyzed from three points of view: Semantics, Assessment, and Cognition. Here, Semantics refers to the task of identifying discourse relations in text. Assessment involves utilizing these relations to obtain meaningful question-answer pairs. Cognition means categorizing questions according to their difficulty or…
Descriptors: Reading Comprehension, Semantics, Questioning Techniques, Language Processing
Jonathan Trace – Language Teaching Research Quarterly, 2023
The role of context in cloze tests has long been seen as both a benefit and a complication in their usefulness as a measure of second language comprehension (Brown, 2013). Passage cohesion, in particular, would seem to have a relevant and important effect on the degree to which cloze items function and the interpretability of performances…
Descriptors: Language Tests, Cloze Procedure, Connected Discourse, Test Items
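Trace's study concerns how passage cohesion affects cloze items rather than how to build them, but for readers unfamiliar with the format, a minimal fixed-ratio cloze (every nth word deleted) looks like the sketch below; the passage and the deletion ratio of 7 are arbitrary choices for illustration.

```python
# Fixed-ratio cloze: delete every nth word and keep the answer key.
def make_cloze(passage, n=7, blank="______"):
    words = passage.split()
    answers = []
    for i in range(n - 1, len(words), n):
        answers.append(words[i])
        words[i] = blank
    return " ".join(words), answers

passage = ("Cloze tests ask readers to restore words deleted from a passage, "
           "so the surrounding context largely determines how recoverable "
           "each deleted word is for a second language reader.")
text, key = make_cloze(passage, n=7)
print(text)
print("Answers:", key)
```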
Papasalouros, Andreas; Chatzigiannakou, Maria – International Association for Development of the Information Society, 2018
Automating the production of questions for assessment and self-assessment has recently become an active field of study. The use of Semantic Web technologies has certain advantages over other methods for question generation and is thus one of the most important lines of research for this problem. The aim of this paper is to provide an overview of…
Descriptors: Computer Assisted Testing, Web 2.0 Technologies, Test Format, Multiple Choice Tests
Klein, Ariel; Badia, Toni – Journal of Creative Behavior, 2015
In this study we show how complex creative relations can arise from fairly frequent semantic relations observed in everyday language. By doing this, we reflect on some key cognitive aspects of linguistic and general creativity. In our experimentation, we automated the process of solving a battery of Remote Associates Test tasks. By applying…
Descriptors: Language Usage, Semantics, Natural Language Processing, Test Items
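Klein and Badia automate the Remote Associates Test using semantic relations observed in everyday language. As an illustration of the task itself rather than their method, the sketch below scores each candidate word by how strongly it relates to all three cue words; the association strengths are invented numbers, whereas the study derives such relations from corpus data.

```python
# Score RAT candidates by the product of their association with all three cues.
RAT_ITEM = ("cottage", "swiss", "cake")  # classic answer: "cheese"

# Hypothetical cue -> candidate association strengths (invented numbers).
assoc = {
    "cottage": {"cheese": 0.8, "house": 0.9, "industry": 0.4},
    "swiss":   {"cheese": 0.9, "army": 0.7, "alps": 0.8},
    "cake":    {"cheese": 0.7, "birthday": 0.9, "walk": 0.3},
}

def score(word):
    # A solution must relate to every cue, so multiply the strengths.
    result = 1.0
    for cue in RAT_ITEM:
        result *= assoc[cue].get(word, 0.0)
    return result

candidates = set().union(*(assoc[cue] for cue in RAT_ITEM))
best = max(candidates, key=score)
print(best, round(score(best), 3))  # -> cheese 0.504
```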
Liu, Ming; Rus, Vasile; Liu, Li – IEEE Transactions on Learning Technologies, 2018
Automatic question generation can help teachers save the time needed to construct examination papers. Several approaches have been proposed to automatically generate multiple-choice questions for vocabulary assessment or grammar exercises. However, most of these studies focused on generating questions in English with a certain similarity…
Descriptors: Multiple Choice Tests, Regression (Statistics), Test Items, Natural Language Processing
Kerr, Deirdre; Mousavi, Hamid; Iseli, Markus R. – National Center for Research on Evaluation, Standards, and Student Testing (CRESST), 2013
The Common Core assessments emphasize short essay constructed-response items over multiple-choice items because they are more precise measures of understanding. However, such items are too costly and time consuming to be used in national assessments unless a way to score them automatically can be found. Current automatic essay-scoring techniques…
Descriptors: Scoring, Automation, Essay Tests, Natural Language Processing
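Kerr, Mousavi, and Iseli discuss automatic scoring of short constructed-response items. As a deliberately crude baseline (not the CRESST technique), the sketch below scores a response by its content-word overlap with a model answer; the reference answer, responses, and stopword list are all invented for illustration.

```python
# Content-word overlap between a response and a model answer (crude baseline).
import re

STOPWORDS = {"the", "a", "an", "of", "is", "to", "in", "and", "by", "into"}

def content_words(text):
    return set(re.findall(r"[a-z]+", text.lower())) - STOPWORDS

def overlap_score(response, reference):
    ref = content_words(reference)
    return len(content_words(response) & ref) / len(ref) if ref else 0.0

reference = "Plants convert sunlight into chemical energy by photosynthesis."
responses = [
    "Photosynthesis lets plants turn sunlight into chemical energy.",
    "Plants drink water through their roots.",
]
for response in responses:
    print(round(overlap_score(response, reference), 2), "-", response)
```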
Gütl, Christian; Lankmayr, Klaus; Weinhofer, Joachim; Höfler, Margit – Electronic Journal of e-Learning, 2011
Research on the automated creation of test items for assessment purposes has become increasingly important in recent years. Automatic question creation makes it possible to support personalized and self-directed learning activities by preparing appropriate, individualized test items quite easily, with relatively little effort or even fully…
Descriptors: Test Items, Semantics, Multilingualism, Language Processing
Aldabe, Itziar; Maritxalar, Montse – IEEE Transactions on Learning Technologies, 2014
The work we present in this paper aims to help teachers create multiple-choice science tests. We focus on a scientific vocabulary-learning scenario taking place in a Basque-language educational environment. In this particular scenario, we explore the option of automatically generating Multiple-Choice Questions (MCQ) by means of Natural Language…
Descriptors: Science Tests, Test Construction, Computer Assisted Testing, Multiple Choice Tests
Deane, Paul; Lawless, René R.; Li, Chen; Sabatini, John; Bejar, Isaac I.; O'Reilly, Tenaha – ETS Research Report Series, 2014
We expect that word knowledge accumulates gradually. This article draws on earlier approaches to assessing depth, but focuses on one dimension: richness of semantic knowledge. We present results from a study in which three distinct item types were developed at three levels of depth: knowledge of common usage patterns, knowledge of broad topical…
Descriptors: Vocabulary, Test Items, Language Tests, Semantics
