| Publication Date | Count |
| --- | --- |
| In 2026 | 0 |
| Since 2025 | 1 |
| Since 2022 (last 5 years) | 6 |
| Since 2017 (last 10 years) | 7 |
| Since 2007 (last 20 years) | 7 |
| Descriptor | Count |
| --- | --- |
| Natural Language Processing | 7 |
| Artificial Intelligence | 6 |
| Test Items | 4 |
| Accuracy | 3 |
| Identification | 3 |
| Item Banks | 3 |
| Test Construction | 3 |
| Computer Assisted Testing | 2 |
| Efficiency | 2 |
| Multiple Choice Tests | 2 |
| Prediction | 2 |
| Source | Count |
| --- | --- |
| Journal of Applied Testing… | 7 |
| Author | Count |
| --- | --- |
| Bulut, Okan | 2 |
| Barbosa, Denilson | 1 |
| Becker, Kirk A. | 1 |
| Brent A. Stevenor | 1 |
| Brunnert, Kim | 1 |
| Charles Anyanwu | 1 |
| Chen, Guanliang | 1 |
| Cole, Brian S. | 1 |
| D'Angelo, Jean | 1 |
| Epp, Carrie Demmans | 1 |
| Firoozi, Tahereh | 1 |
| Publication Type | Count |
| --- | --- |
| Journal Articles | 7 |
| Reports - Research | 5 |
| Reports - Descriptive | 1 |
| Reports - Evaluative | 1 |
Stevenor, Brent A.; McBride, Nadine LeBarron; Anyanwu, Charles – Journal of Applied Testing Technology, 2025
Enemy items are pairs of test items that should not be presented to a candidate on the same test. Identifying enemy items is essential in personnel assessment because they weaken the measurement precision and validity of a test. In this research, we examined the effectiveness of lexical and semantic natural language processing techniques for identifying enemy…
Descriptors: Test Items, Natural Language Processing, Occupational Tests, Test Construction
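As a rough illustration of the lexical side of this approach (not the authors' implementation: the item texts, the TF-IDF representation, and the similarity threshold below are all assumptions), pairwise cosine similarity over item wording can surface candidate enemy pairs for human review:

```python
# Illustrative sketch, not the authors' code: flag candidate enemy-item pairs
# whose wording is lexically similar. Item texts and the 0.5 threshold are
# hypothetical; a semantic variant would use sentence embeddings instead.
from itertools import combinations

from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.metrics.pairwise import cosine_similarity

items = {
    "Q1": "Which muscle is primarily responsible for elbow flexion?",
    "Q2": "Identify the muscle mainly responsible for elbow flexion.",
    "Q3": "What is the normal range for a resting adult heart rate?",
}

ids = list(items)
tfidf = TfidfVectorizer(stop_words="english").fit_transform(items.values())
sims = cosine_similarity(tfidf)  # pairwise cosine similarity matrix

THRESHOLD = 0.5  # hypothetical review cut-off
for i, j in combinations(range(len(ids)), 2):
    if sims[i, j] >= THRESHOLD:
        print(f"Possible enemy pair: {ids[i]} / {ids[j]} (cosine = {sims[i, j]:.2f})")
```

A semantic variant would swap the TF-IDF vectors for sentence embeddings from a pretrained encoder and compute the same pairwise similarities.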
Firoozi, Tahereh; Bulut, Okan; Epp, Carrie Demmans; Naeimabadi, Ali; Barbosa, Denilson – Journal of Applied Testing Technology, 2022
Automated Essay Scoring (AES) using neural networks has helped increase the accuracy and efficiency of scoring students' written tasks. Generally, the improved accuracy of neural network approaches has been attributed to the use of modern word embedding techniques. However, which word embedding techniques produce higher accuracy in AES systems…
Descriptors: Computer Assisted Testing, Scoring, Essays, Artificial Intelligence
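A minimal sketch of the embedding-based scoring idea this abstract refers to, assuming an essay is represented by the average of its word vectors and scored with a simple regressor (the toy vectors, essays, and scores are invented; real AES systems use pretrained embeddings and far more data):

```python
# Minimal sketch under stated assumptions: represent each essay as the mean of
# its word vectors and fit a simple regressor to human scores. The tiny
# 3-dimensional "embeddings", essays, and scores are invented for illustration;
# real AES systems use pretrained embeddings (e.g., GloVe or BERT).
import numpy as np
from sklearn.linear_model import Ridge

emb = {
    "clear": np.array([0.9, 0.1, 0.0]),
    "argument": np.array([0.8, 0.2, 0.1]),
    "weak": np.array([0.1, 0.9, 0.0]),
    "evidence": np.array([0.7, 0.3, 0.2]),
}

def essay_vector(text: str) -> np.ndarray:
    """Average the vectors of known words; return zeros if none are known."""
    vecs = [emb[w] for w in text.lower().split() if w in emb]
    return np.mean(vecs, axis=0) if vecs else np.zeros(3)

essays = ["clear argument with strong evidence", "weak argument"]
scores = [5.0, 2.0]  # illustrative human ratings

X = np.vstack([essay_vector(e) for e in essays])
model = Ridge(alpha=1.0).fit(X, scores)
print(model.predict([essay_vector("clear evidence")]))  # predicted score for a new essay
```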
Wongvorachan, Tarid; Lai, Ka Wing; Bulut, Okan; Tsai, Yi-Shan; Chen, Guanliang – Journal of Applied Testing Technology, 2022
Feedback is a crucial component of student learning. As advancements in technology have enabled the adoption of digital learning environments with assessment capabilities, the frequency, delivery format, and timeliness of feedback derived from educational assessments have also changed progressively. Advanced technologies powered by Artificial…
Descriptors: Artificial Intelligence, Feedback (Response), Learning Analytics, Natural Language Processing
Becker, Kirk A.; Kao, Shu-chuan – Journal of Applied Testing Technology, 2022
Natural Language Processing (NLP) offers methods for understanding and quantifying the similarity between written documents. Within the testing industry these methods have been used for automatic item generation, automated scoring of text and speech, modeling item characteristics, automatic question answering, machine translation, and automated…
Descriptors: Item Banks, Natural Language Processing, Computer Assisted Testing, Scoring
Micir, Ian; Swygert, Kimberly; D'Angelo, Jean – Journal of Applied Testing Technology, 2022
The interpretations of test scores in secure, high-stakes environments are dependent on several assumptions, one of which is that examinee responses to items are independent and no enemy items are included on the same forms. This paper documents the development and implementation of a C#-based application that uses Natural Language Processing…
Descriptors: Artificial Intelligence, Man Machine Systems, Accuracy, Efficiency
Mead, Alan D.; Zhou, Chenxuan – Journal of Applied Testing Technology, 2022
This study fit a Naïve Bayesian classifier to the words of exam items to predict the Bloom's taxonomy level of the items. We addressed five research questions, showing that reasonably good prediction of Bloom's level was possible, but accuracy varied across levels. In our study, performance for Level 2 was poor (Level 2 items were misclassified…
Descriptors: Artificial Intelligence, Prediction, Taxonomy, Natural Language Processing
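The general technique named here, a bag-of-words Naïve Bayes classifier over item wording, can be sketched as follows (the item stems and Bloom's level labels are invented for illustration):

```python
# Sketch of the approach named above: a bag-of-words Naive Bayes classifier
# predicting an item's Bloom's taxonomy level from its wording. The item stems
# and level labels (1 = remember, 2 = understand, 3 = apply) are invented.
from sklearn.feature_extraction.text import CountVectorizer
from sklearn.naive_bayes import MultinomialNB
from sklearn.pipeline import make_pipeline

stems = [
    "define the term photosynthesis",
    "explain why the reaction rate increases with temperature",
    "apply the formula to compute the correct dosage",
    "list the stages of mitosis in order",
]
levels = [1, 2, 3, 1]

clf = make_pipeline(CountVectorizer(), MultinomialNB())
clf.fit(stems, levels)
print(clf.predict(["explain how insulin regulates blood glucose"]))  # likely level 2
```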
Cole, Brian S.; Lima-Walton, Elia; Brunnert, Kim; Vesey, Winona Burt; Raha, Kaushik – Journal of Applied Testing Technology, 2020
Automatic item generation can rapidly generate large volumes of exam items, but this creates challenges for assembling exams that aim to include syntactically diverse items. First, we demonstrate a diminishing marginal syntactic return for automatic item generation using a saturation detection approach. This analysis can help users of automatic…
Descriptors: Artificial Intelligence, Automation, Test Construction, Test Items
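One way to picture a saturation check like the one this abstract describes is to track how many previously unseen syntactic templates (for example, part-of-speech tag sequences) each generated batch contributes; the toy data and stopping rule below are assumptions, not the authors' procedure:

```python
# Hedged sketch of a saturation check: count how many previously unseen
# syntactic templates each batch of generated items contributes. The toy
# template strings and the "two empty batches" stopping rule are assumptions.
def marginal_new_templates(batches):
    """Yield the number of previously unseen templates added by each batch."""
    seen = set()
    for batch in batches:
        new = set(batch) - seen
        seen |= new
        yield len(new)

# Toy "templates" standing in for, e.g., part-of-speech tag sequences of items.
batches = [["A", "B", "C"], ["B", "D"], ["A", "C"], ["B", "D"]]
gains = list(marginal_new_templates(batches))
print(gains)                 # [3, 1, 0, 0]: marginal syntactic return diminishes
print(gains[-2:] == [0, 0])  # True: recent batches add no new templates (saturation)
```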
