Showing all 6 results
Peer reviewed
Firoozi, Tahereh; Bulut, Okan; Epp, Carrie Demmans; Naeimabadi, Ali; Barbosa, Denilson – Journal of Applied Testing Technology, 2022
Automated Essay Scoring (AES) using neural networks has helped increase the accuracy and efficiency of scoring students' written tasks. Generally, the improved accuracy of neural network approaches has been attributed to the use of modern word embedding techniques. However, which word embedding techniques produce higher accuracy in AES systems…
Descriptors: Computer Assisted Testing, Scoring, Essays, Artificial Intelligence
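As a rough illustration of the embedding-based approach described in the entry above, the sketch below averages the word vectors in an essay and regresses the human score on that average. The toy vectors, essays, and scores are invented for the example; the paper's actual models and embedding techniques are not reproduced here.

```python
# Hypothetical sketch: averaged word embeddings as essay features for scoring.
import numpy as np
from sklearn.linear_model import Ridge

# Toy 4-dimensional "embeddings" standing in for GloVe/word2vec/fastText vectors.
vectors = {
    "cause": np.array([0.9, 0.1, 0.0, 0.2]),
    "effect": np.array([0.8, 0.2, 0.1, 0.1]),
    "because": np.array([0.7, 0.3, 0.0, 0.3]),
    "nice": np.array([0.1, 0.9, 0.2, 0.0]),
    "good": np.array([0.2, 0.8, 0.1, 0.1]),
}

def essay_vector(text, dim=4):
    """Average the embeddings of in-vocabulary tokens (zeros if none match)."""
    toks = [vectors[t] for t in text.lower().split() if t in vectors]
    return np.mean(toks, axis=0) if toks else np.zeros(dim)

essays = [
    "pollution rises because factories cause smoke and the effect is smog",
    "the weather is nice and the food is good",
]
scores = [5.0, 2.0]  # invented holistic scores

X = np.vstack([essay_vector(e) for e in essays])
model = Ridge(alpha=1.0).fit(X, scores)
print(model.predict(X))
```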
Peer reviewed
Wongvorachan, Tarid; Lai, Ka Wing; Bulut, Okan; Tsai, Yi-Shan; Chen, Guanliang – Journal of Applied Testing Technology, 2022
Feedback is a crucial component of student learning. As advancements in technology have enabled the adoption of digital learning environments with assessment capabilities, the frequency, delivery format, and timeliness of feedback derived from educational assessments have also changed progressively. Advanced technologies powered by Artificial…
Descriptors: Artificial Intelligence, Feedback (Response), Learning Analytics, Natural Language Processing
Peer reviewed
Becker, Kirk A.; Kao, Shu-chuan – Journal of Applied Testing Technology, 2022
Natural Language Processing (NLP) offers methods for understanding and quantifying the similarity between written documents. Within the testing industry these methods have been used for automatic item generation, automated scoring of text and speech, modeling item characteristics, automatic question answering, machine translation, and automated…
Descriptors: Item Banks, Natural Language Processing, Computer Assisted Testing, Scoring
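One common way to quantify similarity between item texts, in the spirit of the entry above, is TF-IDF weighting with cosine similarity. The item stems and the review threshold in the sketch below are invented, and the paper may rely on different NLP similarity methods.

```python
# Hypothetical sketch: flagging potentially redundant item-bank entries
# with TF-IDF cosine similarity.
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.metrics.pairwise import cosine_similarity

items = [
    "Which muscle is primarily responsible for elbow flexion?",
    "Identify the muscle that mainly produces flexion at the elbow joint.",
    "What is the normal resting heart rate for a healthy adult?",
]

matrix = TfidfVectorizer(stop_words="english").fit_transform(items)
sims = cosine_similarity(matrix)

THRESHOLD = 0.3  # invented review threshold
for i in range(len(items)):
    for j in range(i + 1, len(items)):
        if sims[i, j] >= THRESHOLD:
            print(f"Items {i} and {j} may be redundant (cosine = {sims[i, j]:.2f})")
```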
Peer reviewed
Micir, Ian; Swygert, Kimberly; D'Angelo, Jean – Journal of Applied Testing Technology, 2022
The interpretations of test scores in secure, high-stakes environments are dependent on several assumptions, one of which is that examinee responses to items are independent and no enemy items are included on the same forms. This paper documents the development and implementation of a C#-based application that uses Natural Language Processing…
Descriptors: Artificial Intelligence, Man Machine Systems, Accuracy, Efficiency
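Once similar (enemy) item pairs have been identified, assembled forms can be screened so that no pair appears together. The sketch below shows only that screening step, with invented item IDs and form contents; the paper's C#-based application is not reproduced here, and Python is used for consistency with the other examples.

```python
# Hypothetical sketch: verifying that no known enemy pair lands on the same form.
enemy_pairs = {frozenset(("ITM-101", "ITM-205")), frozenset(("ITM-033", "ITM-417"))}

forms = {
    "Form A": ["ITM-101", "ITM-033", "ITM-512"],
    "Form B": ["ITM-205", "ITM-417", "ITM-033"],
}

for name, item_ids in forms.items():
    # An enemy pair violates the check if both of its items appear on the form.
    violations = [tuple(pair) for pair in enemy_pairs if pair <= set(item_ids)]
    if violations:
        print(f"{name}: enemy pair(s) on the same form: {violations}")
    else:
        print(f"{name}: no enemy pairs detected")
```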
Peer reviewed
Mead, Alan D.; Zhou, Chenxuan – Journal of Applied Testing Technology, 2022
This study fit a Naïve Bayesian classifier to the words of exam items to predict the Bloom's taxonomy level of the items. We addressed five research questions, showing that reasonably good prediction of Bloom's level was possible, but accuracy varies across levels. In our study, performance for Level 2 was poor (Level 2 items were misclassified…
Descriptors: Artificial Intelligence, Prediction, Taxonomy, Natural Language Processing
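A minimal version of the approach described above — a Naïve Bayesian classifier fit to the words of item stems to predict Bloom's level — can be sketched with a standard text-classification pipeline. The training stems and labels below are invented, and the authors' actual features and preprocessing are not shown.

```python
# Hypothetical sketch: Naive Bayes over item-stem words to predict Bloom's level.
from sklearn.feature_extraction.text import CountVectorizer
from sklearn.naive_bayes import MultinomialNB
from sklearn.pipeline import make_pipeline

# Tiny invented training set: item stems labelled with Bloom's levels.
stems = [
    "Define the term operant conditioning.",
    "List the steps of the water cycle.",
    "Explain why the experiment produced different results in each trial.",
    "Apply Ohm's law to compute the current in this circuit.",
]
levels = ["Remember", "Remember", "Understand", "Apply"]

model = make_pipeline(CountVectorizer(stop_words="english"), MultinomialNB())
model.fit(stems, levels)

print(model.predict(["List the phases of mitosis."]))
```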
Peer reviewed
Cole, Brian S.; Lima-Walton, Elia; Brunnert, Kim; Vesey, Winona Burt; Raha, Kaushik – Journal of Applied Testing Technology, 2020
Automatic item generation can rapidly generate large volumes of exam items, but this creates challenges for assembly of exams which aim to include syntactically diverse items. First, we demonstrate a diminishing marginal syntactic return for automatic item generation using a saturation detection approach. This analysis can help users of automatic…
Descriptors: Artificial Intelligence, Automation, Test Construction, Test Items
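The abstract does not spell out the saturation detection procedure, but one plausible reading is to track how many new syntactic patterns each additional generated item contributes and stop when several items in a row add none. The sketch below uses a crude surface template (content words masked) as a stand-in for real syntactic analysis; every detail here is an assumption, not the authors' method.

```python
# Hypothetical sketch: detecting diminishing syntactic returns in a batch of
# automatically generated items by counting newly seen surface templates.
FUNCTION_WORDS = {"the", "a", "an", "of", "in", "to", "is", "are", "which", "what"}

def template(item_text):
    """Mask content words so only the surface frame of the item remains."""
    return " ".join(tok if tok.lower() in FUNCTION_WORDS else "W"
                    for tok in item_text.split())

def saturation_point(items, patience=3):
    """Index after which `patience` items in a row added no new template."""
    seen, stale = set(), 0
    for i, item in enumerate(items):
        sig = template(item)
        if sig in seen:
            stale += 1
            if stale >= patience:
                return i
        else:
            seen.add(sig)
            stale = 0
    return None  # no saturation detected in this batch

generated = [
    "What is the capital of France?",
    "What is the capital of Spain?",
    "Which river flows through Cairo?",
    "What is the capital of Peru?",
    "What is the capital of Chile?",
]
print(saturation_point(generated, patience=2))
```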