Publication Date
In 2025: 5
Since 2024: 18
Since 2021 (last 5 years): 89
Since 2016 (last 10 years): 167
Since 2006 (last 20 years): 267
Descriptor
Computer Assisted Testing: 326
Scoring: 326
Automation: 67
Evaluation Methods: 64
Foreign Countries: 60
Writing Evaluation: 60
Essays: 59
Second Language Learning: 59
Language Tests: 55
Scores: 54
Computer Software: 52
Author
Attali, Yigal: 9
Williamson, David M.: 6
Ramineni, Chaitanya: 5
Xi, Xiaoming: 5
Zechner, Klaus: 5
Bejar, Isaac I.: 4
Bennett, Randy Elliot: 4
Bridgeman, Brent: 4
Evanini, Keelan: 4
Mulholland, Matthew: 4
Rupp, André A.: 4
Location
China: 8
Australia: 7
Japan: 5
Netherlands: 5
Germany: 4
Iran: 4
Taiwan: 4
Europe: 3
United Kingdom: 3
California: 2
Canada: 2
Laws, Policies, & Programs
No Child Left Behind Act 2001: 2
Family Educational Rights and…: 1
Health Insurance Portability…: 1
Individuals with Disabilities…: 1
Ramnarain-Seetohul, Vidasha; Bassoo, Vandana; Rosunally, Yasmine – Education and Information Technologies, 2022
In automated essay scoring (AES) systems, similarity techniques are used to compute the score for student answers. Several methods to compute similarity have emerged over the years. However, only a few of them have been widely used in the AES domain. This work shows the findings of a ten-year review on similarity techniques applied in AES systems…
Descriptors: Computer Assisted Testing, Essays, Scoring, Automation
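A minimal sketch of the technique family the review surveys, assuming cosine similarity over TF-IDF vectors (one of the widely used methods in AES); the reference text, student answer, and score mapping below are illustrative, not taken from the paper:

```python
# Minimal sketch: score a student answer by cosine similarity to a
# reference answer over TF-IDF vectors. Texts are placeholders.
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.metrics.pairwise import cosine_similarity

reference = "Photosynthesis converts light energy into chemical energy."
answer = "Plants use light to make chemical energy via photosynthesis."

vectorizer = TfidfVectorizer()
vectors = vectorizer.fit_transform([reference, answer])

# Similarity in [0, 1]; an AES system would map this onto a score scale.
similarity = cosine_similarity(vectors[0], vectors[1])[0, 0]
print(f"similarity = {similarity:.3f}")
```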
Selcuk Acar; Peter Organisciak; Denis Dumas – Journal of Creative Behavior, 2025
In this three-study investigation, we applied various approaches to score drawings created in response to both Form A and Form B of the Torrance Tests of Creative Thinking-Figural (broadly TTCT-F) as well as the Multi-Trial Creative Ideation task (MTCI). We focused on TTCT-F in Study 1, and utilizing a random forest classifier, we achieved 79% and…
Descriptors: Scoring, Computer Assisted Testing, Models, Correlation
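A minimal sketch of the Study 1 approach as described, a random forest classifier over drawing-derived features; the features, labels, and split below are synthetic stand-ins, not TTCT-F data:

```python
# Classify drawings as creative/not-creative from numeric features.
import numpy as np
from sklearn.ensemble import RandomForestClassifier
from sklearn.model_selection import train_test_split
from sklearn.metrics import accuracy_score

rng = np.random.default_rng(0)
X = rng.normal(size=(500, 16))           # e.g., image-derived features
y = (X[:, 0] + X[:, 1] > 0).astype(int)  # synthetic "creative" label

X_tr, X_te, y_tr, y_te = train_test_split(X, y, random_state=0)
clf = RandomForestClassifier(n_estimators=200, random_state=0).fit(X_tr, y_tr)
print(f"accuracy = {accuracy_score(y_te, clf.predict(X_te)):.2f}")
```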
Peter Baldwin; Victoria Yaneva; Kai North; Le An Ha; Yiyun Zhou; Alex J. Mechaber; Brian E. Clauser – Journal of Educational Measurement, 2025
Recent developments in the use of large-language models have led to substantial improvements in the accuracy of content-based automated scoring of free-text responses. The reported accuracy levels suggest that automated systems could have widespread applicability in assessment. However, before they are used in operational testing, other aspects of…
Descriptors: Artificial Intelligence, Scoring, Computational Linguistics, Accuracy
Jonas Flodén – British Educational Research Journal, 2025
This study compares how the generative AI (GenAI) large language model (LLM) ChatGPT performs in grading university exams compared to human teachers. Aspects investigated include consistency, large discrepancies and length of answer. Implications for higher education, including the role of teachers and ethics, are also discussed. Three…
Descriptors: College Faculty, Artificial Intelligence, Comparative Testing, Scoring
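One standard way to quantify the consistency such a comparison needs is quadratically weighted kappa between the AI's grades and a human's; a minimal sketch with invented grades (the metric choice here is an assumption, not the paper's stated method):

```python
# Agreement between parallel human and AI grade assignments.
from sklearn.metrics import cohen_kappa_score

human_grades = [3, 4, 2, 5, 3, 4, 1, 3, 4, 2]
ai_grades    = [3, 4, 3, 4, 3, 4, 2, 3, 5, 2]

qwk = cohen_kappa_score(human_grades, ai_grades, weights="quadratic")
print(f"quadratic weighted kappa = {qwk:.3f}")
```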
Uto, Masaki; Aomi, Itsuki; Tsutsumi, Emiko; Ueno, Maomi – IEEE Transactions on Learning Technologies, 2023
In automated essay scoring (AES), essays are automatically graded without human raters. Many AES models based on various manually designed features or various architectures of deep neural networks (DNNs) have been proposed over the past few decades. Each AES model has unique advantages and characteristics. Therefore, rather than using a single-AES…
Descriptors: Prediction, Scores, Computer Assisted Testing, Scoring
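A minimal sketch of the ensemble idea, assuming a simple weighted average of several models' scores; the three models, their scores, and the weights are illustrative, and a real system would learn the combination from rater data:

```python
import numpy as np

# Scores for three essays from three hypothetical AES models.
model_scores = np.array([
    [3.2, 3.5, 3.0],   # feature-based model
    [3.8, 3.6, 3.9],   # DNN model A
    [3.5, 3.4, 3.6],   # DNN model B
]).T  # rows: essays, columns: models

weights = np.array([0.2, 0.5, 0.3])  # assumed; would be fit in practice
ensemble = model_scores @ weights
print(ensemble)  # one combined score per essay
```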
William Orwig; Emma R. Edenbaum; Joshua D. Greene; Daniel L. Schacter – Journal of Creative Behavior, 2024
Recent developments in computerized scoring via semantic distance have provided automated assessments of verbal creativity. Here, we extend past work, applying computational linguistic approaches to characterize salient features of creative text. We hypothesize that, in addition to semantic diversity, the degree to which a story includes…
Descriptors: Computer Assisted Testing, Scoring, Creativity, Computational Linguistics
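A minimal sketch of semantic diversity as the mean pairwise cosine distance between a text's word vectors; work in this area uses pretrained embeddings such as GloVe, so the small random vectors here are stand-ins:

```python
import numpy as np
from scipy.spatial.distance import pdist

rng = np.random.default_rng(1)
story_words = ["dragon", "library", "whisper", "orbit", "ember"]
vectors = {w: rng.normal(size=50) for w in story_words}  # stand-in embeddings

embeddings = np.array([vectors[w] for w in story_words])
semantic_diversity = pdist(embeddings, metric="cosine").mean()
print(f"semantic diversity = {semantic_diversity:.3f}")
```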
Stefan Ruseti; Ionut Paraschiv; Mihai Dascalu; Danielle S. McNamara – International Journal of Artificial Intelligence in Education, 2024
Automated Essay Scoring (AES) is a well-studied problem in Natural Language Processing applied in education. Solutions vary from handcrafted linguistic features to large Transformer-based models, implying a significant effort in feature extraction and model implementation. We introduce a novel Automated Machine Learning (AutoML) pipeline…
Descriptors: Computer Assisted Testing, Scoring, Automation, Essays
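A minimal sketch of an AutoML-style search (not the authors' pipeline): a grid search that tunes feature extraction and model hyperparameters jointly, with toy essays and scores standing in for a real corpus:

```python
from sklearn.pipeline import Pipeline
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.linear_model import Ridge
from sklearn.model_selection import GridSearchCV

essays = ["A short essay.", "Another, rather longer, essay text.",
          "A third example essay.", "Yet another example."]
scores = [2.0, 4.0, 3.0, 3.5]  # illustrative human scores

# Feature and model choices are searched automatically.
pipe = Pipeline([("tfidf", TfidfVectorizer()), ("model", Ridge())])
grid = {"tfidf__ngram_range": [(1, 1), (1, 2)], "model__alpha": [0.1, 1.0]}
search = GridSearchCV(pipe, grid, cv=2).fit(essays, scores)
print(search.best_params_)
```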
Buczak, Philip; Huang, He; Forthmann, Boris; Doebler, Philipp – Journal of Creative Behavior, 2023
Traditionally, researchers employ human raters for scoring responses to creative thinking tasks. Apart from the associated costs, this approach entails two potential risks. First, human raters can be subjective in their scoring behavior (inter-rater variance). Second, individual raters are prone to inconsistent scoring patterns…
Descriptors: Computer Assisted Testing, Scoring, Automation, Creative Thinking
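A minimal sketch of how the two risks can be made visible in data: per-response variance across raters (inter-rater variance) and per-rater means (severity or leniency); the ratings are invented for illustration:

```python
import numpy as np

# rows: creative responses, columns: human raters (1-5 scale)
ratings = np.array([
    [3, 4, 3],
    [2, 4, 3],
    [5, 5, 4],
    [1, 3, 2],
])

print("per-response rater variance:", ratings.var(axis=1, ddof=1))
print("per-rater mean (severity/leniency):", ratings.mean(axis=0))
```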
Yuan, Lu; Huang, Yingshi; Li, Shuhang; Chen, Ping – Journal of Educational Measurement, 2023
Online calibration is a key technology for item calibration in computerized adaptive testing (CAT) and has been widely used in various forms of CAT, including unidimensional CAT, multidimensional CAT (MCAT), CAT with polytomously scored items, and cognitive diagnostic CAT. However, as multidimensional and polytomous assessment data become more…
Descriptors: Computer Assisted Testing, Adaptive Testing, Computation, Test Items
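A minimal sketch of one simple instance of the problem the paper studies, online calibration under a Rasch model: a new item's difficulty is estimated from examinees whose abilities are already known, via Newton-Raphson on the log-likelihood; abilities and responses are simulated, not from the paper:

```python
import numpy as np

rng = np.random.default_rng(2)
theta = rng.normal(size=200)                  # known examinee abilities
true_b = 0.7
p_true = 1 / (1 + np.exp(-(theta - true_b)))
x = (rng.random(200) < p_true).astype(float)  # observed 0/1 responses

b = 0.0
for _ in range(20):
    p = 1 / (1 + np.exp(-(theta - b)))
    b += (p - x).sum() / (p * (1 - p)).sum()  # Newton-Raphson update
print(f"estimated difficulty b = {b:.2f} (true {true_b})")
```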
Shin, Jinnie; Gierl, Mark J. – Journal of Applied Testing Technology, 2022
Automated Essay Scoring (AES) technologies provide innovative solutions to score the written essays with a much shorter time span and at a fraction of the current cost. Traditionally, AES emphasized the importance of capturing the "coherence" of writing because abundant evidence indicated the connection between coherence and the overall…
Descriptors: Computer Assisted Testing, Scoring, Essays, Automation
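A minimal sketch of one coherence proxy, the average cosine similarity between adjacent sentences; AES work in this vein typically uses neural sentence embeddings, and TF-IDF is used here only to keep the sketch self-contained:

```python
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.metrics.pairwise import cosine_similarity

sentences = [
    "Essay scoring rewards coherent writing.",
    "Coherent writing links each sentence to the next.",
    "My cat enjoys sleeping on the windowsill.",
]
vecs = TfidfVectorizer().fit_transform(sentences)
adjacent = [cosine_similarity(vecs[i], vecs[i + 1])[0, 0]
            for i in range(len(sentences) - 1)]
print(f"coherence proxy = {sum(adjacent) / len(adjacent):.3f}")
```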
Rebecka Weegar; Peter Idestam-Almquist – International Journal of Artificial Intelligence in Education, 2024
Machine learning methods can be used to reduce the manual workload in exam grading, making it possible for teachers to spend more time on other tasks. However, when it comes to grading exams, fully eliminating manual work is not yet possible even with very accurate automated grading, as any grading mistakes could have significant consequences for…
Descriptors: Grading, Computer Assisted Testing, Introductory Courses, Computer Science Education
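A minimal sketch of the semi-automated workflow this implies: auto-accept only predictions the model is confident about and route the rest to a human grader; the data and the 0.9 threshold are assumptions:

```python
import numpy as np
from sklearn.linear_model import LogisticRegression

rng = np.random.default_rng(3)
X = rng.normal(size=(300, 8))                # exam-answer features
y = (X[:, 0] > 0).astype(int)                # pass/fail label
clf = LogisticRegression().fit(X[:200], y[:200])

proba = clf.predict_proba(X[200:]).max(axis=1)
auto = proba >= 0.9                          # confident: grade automatically
print(f"auto-graded: {auto.sum()} / {len(auto)}; "
      f"sent to human: {(~auto).sum()}")
```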
Huawei, Shi; Aryadoust, Vahid – Education and Information Technologies, 2023
Automated writing evaluation (AWE) systems are developed based on interdisciplinary research and technological advances such as natural language processing, computer sciences, and latent semantic analysis. Despite a steady increase in research publications in this area, the results of AWE investigations are often mixed, and their validity may be…
Descriptors: Writing Evaluation, Writing Tests, Computer Assisted Testing, Automation
Pearson, Christopher; Penna, Nigel – Assessment & Evaluation in Higher Education, 2023
E-assessments are becoming increasingly common and progressively more complex. Consequently, how these longer, more complex questions are designed and marked is critically important. This article uses the NUMBAS e-assessment tool to investigate the best practice for creating longer questions and their mark schemes on surveying modules taken by engineering…
Descriptors: Automation, Scoring, Engineering Education, Foreign Countries
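A minimal sketch of a partial-credit mark scheme for a longer numeric question, awarding method marks for intermediate working and accuracy marks within a tolerance; the weights and tolerances are assumptions, not NUMBAS's scheme:

```python
def mark(intermediate, final, key_intermediate, key_final,
         tol=0.01, method_marks=2, accuracy_marks=3):
    score = 0
    if abs(intermediate - key_intermediate) <= tol * abs(key_intermediate):
        score += method_marks        # credit for correct working
    if abs(final - key_final) <= tol * abs(key_final):
        score += accuracy_marks      # credit for the final answer
    return score

print(mark(intermediate=12.4, final=98.7,
           key_intermediate=12.43, key_final=98.5))
```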
McCaffrey, Daniel F.; Casabianca, Jodi M.; Ricker-Pedley, Kathryn L.; Lawless, René R.; Wendler, Cathy – ETS Research Report Series, 2022
This document describes a set of best practices for developing, implementing, and maintaining the critical process of scoring constructed-response tasks. These practices address both the use of human raters and automated scoring systems as part of the scoring process and cover the scoring of written, spoken, performance, or multimodal responses.…
Descriptors: Best Practices, Scoring, Test Format, Computer Assisted Testing
Firoozi, Tahereh; Bulut, Okan; Epp, Carrie Demmans; Naeimabadi, Ali; Barbosa, Denilson – Journal of Applied Testing Technology, 2022
Automated Essay Scoring (AES) using neural networks has helped increase the accuracy and efficiency of scoring students' written tasks. Generally, the improved accuracy of neural network approaches has been attributed to the use of modern word embedding techniques. However, which word embedding techniques produce higher accuracy in AES systems…
Descriptors: Computer Assisted Testing, Scoring, Essays, Artificial Intelligence
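A minimal sketch of the design question the study examines: essays become averaged word-embedding features for a regressor, and swapping the embedding table is what changes accuracy; random vectors stand in for trained embeddings such as Word2Vec, GloVe, or BERT-derived vectors:

```python
import numpy as np
from sklearn.linear_model import Ridge

rng = np.random.default_rng(4)
vocab = {"the": 0, "essay": 1, "argues": 2, "clearly": 3, "weakly": 4}
embedding = rng.normal(size=(len(vocab), 50))  # stand-in embedding table

def features(essay):
    ids = [vocab[w] for w in essay.split() if w in vocab]
    return embedding[ids].mean(axis=0)         # mean word vector

essays = ["the essay argues clearly", "the essay argues weakly"]
scores = [5.0, 2.0]                            # illustrative human scores
model = Ridge().fit([features(e) for e in essays], scores)
print(model.predict([features("the essay argues clearly")]))
```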