Publication Date
In 2025: 5
Since 2024: 21
Since 2021 (last 5 years): 95
Since 2016 (last 10 years): 161
Since 2006 (last 20 years): 221
Descriptor
Automation: 239
Scoring: 239
Computer Assisted Testing: 90
Essays: 80
Artificial Intelligence: 56
Writing Evaluation: 48
Natural Language Processing: 45
Feedback (Response): 39
Scores: 39
Foreign Countries: 34
Models: 32
Author
Attali, Yigal: 9
McNamara, Danielle S.: 9
Williamson, David M.: 7
Bejar, Isaac I.: 6
Crossley, Scott A.: 6
Danielle S. McNamara: 6
Zhang, Mo: 6
Allen, Laura K.: 5
Linn, Marcia C.: 5
Litman, Diane: 5
Liu, Ou Lydia: 5
Audience
Administrators: 1
Researchers: 1
Location
China: 12
California: 3
Japan: 3
South Korea: 3
Taiwan: 3
Brazil: 2
Canada: 2
Oregon: 2
Texas (Houston): 2
West Virginia: 2
Afghanistan: 1
Laws, Policies, & Programs
Assessments and Surveys
What Works Clearinghouse Rating
Akif Avcu – Malaysian Online Journal of Educational Technology, 2025
This scoping review presents the milestones through which Hierarchical Rater Models (HRMs) have become usable in automated essay scoring (AES) to improve instructional evaluation. Although essay evaluations--a useful instrument for assessing higher-order cognitive abilities--have always depended on human raters, concerns regarding rater bias,…
Descriptors: Automation, Scoring, Models, Educational Assessment
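For readers unfamiliar with the model family this review covers, here is a minimal sketch of one common two-level HRM parameterization; the notation is illustrative and not taken from the reviewed studies. The idea: an IRT model generates each examinee's "ideal" rating, and every rater's observed rating is a noisy, possibly biased copy of it.

```latex
% Minimal HRM sketch (illustrative notation, assumed parameterization).
% Level 1 (examinee): an IRT model, e.g. a partial credit model, yields an
%   "ideal" rating \xi_i for examinee i's essay.
% Level 2 (rater): rater r's observed rating X_{ir} depends on \xi_i,
%   rater severity \phi_r, and rater variability \psi_r:
\[
  P\!\left(X_{ir} = k \mid \xi_i\right) \;\propto\;
  \exp\!\left[-\,\frac{\left(k - (\xi_i + \phi_r)\right)^{2}}{2\,\psi_r^{2}}\right]
\]
% In an AES setting, the automated engine can be treated as one more "rater"
% with its own severity and variability parameters.
```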
Benjamin Goecke; Paul V. DiStefano; Wolfgang Aschauer; Kurt Haim; Roger Beaty; Boris Forthmann – Journal of Creative Behavior, 2024
Automated scoring is a current hot topic in creativity research. However, most research has focused on the English language and popular verbal creative thinking tasks, such as the alternate uses task. Therefore, in this study, we present a large language model approach for automated scoring of a scientific creative thinking task that assesses…
Descriptors: Creativity, Creative Thinking, Scoring, Automation
Ramnarain-Seetohul, Vidasha; Bassoo, Vandana; Rosunally, Yasmine – Education and Information Technologies, 2022
In automated essay scoring (AES) systems, similarity techniques are used to compute the score for student answers. Several methods to compute similarity have emerged over the years. However, only a few of them have been widely used in the AES domain. This work presents the findings of a ten-year review of similarity techniques applied in AES systems…
Descriptors: Computer Assisted Testing, Essays, Scoring, Automation
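As a concrete illustration of what "similarity techniques" can mean in this setting, here is a minimal sketch assuming TF-IDF vectors and cosine similarity against a single model answer; the function name, scoring rule, and example texts are placeholders, not drawn from the systems reviewed above.

```python
# Minimal sketch: score a student answer by its cosine similarity to a
# reference (model) answer. Illustrative only; real AES systems combine
# many similarity measures and calibrate against human scores.
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.metrics.pairwise import cosine_similarity

def similarity_score(student_answer: str, model_answer: str, max_points: int = 5) -> float:
    """Map cosine similarity in [0, 1] onto a 0..max_points scale."""
    vectorizer = TfidfVectorizer(stop_words="english")
    tfidf = vectorizer.fit_transform([model_answer, student_answer])
    sim = cosine_similarity(tfidf[0], tfidf[1])[0, 0]
    return round(sim * max_points, 2)

if __name__ == "__main__":
    reference = "Photosynthesis converts light energy into chemical energy stored in glucose."
    answer = "Plants use light to make glucose, storing energy chemically."
    print(similarity_score(answer, reference))
```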
Ferrara, Steve; Qunbar, Saed – Journal of Educational Measurement, 2022
In this article, we argue that automated scoring engines should be transparent and construct relevant--that is, as much as is currently feasible. Many current automated scoring engines cannot achieve high degrees of scoring accuracy without allowing in some features that may not be easily explained and understood and may not be obviously and…
Descriptors: Artificial Intelligence, Scoring, Essays, Automation
Brian E. Clauser; Victoria Yaneva; Peter Baldwin; Le An Ha; Janet Mee – Applied Measurement in Education, 2024
Multiple-choice questions have become ubiquitous in educational measurement because the format allows for efficient and accurate scoring. Nonetheless, there remains continued interest in constructed-response formats. This interest has driven efforts to develop computer-based scoring procedures that can accurately and efficiently score these items.…
Descriptors: Computer Uses in Education, Artificial Intelligence, Scoring, Responses
Zesch, Torsten; Horbach, Andrea; Zehner, Fabian – Educational Measurement: Issues and Practice, 2023
In this article, we systematize the factors influencing performance and feasibility of automatic content scoring methods for short text responses. We argue that performance (i.e., how well an automatic system agrees with human judgments) mainly depends on the linguistic variance seen in the responses and that this variance is indirectly influenced…
Descriptors: Influences, Academic Achievement, Feasibility Studies, Automation
Abbas, Mohsin; van Rosmalen, Peter; Kalz, Marco – IEEE Transactions on Learning Technologies, 2023
For predicting and improving the quality of essays, text analytic metrics (surface, syntactic, morphological, and semantic features) can be used to provide formative feedback to students in higher education. In this study, the goal was to identify a sufficient number of features that serve as a fair proxy for the scores given by the human raters…
Descriptors: Feedback (Response), Automation, Essays, Scoring
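To make the feature-based idea above tangible, here is a much-simplified sketch: a few surface features regressed against human ratings, with the fitted model used as a proxy score. The features, training data, and helper names are illustrative placeholders, not the metrics used in the study.

```python
# Sketch: fit simple surface features against human essay scores and use the
# fitted model as a formative-feedback proxy. Features and data are placeholders.
import numpy as np
from sklearn.linear_model import LinearRegression

def surface_features(essay: str) -> list[float]:
    """A few crude surface metrics: word count, word length, sentence length."""
    words = essay.split()
    sentences = [s for s in essay.replace("!", ".").replace("?", ".").split(".") if s.strip()]
    n_words = len(words)
    avg_word_len = sum(len(w) for w in words) / max(n_words, 1)
    avg_sent_len = n_words / max(len(sentences), 1)
    return [n_words, avg_word_len, avg_sent_len]

# Hypothetical training data: essays with scores assigned by human raters.
essays = [
    "Short essay.",
    "A somewhat longer essay with more developed ideas and sentences.",
    "An extended essay that elaborates several arguments across multiple sentences. It varies structure.",
]
human_scores = [1.0, 3.0, 4.5]

X = np.array([surface_features(e) for e in essays])
model = LinearRegression().fit(X, human_scores)
print(model.predict([surface_features("A new essay awaiting a proxy score.")]))
```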
Firoozi, Tahereh; Mohammadi, Hamid; Gierl, Mark J. – Educational Measurement: Issues and Practice, 2023
Research on Automated Essay Scoring has become increasingly important because it serves as a method for evaluating students' written responses at scale. Scalable methods for scoring written responses are needed as students migrate to online learning environments, resulting in the need to evaluate large numbers of written-response assessments. The…
Descriptors: Active Learning, Automation, Scoring, Essays
Fan Zhang; Xiangyu Wang; Xinhong Zhang – Education and Information Technologies, 2025
The intersection of education and deep learning methods in artificial intelligence (AI) is gradually becoming a hot research field. Education will be profoundly transformed by AI. The purpose of this review is to help education practitioners understand the research frontiers and directions of AI applications in education. This paper reviews the…
Descriptors: Learning Processes, Artificial Intelligence, Technology Uses in Education, Educational Research
Ragheb Al-Ghezi; Katja Voskoboinik; Yaroslav Getman; Anna Von Zansen; Heini Kallio; Mikko Kurimo; Ari Huhta; Raili Hildén – Language Assessment Quarterly, 2023
The development of automated systems for evaluating spontaneous speech is desirable for L2 learning, as it can be used as a facilitating tool for self-regulated learning, language proficiency assessment, and teacher training programs. However, languages with fewer learners face challenges due to the scarcity of training data. Recent advancements…
Descriptors: Speech Tests, Automation, Artificial Intelligence, Finno Ugric Languages
Ngoc My Bui; Jessie S. Barrot – Education and Information Technologies, 2025
With generative artificial intelligence (AI) tools' remarkable capabilities in understanding and generating meaningful content, intriguing questions have been raised about their potential as automated essay scoring (AES) systems. One such tool is ChatGPT, which is capable of scoring any written work based on predefined criteria. However,…
Descriptors: Artificial Intelligence, Natural Language Processing, Technology Uses in Education, Automation
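Below is a minimal sketch of the kind of rubric-based prompting this entry alludes to, assuming the OpenAI Python client; the model name, rubric text, and output format are placeholders, and this is not the procedure evaluated in the study.

```python
# Sketch: ask a chat model to score an essay against a predefined rubric.
# Assumes the OpenAI Python client; model name and rubric are placeholders.
from openai import OpenAI

client = OpenAI()  # reads OPENAI_API_KEY from the environment

RUBRIC = (
    "Score the essay from 1 (poor) to 5 (excellent) on: task response, "
    "coherence, vocabulary, and grammar. Return one overall integer score "
    "followed by a one-sentence justification."
)

def llm_essay_score(essay: str, model: str = "gpt-4o-mini") -> str:
    response = client.chat.completions.create(
        model=model,
        messages=[
            {"role": "system", "content": RUBRIC},
            {"role": "user", "content": essay},
        ],
        temperature=0,  # reduce run-to-run score variation
    )
    return response.choices[0].message.content

# print(llm_essay_score("The essay text to be scored goes here."))
```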
Héctor J. Pijeira-Díaz; Sophia Braumann; Janneke van de Pol; Tamara van Gog; Anique B. H. Bruin – British Journal of Educational Technology, 2024
Advances in computational language models increasingly enable adaptive support for self-regulated learning (SRL) in digital learning environments (DLEs; e.g., via automated feedback). However, the accuracy of those models is a common concern for educational stakeholders (e.g., policymakers, researchers, teachers and learners themselves). We compared…
Descriptors: Computational Linguistics, Independent Study, Secondary School Students, Causal Models
Abdulkadir Kara; Eda Saka Simsek; Serkan Yildirim – Asian Journal of Distance Education, 2024
Evaluation is an essential component of the learning process when discerning learning situations. Assessing natural language responses, like short answers, takes time and effort. Artificial intelligence and natural language processing advancements have led to more studies on automatically grading short answers. In this review, we systematically…
Descriptors: Automation, Natural Language Processing, Artificial Intelligence, Grading
Ren, Ping; Yang, Liu; Luo, Fang – Education and Information Technologies, 2023
Student feedback is crucial for evaluating the performance of teachers and the quality of teaching. Free-form text comments obtained from open-ended questions are seldom analyzed comprehensively since they are difficult to interpret and score compared to standardized rating scales. To solve this problem, the present study employed aspect-level…
Descriptors: Student Attitudes, Student Evaluation of Teacher Performance, Feedback (Response), Prediction
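As a very rough stand-in for aspect-level sentiment analysis of such comments, the sketch below keyword-matches sentences to aspects and scores each with an off-the-shelf classifier from the Hugging Face `transformers` sentiment pipeline; the aspects, keywords, and default model are illustrative assumptions and far simpler than the method used in the study.

```python
# Very rough stand-in for aspect-level sentiment analysis of free-form
# student comments: keyword-match sentences to aspects, then score polarity.
from transformers import pipeline

sentiment = pipeline("sentiment-analysis")  # downloads a default English sentiment model

ASPECT_KEYWORDS = {            # illustrative aspects and keywords
    "clarity": ["explain", "clear", "confusing"],
    "pace": ["fast", "slow", "pace"],
    "materials": ["slides", "notes", "materials"],
}

def aspect_sentiment(comment: str) -> dict[str, str]:
    """Return a sentiment label per aspect mentioned in the comment."""
    results = {}
    for sentence in filter(None, (s.strip() for s in comment.split("."))):
        for aspect, keywords in ASPECT_KEYWORDS.items():
            if any(k in sentence.lower() for k in keywords):
                results[aspect] = sentiment(sentence)[0]["label"]
    return results

print(aspect_sentiment("The slides were excellent. The pace was far too fast."))
```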
Uto, Masaki; Aomi, Itsuki; Tsutsumi, Emiko; Ueno, Maomi – IEEE Transactions on Learning Technologies, 2023
In automated essay scoring (AES), essays are automatically graded without human raters. Many AES models based on various manually designed features or various architectures of deep neural networks (DNNs) have been proposed over the past few decades. Each AES model has unique advantages and characteristics. Therefore, rather than using a single-AES…
Descriptors: Prediction, Scores, Computer Assisted Testing, Scoring
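Here is a minimal sketch of the ensemble direction this entry points toward (combining several AES models rather than relying on a single one); the component scorers are trivial placeholders for real feature-based or neural models, and the weighted average is just one illustrative combiner.

```python
# Sketch: combine predictions from several AES models instead of trusting one.
# The component scorers are trivial placeholders for real feature-based or
# neural models; the combiner is a simple (optionally weighted) average.
from typing import Callable, Optional, Sequence

Scorer = Callable[[str], float]

def length_scorer(essay: str) -> float:          # placeholder model 1
    return min(len(essay.split()) / 100.0 * 5.0, 5.0)

def vocabulary_scorer(essay: str) -> float:      # placeholder model 2
    words = essay.lower().split()
    return min(5.0 * len(set(words)) / max(len(words), 1), 5.0)

def ensemble_score(essay: str,
                   scorers: Sequence[Scorer],
                   weights: Optional[Sequence[float]] = None) -> float:
    """Weighted average of the component model scores."""
    weights = list(weights) if weights is not None else [1.0] * len(scorers)
    total = sum(w * s(essay) for w, s in zip(weights, scorers))
    return total / sum(weights)

print(ensemble_score("An essay with a reasonably varied vocabulary and length.",
                     [length_scorer, vocabulary_scorer]))
```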