Showing all 12 results
Akif Avcu – Malaysian Online Journal of Educational Technology, 2025
This scoping review traces the milestones by which Hierarchical Rater Models (HRMs) became operable for use in automated essay scoring (AES) to improve instructional evaluation. Although essay evaluations--a useful instrument for assessing higher-order cognitive abilities--have always depended on human raters, concerns regarding rater bias,…
Descriptors: Automation, Scoring, Models, Educational Assessment
Peer reviewed
Direct link
Ramnarain-Seetohul, Vidasha; Bassoo, Vandana; Rosunally, Yasmine – Education and Information Technologies, 2022
In automated essay scoring (AES) systems, similarity techniques are used to compute the score for student answers. Several methods to compute similarity have emerged over the years. However, only a few of them have been widely used in the AES domain. This work shows the findings of a ten-year review on similarity techniques applied in AES systems…
Descriptors: Computer Assisted Testing, Essays, Scoring, Automation
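The entry above notes that AES systems compute scores from the similarity between a student answer and a reference answer. As a minimal sketch of one such technique (cosine similarity over bag-of-words counts; the specific methods the review compares are not stated here, so this choice is an assumption for illustration only):

```python
import math
from collections import Counter

def cosine_similarity(text_a: str, text_b: str) -> float:
    """Cosine similarity between two texts using bag-of-words term counts."""
    a, b = Counter(text_a.lower().split()), Counter(text_b.lower().split())
    # Dot product over the shared vocabulary
    dot = sum(a[w] * b[w] for w in set(a) & set(b))
    # Product of the two vector magnitudes
    norm = math.sqrt(sum(v * v for v in a.values())) * math.sqrt(sum(v * v for v in b.values()))
    return dot / norm if norm else 0.0

# Toy example: score a student answer against a model answer
reference = "photosynthesis converts light energy into chemical energy"
student = "plants use photosynthesis to convert light into chemical energy"
similarity = cosine_similarity(reference, student)  # value in [0, 1]
```

A real AES system would typically replace the raw word counts with weighted or semantic representations (e.g., TF-IDF or latent semantic analysis) before computing similarity.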
Peer reviewed
Direct link
Fan Zhang; Xiangyu Wang; Xinhong Zhang – Education and Information Technologies, 2025
The intersection of education and deep learning methods in artificial intelligence (AI) is gradually becoming a major research field. Education will be profoundly transformed by AI. The purpose of this review is to help education practitioners understand the research frontiers and directions of AI applications in education. This paper reviews the…
Descriptors: Learning Processes, Artificial Intelligence, Technology Uses in Education, Educational Research
Peer reviewed
Direct link
Huawei, Shi; Aryadoust, Vahid – Education and Information Technologies, 2023
Automated writing evaluation (AWE) systems are developed based on interdisciplinary research and technological advances such as natural language processing, computer sciences, and latent semantic analysis. Despite a steady increase in research publications in this area, the results of AWE investigations are often mixed, and their validity may be…
Descriptors: Writing Evaluation, Writing Tests, Computer Assisted Testing, Automation
Wood, Scott; Yao, Erin; Haisfield, Lisa; Lottridge, Susan – ACT, Inc., 2021
For assessment professionals who are also automated scoring (AS) professionals, there is no single set of standards of best practice. This paper reviews the assessment and AS literature to identify key standards of best practice and ethical behavior for AS professionals and codifies those standards in a single resource. Having a unified set of AS…
Descriptors: Standards, Best Practices, Computer Assisted Testing, Scoring
Peer reviewed
Direct link
Blundell, Christopher N. – Assessment in Education: Principles, Policy & Practice, 2021
This paper presents a scoping review of, firstly, how teachers use digital technologies for school-based assessment, and secondly, how these assessment-purposed digital technologies are used in teacher- and student-centred pedagogies. It draws on research about the use of assessment-purposed digital technologies in school settings, published from…
Descriptors: Computer Uses in Education, Student Evaluation, Student Centered Learning, Computer Assisted Testing
Peer reviewed
PDF on ERIC Download full text
Jones, Daniel Marc; Cheng, Liying; Tweedie, M. Gregory – Canadian Journal of Learning and Technology, 2022
This article reviews recent literature (2011-present) on the automated scoring (AS) of writing and speaking. Its purpose is to first survey the current research on automated scoring of language, then highlight how automated scoring impacts the present and future of assessment, teaching, and learning. The article begins by outlining the general…
Descriptors: Automation, Computer Assisted Testing, Scoring, Writing (Composition)
Peer reviewed
Direct link
Sari, Elif; Han, Turgay – Reading Matrix: An International Online Journal, 2021
Providing effective feedback and reliable assessment practices are two central issues in ESL/EFL writing instruction contexts. Giving individual feedback is very difficult in crowded classes, as it requires a great amount of time and effort from instructors. Moreover, instructors are likely to employ inconsistent assessment procedures,…
Descriptors: Automation, Writing Evaluation, Artificial Intelligence, Natural Language Processing
Peer reviewed
Direct link
Higgins, Derrick; Heilman, Michael – Educational Measurement: Issues and Practice, 2014
As methods for automated scoring of constructed-response items become more widely adopted in state assessments, and are used in more consequential operational configurations, it is critical that their susceptibility to gaming behavior be investigated and managed. This article provides a review of research relevant to how construct-irrelevant…
Descriptors: Automation, Scoring, Responses, Test Wiseness
Peer reviewed
PDF on ERIC Download full text
Balfour, Stephen P. – Research & Practice in Assessment, 2013
Two of the largest Massive Open Online Course (MOOC) organizations have chosen different methods for the way they will score and provide feedback on essays students submit. EdX, MIT and Harvard's non-profit MOOC federation, recently announced that they will use a machine-based Automated Essay Scoring (AES) application to assess written work in…
Descriptors: Online Courses, Writing Evaluation, Automation, Scoring
Peer reviewed
Direct link
Park, Kwanghyun – Language Assessment Quarterly, 2014
This article outlines the current state of and recent developments in the use of corpora for language assessment and considers future directions, with a special focus on computational methodology. Since corpora began to make inroads into language assessment in the 1990s, test developers have increasingly used them as a reference resource to…
Descriptors: Language Tests, Computational Linguistics, Natural Language Processing, Scoring
Peer reviewed
Martinez, Michael E.; Bennett, Randy Elliot – Applied Measurement in Education, 1992
New developments in the use of automatically scorable constructed response item types for large-scale assessment are reviewed for five domains: (1) mathematical reasoning; (2) algebra problem solving; (3) computer science; (4) architecture; and (5) natural language. Ways in which these technologies are likely to shape testing are considered. (SLD)
Descriptors: Algebra, Architecture, Automation, Computer Science