Showing 1 to 15 of 23 results
Peer reviewed
Zesch, Torsten; Horbach, Andrea; Zehner, Fabian – Educational Measurement: Issues and Practice, 2023
In this article, we systematize the factors influencing performance and feasibility of automatic content scoring methods for short text responses. We argue that performance (i.e., how well an automatic system agrees with human judgments) mainly depends on the linguistic variance seen in the responses and that this variance is indirectly influenced…
Descriptors: Influences, Academic Achievement, Feasibility Studies, Automation
Peer reviewed
McCaffrey, Daniel F.; Casabianca, Jodi M.; Ricker-Pedley, Kathryn L.; Lawless, René R.; Wendler, Cathy – ETS Research Report Series, 2022
This document describes a set of best practices for developing, implementing, and maintaining the critical process of scoring constructed-response tasks. These practices address both the use of human raters and automated scoring systems as part of the scoring process and cover the scoring of written, spoken, performance, or multimodal responses.…
Descriptors: Best Practices, Scoring, Test Format, Computer Assisted Testing
Peer reviewed
Han, Chao – Language Testing, 2022
Over the past decade, testing and assessing spoken-language interpreting has garnered an increasing amount of attention from stakeholders in interpreter education, professional certification, and interpreting research. This is because in these fields assessment results provide a critical evidential basis for high-stakes decisions, such as the…
Descriptors: Translation, Language Tests, Testing, Evaluation Methods
Laura K. Allen; Arthur C. Graesser; Danielle S. McNamara – Grantee Submission, 2023
Assessments of natural language can provide vast information about individuals' thoughts and cognitive processes, but they often rely on time-intensive human scoring, deterring researchers from collecting these sources of data. Natural language processing (NLP) gives researchers the opportunity to implement automated textual analyses across a…
Descriptors: Psychological Studies, Natural Language Processing, Automation, Research Methodology
Crossley, Scott A.; Kim, Minkyung; Allen, Laura K.; McNamara, Danielle S. – Grantee Submission, 2019
Summarization is an effective strategy to promote and enhance learning and deep comprehension of texts. However, summarization is seldom implemented by teachers in classrooms because the manual evaluation of students' summaries requires time and effort. This problem has led to the development of automated models of summarization quality. However,…
Descriptors: Automation, Writing Evaluation, Natural Language Processing, Artificial Intelligence
Peer reviewed
Xiaoming Xi – Language Assessment Quarterly, 2023
Following the burgeoning growth of artificial intelligence (AI) and machine learning (ML) applications in language assessment in recent years, the meteoric rise of ChatGPT and its sweeping applications in almost every sector have left us in awe, scrambling to catch up by developing theories and best practices. This special issue features studies…
Descriptors: Artificial Intelligence, Theory Practice Relationship, Language Tests, Man Machine Systems
Peer reviewed
Tsai, Cheng-Ting; Wu, Ja-Ling; Lin, Yu-Tzu; Yeh, Martin K.-C. – Educational Technology & Society, 2022
With the rapid increase of online learning and online degree programs, the need for secure and fair scoring mechanisms in online learning has become urgent. In this research, a secure scoring mechanism was designed and developed based on blockchain technology to build transparent and fair interactions among students and teachers. The proposed…
Descriptors: Electronic Learning, Online Courses, Computer Security, Scoring
Peer reviewed
Lottridge, Sue; Burkhardt, Amy; Boyer, Michelle – Educational Measurement: Issues and Practice, 2020
In this digital ITEMS module, Dr. Sue Lottridge, Amy Burkhardt, and Dr. Michelle Boyer provide an overview of automated scoring. Automated scoring is the use of computer algorithms to score unconstrained open-ended test items by mimicking human scoring. The use of automated scoring is increasing in educational assessment programs because it allows…
Descriptors: Computer Assisted Testing, Scoring, Automation, Educational Assessment
Peer reviewed
Agarwal, Pakhi; Liao, Jian; Hooper, Simon; Sperling, Rayne – Distance Learning, 2021
Progress monitoring is used to assess a student's performance during the early stages of literacy development. Computerized progress monitoring systems are capable of scoring some progress monitoring measures automatically. However, other measures, such as those involving writing or sign language, are typically scored manually, which is…
Descriptors: Progress Monitoring, Computer Uses in Education, Automation, Scoring
Peer reviewed
Carberry, Tom P.; Lukeman, Philip S.; Covell, Dustin J. – Journal of Chemical Education, 2019
We present here an extension of Morrison's and Ruder's "Sequence-Response Questions" (SRQs) that allows for more nuance in the assessment of student responses to these questions. We have implemented grading software (which we call ANGST, "Automated Nuanced Grading & Statistics Tool") in a Microsoft Excel sheet that can take…
Descriptors: Science Instruction, Computer Software, Grading, Science Tests
Peer reviewed
Zhang, Haoran; Litman, Diane – Grantee Submission, 2017
Manually grading the Response to Text Assessment (RTA) is labor intensive. Therefore, an automatic method is being developed for scoring analytical writing when the RTA is administered in large numbers of classrooms. Our long-term goal is to also use this scoring method to provide formative feedback to students and teachers about students' writing…
Descriptors: Automation, Scoring, Evidence, Scoring Rubrics
Murphy, Robert F. – RAND Corporation, 2019
Recent applications of artificial intelligence (AI) have been successful in performing complex tasks in health care, financial markets, manufacturing, and transportation logistics, but the influence of AI applications in the education sphere has been limited. However, that may be changing. In this paper, the author discusses several ways that AI…
Descriptors: Elementary Secondary Education, Artificial Intelligence, Teaching Methods, Educational Technology
Peer reviewed
O'Leary, Michael; Scully, Darina; Karakolidis, Anastasios; Pitsia, Vasiliki – European Journal of Education, 2018
The role of digital technology in assessment has received a great deal of attention in recent years. Naturally, technology offers many practical benefits, such as increased efficiency with regard to the design, implementation and scoring of existing assessments. More importantly, it also has the potential to have profound, transformative effects…
Descriptors: Computer Assisted Testing, Educational Technology, Technology Uses in Education, Evaluation Methods
Office of Educational Technology, US Department of Education, 2023
The U.S. Department of Education (Department) is committed to supporting the use of technology to improve teaching and learning and to support innovation throughout educational systems. This report addresses the clear need for sharing knowledge and developing policies for "Artificial Intelligence," a rapidly advancing class of…
Descriptors: Artificial Intelligence, Educational Technology, Technology Uses in Education, Educational Policy
Peer reviewed
Rupp, André A. – Applied Measurement in Education, 2018
This article discusses critical methodological design decisions for collecting, interpreting, and synthesizing empirical evidence during the design, deployment, and operational quality-control phases for automated scoring systems. The discussion is inspired by work on operational large-scale systems for automated essay scoring but many of the…
Descriptors: Design, Automation, Scoring, Test Scoring Machines