Showing 1 to 15 of 41 results
Peer reviewed
Sami Baral; Eamon Worden; Wen-Chiang Lim; Zhuang Luo; Christopher Santorelli; Ashish Gurung; Neil Heffernan – Grantee Submission, 2024
The effectiveness of feedback in enhancing learning outcomes is well documented within Educational Data Mining (EDM). Prior research has explored various methodologies for enhancing the effectiveness of feedback to students. Recent developments in Large Language Models (LLMs) have extended their utility in enhancing automated…
Descriptors: Automation, Scoring, Computer Assisted Testing, Natural Language Processing
Peer reviewed
Tri Sedya Febrianti; Siti Fatimah; Yuni Fitriyah; Hanifah Nurhayati – International Journal of Education in Mathematics, Science and Technology, 2024
Assessing students' understanding of circle-related material through subjective tests is effective, though grading these tests can be challenging and often requires technological support. ChatGPT has shown promise in providing reliable and objective evaluations. Many teachers in Indonesia, however, continue to face difficulties integrating…
Descriptors: Artificial Intelligence, Computer Assisted Testing, Scoring, Tests
Peer reviewed
Li, Xu; Ouyang, Fan; Liu, Jianwen; Wei, Chengkun; Chen, Wenzhi – Journal of Educational Computing Research, 2023
Computer-supported writing assessment (CSWA) has been widely used to reduce instructor workload and provide real-time feedback. The interpretability of CSWA draws extensive attention because it can benefit the validity, transparency, and knowledge-aware feedback of academic writing assessments. This study proposes a novel assessment tool,…
Descriptors: Computer Assisted Testing, Writing Evaluation, Feedback (Response), Natural Language Processing
Peer reviewed
Keith Cochran; Clayton Cohn; Peter Hastings; Noriko Tomuro; Simon Hughes – International Journal of Artificial Intelligence in Education, 2024
To succeed in the information age, students need to learn to communicate their understanding of complex topics effectively. This is reflected in both educational standards and standardized tests. To improve their writing ability for highly structured domains like scientific explanations, students need feedback that accurately reflects the…
Descriptors: Science Process Skills, Scientific Literacy, Scientific Concepts, Concept Formation
Peer reviewed
Georgios Zacharis; Stamatios Papadakis – Educational Process: International Journal, 2025
Background/purpose: Generative artificial intelligence (GenAI) is often promoted as a transformative tool for assessment, yet evidence of its validity compared to human raters remains limited. This study examined whether an AI-based rater could be used interchangeably with trained faculty in scoring complex coursework. Materials/methods:…
Descriptors: Artificial Intelligence, Technology Uses in Education, Computer Assisted Testing, Grading
Peer reviewed
Brandon J. Yik; David G. Schreurs; Jeffrey R. Raker – Journal of Chemical Education, 2023
Acid-base chemistry, and in particular the Lewis acid-base model, is foundational to understanding mechanistic ideas. This is due to the similarity in language chemists use to describe Lewis acid-base reactions and nucleophile-electrophile interactions. The development of artificial intelligence and machine learning technologies has led to the…
Descriptors: Educational Technology, Formative Evaluation, Molecular Structure, Models
Hong Jiao, Editor; Robert W. Lissitz, Editor – IAP - Information Age Publishing, Inc., 2024
With the exponential increase of digital assessment, different types of data in addition to item responses become available in the measurement process. One of the salient features in digital assessment is that process data can be easily collected. This non-conventional structured or unstructured data source may bring new perspectives to better…
Descriptors: Artificial Intelligence, Natural Language Processing, Psychometrics, Computer Assisted Testing
Panaite, Marilena; Ruseti, Stefan; Dascalu, Mihai; Balyan, Renu; McNamara, Danielle S.; Trausan-Matu, Stefan – Grantee Submission, 2019
Intelligent Tutoring Systems (ITSs) focus on promoting knowledge acquisition while providing relevant feedback during students' practice. Self-explanation practice is an effective method for helping students understand complex texts by leveraging comprehension. Our aim is to introduce a deep learning neural model for automatically scoring…
Descriptors: Computer Assisted Testing, Scoring, Intelligent Tutoring Systems, Natural Language Processing
Peer reviewed
Olivera-Aguilar, Margarita; Lee, Hee-Sun; Pallant, Amy; Belur, Vinetha; Mulholland, Matthew; Liu, Ou Lydia – ETS Research Report Series, 2022
This study uses a computerized formative assessment system that provides automated scoring and feedback to help students write scientific arguments in a climate change curriculum. We compared the effect of contextualized versus generic automated feedback on students' explanations of scientific claims and attributions of uncertainty to those…
Descriptors: Computer Assisted Testing, Formative Evaluation, Automation, Scoring
Peer reviewed
Lu, Chang; Cutumisu, Maria – International Educational Data Mining Society, 2021
Digitalization and automation of test administration, score reporting, and feedback provision have the potential to benefit large-scale and formative assessments. Many studies on automated essay scoring (AES) and feedback generation systems have been published in the last decade, but few connected AES and feedback generation within a unified framework.…
Descriptors: Learning Processes, Automation, Computer Assisted Testing, Scoring
Peer reviewed
Myers, Matthew C.; Wilson, Joshua – International Journal of Artificial Intelligence in Education, 2023
This study evaluated the construct validity of six scoring traits of an automated writing evaluation (AWE) system called "MI Write." Persuasive essays (N = 100) written by students in grades 7 and 8 were randomized at the sentence-level using a script written with Python's NLTK module. Each persuasive essay was randomized 30 times (n =…
Descriptors: Construct Validity, Automation, Writing Evaluation, Algorithms
Peer reviewed
Kochem, Tim; Beck, Jeanne; Goodale, Erik – CALICO Journal, 2022
Technology has paved the way for new modalities in language learning, teaching, and assessment. However, there is still a great deal of work to be done to develop such tools for oral communication, specifically tools that address suprasegmental features in pronunciation instruction. Therefore, this critical literature review examines how…
Descriptors: Computer Software, Teaching Methods, Audio Equipment, Computer Assisted Instruction
Peer reviewed
Potter, Andrew; Wilson, Joshua – Educational Technology Research and Development, 2021
Automated Writing Evaluation (AWE) provides automatic writing feedback and scoring to support student writing and revising. The purpose of the present study was to analyze a statewide implementation of an AWE software (n = 114,582) in grades 4-11. The goals of the study were to evaluate: (1) to what extent AWE features were used; (2) if equity and…
Descriptors: Computer Assisted Testing, Writing Evaluation, Feedback (Response), Scoring
Peer reviewed
Fu, Shixuan; Gu, Huimin; Yang, Bo – British Journal of Educational Technology, 2020
Traditional educational giants and natural language processing companies have launched several artificial intelligence (AI)-enabled digital learning applications to facilitate language learning. One typical application of AI in digital language education is the automatic scoring application that provides feedback on pronunciation repeat outcomes.…
Descriptors: Affordances, Artificial Intelligence, Computer Assisted Testing, Scoring
Peer reviewed
Jones, Daniel Marc; Cheng, Liying; Tweedie, M. Gregory – Canadian Journal of Learning and Technology, 2022
This article reviews recent literature (2011-present) on the automated scoring (AS) of writing and speaking. Its purpose is to first survey the current research on automated scoring of language, then highlight how automated scoring impacts the present and future of assessment, teaching, and learning. The article begins by outlining the general…
Descriptors: Automation, Computer Assisted Testing, Scoring, Writing (Composition)