Showing 1 to 15 of 17 results
Peer reviewed
Direct link
Kangkang Li; Chengyang Qian; Xianmin Yang – Education and Information Technologies, 2025
In learnersourcing, automatic evaluation of student-generated content (SGC) is significant as it streamlines the evaluation process, provides timely feedback, and enhances the objectivity of grading, ultimately supporting more effective and efficient learning outcomes. However, the methods of aggregating students' evaluations of SGC face the…
Descriptors: Student Developed Materials, Educational Quality, Automation, Artificial Intelligence
Peer reviewed
Direct link
Putnikovic, Marko; Jovanovic, Jelena – IEEE Transactions on Learning Technologies, 2023
Automatic grading of short answers is an important task in computer-assisted assessment (CAA). Recently, embeddings, as semantic-rich textual representations, have been increasingly used to represent short answers and predict the grade. Despite the recent trend of applying embeddings in automatic short answer grading (ASAG), there are no…
Descriptors: Automation, Computer Assisted Testing, Grading, Natural Language Processing
Peer reviewed
Direct link
Soomaiya Hamid; Narmeen Zakaria Bawany – Interactive Learning Environments, 2024
E-learning is the process of sharing knowledge outside traditional classrooms through various online tools over the internet. These tools are not equally available or easy to use for every student. Many institutions gather e-learning feedback to identify students' problems and improve their systems. In e-learning systems, typically a high…
Descriptors: Feedback (Response), Electronic Learning, Automation, Classification
Peer reviewed
Direct link
Marrone, Rebecca; Cropley, David H.; Wang, Z. – Creativity Research Journal, 2023
Creativity is now accepted as a core 21st-century competency and is increasingly an explicit part of school curricula around the world. Therefore, the ability to assess creativity for both formative and summative purposes is vital. However, the "fitness-for-purpose" of creativity tests has recently come under scrutiny. Current creativity…
Descriptors: Automation, Evaluation Methods, Creative Thinking, Mathematics Education
Laura K. Allen; Arthur C. Graesser; Danielle S. McNamara – Grantee Submission, 2023
Assessments of natural language can provide vast information about individuals' thoughts and cognitive processes, but they often rely on time-intensive human scoring, deterring researchers from collecting these sources of data. Natural language processing (NLP) gives researchers the opportunity to implement automated textual analyses across a…
Descriptors: Psychological Studies, Natural Language Processing, Automation, Research Methodology
Peer reviewed
PDF on ERIC Download full text
Hunkoog Jho; Minsu Ha – Journal of Baltic Science Education, 2024
This study aimed at examining the performance of generative artificial intelligence to extract argumentation elements from text. Thus, the researchers developed a web-based framework to provide automated assessment and feedback relying on a large language model, ChatGPT. The results produced by ChatGPT were compared to human experts across…
Descriptors: Feedback (Response), Artificial Intelligence, Persuasive Discourse, Models
Peer reviewed
Direct link
Seyedahmad Rahimi; Justice T. Walker; Lin Lin-Lipsmeyer; Jinnie Shin – Creativity Research Journal, 2024
Digital sandbox games such as "Minecraft" can be used to assess and support creativity. Doing so, however, requires an understanding of what is deemed creative in this game context. One approach is to understand how Minecrafters describe creativity in their communities, and how much those descriptions overlap with the established…
Descriptors: Creativity, Video Games, Computer Games, Evaluation Methods
Peer reviewed
Direct link
Garman, Andrew N.; Erwin, Taylor S.; Garman, Tyler R.; Kim, Dae Hyun – Journal of Competency-Based Education, 2021
Background: Competency models provide useful frameworks for organizing learning and assessment programs, but their construction is both time intensive and subject to perceptual biases. Some aspects of model development may be particularly well-suited to automation, specifically natural language processing (NLP), which could also help make them…
Descriptors: Natural Language Processing, Automation, Guidelines, Leadership Effectiveness
Peer reviewed
PDF on ERIC Download full text
Jia, Qinjin; Cui, Jialin; Xiao, Yunkai; Liu, Chengyuan; Rashid, Parvez; Gehringer, Edward – International Educational Data Mining Society, 2021
Peer assessment has been widely applied across diverse academic fields over the last few decades, and has demonstrated its effectiveness. However, the advantages of peer assessment can only be achieved with high-quality peer reviews. Previous studies have found that high-quality review comments usually comprise several features (e.g., contain…
Descriptors: Peer Evaluation, Models, Artificial Intelligence, Evaluation Methods
Peer reviewed
Direct link
Mike Richards; Kevin Waugh; Mark A Slaymaker; Marian Petre; John Woodthorpe; Daniel Gooch – ACM Transactions on Computing Education, 2024
Cheating has been a long-standing issue in university assessments. However, the release of ChatGPT and other free-to-use generative AI tools has provided a new and distinct method for cheating. Students can run many assessment questions through the tool and generate a superficially compelling answer, which may or may not be accurate. We ran a…
Descriptors: Computer Science Education, Artificial Intelligence, Cheating, Student Evaluation
Peer reviewed
Direct link
Chen, Dandan; Hebert, Michael; Wilson, Joshua – American Educational Research Journal, 2022
We used multivariate generalizability theory to examine the reliability of hand-scoring and automated essay scoring (AES) and to identify how these scoring methods could be used in conjunction to optimize writing assessment. Students (n = 113) included subsamples of struggling writers and non-struggling writers in Grades 3-5 drawn from a larger…
Descriptors: Reliability, Scoring, Essays, Automation
Ziwei Zhou – ProQuest LLC, 2020
In light of the ever-increasing capability of computer technology and advancement in speech and natural language processing techniques, automated speech scoring of constructed responses is gaining popularity in many high-stakes assessment and low-stakes educational settings. Automated scoring is a highly interdisciplinary and complex subject, and…
Descriptors: Certification, Speech Skills, Automation, Scoring
Peer reviewed
Direct link
L. Hannah; E. E. Jang; M. Shah; V. Gupta – Language Assessment Quarterly, 2023
Machines have a long-demonstrated ability to find statistical relationships between qualities of texts and surface-level linguistic indicators of writing. More recently, unlocked by artificial intelligence, the potential of using machines to identify content-related writing trait criteria has been uncovered. This development is significant,…
Descriptors: Validity, Automation, Scoring, Writing Assignments
Peer reviewed
Direct link
Ramachandran, Lakshmi; Gehringer, Edward F.; Yadav, Ravi K. – International Journal of Artificial Intelligence in Education, 2017
A "review" is textual feedback provided by a reviewer to the author of a submitted work. Peer reviews are used in academic publishing and in education to assess student work. While reviews are important to e-commerce sites like Amazon and eBay, which use them to assess the quality of products and services, our work focuses on…
Descriptors: Natural Language Processing, Peer Evaluation, Educational Quality, Meta Analysis
Peer reviewed
PDF on ERIC Download full text
Crossley, Scott; Allen, Laura K.; Snow, Erica L.; McNamara, Danielle S. – Grantee Submission, 2015
This study investigates a new approach to automatically assessing essay quality that combines traditional approaches based on assessing textual features with new approaches that measure student attributes such as demographic information, standardized test scores, and survey results. The results demonstrate that combining both text features and…
Descriptors: Automation, Scoring, Essays, Evaluation Methods