Showing 121 to 135 of 7,367 results
Tara-Lynn Scheffel; Lori McKee – Sage Research Methods Cases, 2025
This case study explores the qualitative digital methods used within a collaborative self-study of teacher education practice (S-STEP). It brings together two experienced qualitative researchers working in different institutions but with an already established rapport as critical friends gained from a long-standing relationship as colleagues that…
Descriptors: Literacy Education, Friendship, Teacher Education, Collegiality
Peer reviewed
Direct link
Regan Mozer; Luke Miratrix – Grantee Submission, 2025
For randomized trials that use text as an outcome, traditional approaches for assessing treatment impact require that each document first be manually coded for constructs of interest by trained human raters. This process, the current standard, is both time-consuming and limiting: even the largest human coding efforts are typically constrained to…
Descriptors: Artificial Intelligence, Coding, Efficiency, Statistical Inference
Peer reviewed
PDF on ERIC
German Cuaya-Simbro; Serguei Drago Domínguez Ruíz – International Journal of Assessment Tools in Education, 2025
This study introduces a novel Generative Artificial Intelligence (GAI) platform designed to streamline the peer review process. By analyzing a case study of 10 scientific articles, we demonstrate that GAI effectively evaluates article quality and pinpoints specific areas requiring improvement. Our platform achieves an average similarity of 63.6%…
Descriptors: Peer Evaluation, Artificial Intelligence, Scientific Research, Journal Articles
Peer reviewed
Direct link
Mingfeng Xue; Yunting Liu; Xingyao Xiao; Mark Wilson – Journal of Educational Measurement, 2025
Prompts play a crucial role in eliciting accurate outputs from large language models (LLMs). This study examines the effectiveness of an automatic prompt engineering (APE) framework for automatic scoring in educational measurement. We collected constructed-response data from 930 students across 11 items and used human scores as the true labels. A…
Descriptors: Computer Assisted Testing, Prompting, Educational Assessment, Automation
Peer reviewed
PDF on ERIC
Felix Weber; Hendrik Hubbertz – International Association for Development of the Information Society, 2025
Artificial Intelligence (AI) technologies are increasingly being integrated into educational environments, especially in areas such as student feedback and assessment. Among these applications, automated grading tools have garnered both interest and controversy for their potential to streamline evaluation processes while raising questions about…
Descriptors: Grading, Artificial Intelligence, Automation, Student Evaluation
Peer reviewed
Direct link
Natalie Lander; Tao Zhou; Anna Timperio; Lisa M. Barnett; Yuxin Zhang – Journal of Motor Learning and Development, 2025
Purpose: This study developed and evaluated a video-based machine learning model to automate motor competence assessment and created a user-friendly web platform for teachers and coaches. Methods: A total of 1,063 children (mean age: 7.8 years) performed seven motor skills, recorded on video and assessed by experts using the Test of Gross Motor…
Descriptors: Artificial Intelligence, Computer Uses in Education, Motor Development, Children
Priti Oli – ProQuest LLC, 2024
This dissertation focuses on strategies and techniques to enhance code comprehension skills among students enrolled in introductory computer science courses (CS1 and CS2). We propose a novel tutoring system, "DeepCodeTutor," designed to improve the code comprehension abilities of novices. DeepCodeTutor employs scaffolded self-explanation…
Descriptors: Reading Comprehension, Tutoring, Scaffolding (Teaching Technique), Automation
Peer reviewed
Direct link
Alejandra J. Magana; Syed Tanzim Mubarrat; Dominic Kao; Bedrich Benes – IEEE Transactions on Learning Technologies, 2024
Fostering productive engagement within teams has been found to improve student learning outcomes. Consequently, characterizing productive and unproductive time during teamwork sessions is a critical preliminary step to increase engagement in teamwork meetings. However, research from the cognitive sciences has mainly focused on characterizing…
Descriptors: Artificial Intelligence, Technology Uses in Education, Teamwork, Learner Engagement
Peer reviewed
Direct link
Leonora Kaldaras; Kevin Haudek; Joseph Krajcik – International Journal of STEM Education, 2024
We discuss transforming STEM education using three aspects: learning progressions (LPs), constructed response performance assessments, and artificial intelligence (AI). Using LPs to inform instruction, curriculum, and assessment design helps foster students' ability to apply content and practices to explain phenomena, which reflects deeper science…
Descriptors: Artificial Intelligence, Computer Assisted Instruction, STEM Education, Learning Trajectories
Peer reviewed
Direct link
Yajun Guo; Shuai Li; XinDi Zhang; Yiyang Fu; Yiming Yuan; Yanquan Liu – College & Research Libraries, 2024
The purpose of this study is to learn more about virtual reality (VR) and augmented reality (AR) practices at the United States' top one hundred university libraries, as well as how they are engaging with the metaverse. We conducted qualitative and descriptive analysis on the websites of the top one hundred university libraries in the United…
Descriptors: Research Libraries, Academic Libraries, Artificial Intelligence, Metadata
Peer reviewed
Direct link
Fatima Abu Deeb; Timothy Hickey – Computer Science Education, 2024
Background and Context: Auto-graders are praised by novice students learning to program, as they provide them with automatic feedback about their problem-solving process. However, some students often make random changes when they have errors in their code, without engaging in deliberate thinking about the cause of the error. Objective: To…
Descriptors: Reflection, Automation, Grading, Novices
Peer reviewed
Direct link
Andreea Dutulescu; Stefan Ruseti; Denis Iorga; Mihai Dascalu; Danielle S. McNamara – Grantee Submission, 2024
The process of generating challenging and appropriate distractors for multiple-choice questions is a complex and time-consuming task. Existing methods for an automated generation have limitations in proposing challenging distractors, or they fail to effectively filter out incorrect choices that closely resemble the correct answer, share synonymous…
Descriptors: Multiple Choice Tests, Artificial Intelligence, Attention, Natural Language Processing
Peer reviewed
PDF on ERIC
Mark Monnin; Lori L. Sussman – Journal of Cybersecurity Education, Research and Practice, 2024
Data transfer between isolated clusters is imperative for cybersecurity education, research, and testing. Such techniques facilitate hands-on cybersecurity learning in isolated clusters, allow cybersecurity students to practice with various hacking tools, and develop professional cybersecurity technical skills. Educators often use these remote…
Descriptors: Computer Science Education, Computer Security, Computer Software, Data
Peer reviewed
Andreea Dutulescu; Stefan Ruseti; Mihai Dascalu; Danielle S. McNamara – Grantee Submission, 2024
Assessing the difficulty of reading comprehension questions is crucial to educational methodologies and language understanding technologies. Traditional methods of assessing question difficulty rely frequently on human judgments or shallow metrics, often failing to accurately capture the intricate cognitive demands of answering a question. This…
Descriptors: Difficulty Level, Reading Tests, Test Items, Reading Comprehension
Peer reviewed
Direct link
Guher Gorgun; Okan Bulut – Education and Information Technologies, 2024
In light of the widespread adoption of technology-enhanced learning and assessment platforms, there is a growing demand for innovative, high-quality, and diverse assessment questions. Automatic Question Generation (AQG) has emerged as a valuable solution, enabling educators and assessment developers to efficiently produce a large volume of test…
Descriptors: Computer Assisted Testing, Test Construction, Test Items, Automation