Showing 1 to 15 of 140 results
Peer reviewed
Shireen Jamal Mohammed; Maryam Waleed Khalid – Language Testing in Asia, 2025
Although investigation into the use of Artificial Intelligence (AI) in language learning has grown over the last couple of years, very little has been written about how such AI feedback influences the complex experiences of EFL learners in terms of motivation, foreign language peace of mind (FLPoM), trait emotional…
Descriptors: Artificial Intelligence, Feedback (Response), Computer Software, English (Second Language)
Peer reviewed
Yuan Tian; Xi Yang; Suhail A. Doi; Luis Furuya-Kanamori; Lifeng Lin; Joey S. W. Kwong; Chang Xu – Research Synthesis Methods, 2024
RobotReviewer is a tool for automatically assessing the risk of bias in randomized controlled trials, but there is limited evidence of its reliability. We evaluated the agreement between RobotReviewer and humans regarding the risk of bias assessment based on 1955 randomized controlled trials. The risk of bias in these trials was assessed via two…
Descriptors: Risk, Randomized Controlled Trials, Classification, Robotics
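The Tian et al. entry above concerns agreement between RobotReviewer and human reviewers on risk-of-bias judgments; the abstract snippet does not state which agreement statistic was used. As an illustration only, the sketch below computes Cohen's kappa, one common choice for paired categorical ratings, on made-up labels.

```python
# Minimal sketch: Cohen's kappa as one possible agreement statistic for paired
# risk-of-bias judgments. The labels are hypothetical, purely for illustration.
from sklearn.metrics import cohen_kappa_score

human_labels = ["low", "high", "high", "low", "low", "high"]   # hypothetical human ratings
robot_labels = ["low", "high", "low", "low", "low", "high"]    # hypothetical RobotReviewer ratings

kappa = cohen_kappa_score(human_labels, robot_labels)
print(f"Cohen's kappa: {kappa:.2f}")  # 1.0 = perfect agreement, 0 = chance-level agreement
```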
Peer reviewed
Qusai Khraisha; Sophie Put; Johanna Kappenberg; Azza Warraitch; Kristin Hadfield – Research Synthesis Methods, 2024
Systematic reviews are vital for guiding practice, research and policy, although they are often slow and labour-intensive. Large language models (LLMs) could speed up and automate systematic reviews, but their performance in such tasks has yet to be comprehensively evaluated against humans, and no study has tested Generative Pre-Trained…
Descriptors: Peer Evaluation, Research Reports, Artificial Intelligence, Computer Software
Peer reviewed; PDF on ERIC
Siraprapa Kotmungkun; Wichuta Chompurach; Piriya Thaksanan – English Language Teaching Educational Journal, 2024
This study explores the writing quality of two AI chatbots, OpenAI ChatGPT and Google Gemini. The research assesses the quality of the generated texts based on five essay models using the T.E.R.A. software, focusing on ease of understanding, readability, and reading levels using the Flesch-Kincaid formula. Thirty essays were generated, 15 from…
Descriptors: Plagiarism, Artificial Intelligence, Computer Software, Essays
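The Kotmungkun et al. entry above references the Flesch-Kincaid formula for readability. As a rough illustration, the sketch below computes the Flesch-Kincaid grade level, 0.39 x (words per sentence) + 11.8 x (syllables per word) - 15.59; the syllable counter is a crude vowel-group heuristic, not the text processing used by T.E.R.A.

```python
# Minimal sketch of the Flesch-Kincaid grade-level formula on a toy text.
import re

def count_syllables(word: str) -> int:
    # Approximate syllables as runs of vowels (rough heuristic).
    return max(1, len(re.findall(r"[aeiouy]+", word.lower())))

def flesch_kincaid_grade(text: str) -> float:
    sentences = max(1, len(re.findall(r"[.!?]+", text)))
    words = re.findall(r"[A-Za-z']+", text)
    syllables = sum(count_syllables(w) for w in words)
    return 0.39 * (len(words) / sentences) + 11.8 * (syllables / len(words)) - 15.59

sample = "Generated essays vary in readability. Shorter sentences lower the grade level."
print(round(flesch_kincaid_grade(sample), 2))
```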
Peer reviewed
Liuying Gong; Jingyuan Chen; Fei Wu – IEEE Transactions on Learning Technologies, 2025
The capabilities of large language models (LLMs) in language comprehension, conversational interaction, and content generation have led to their widespread adoption across various educational stages and contexts. Given the fundamental role of education, concerns are rising about whether LLMs can serve as competent teachers. To address the…
Descriptors: Artificial Intelligence, Computer Software, Computational Linguistics, Comparative Analysis
Peer reviewed
Elizabeth L. Wetzler; Kenneth S. Cassidy; Margaret J. Jones; Chelsea R. Frazier; Nickalous A. Korbut; Chelsea M. Sims; Shari S. Bowen; Michael Wood – Teaching of Psychology, 2025
Background: Generative artificial intelligence (AI) represents a potentially powerful, time-saving tool for grading student essays. However, little is known about how AI-generated essay scores compare to human instructor scores. Objective: The purpose of this study was to compare the essay grading scores produced by AI with those of human…
Descriptors: Essays, Writing Evaluation, Scores, Evaluators
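The Wetzler et al. entry above compares AI-generated essay scores with human instructor scores, but the snippet does not specify the analysis. On hypothetical numbers, the sketch below shows two simple comparisons often reported in such studies: Pearson correlation and mean absolute difference.

```python
# Minimal sketch: comparing AI and human essay scores with Pearson r and
# mean absolute difference. All score values are invented for illustration.
import numpy as np
from scipy.stats import pearsonr

human_scores = np.array([85, 78, 92, 67, 74, 88])  # hypothetical instructor scores
ai_scores    = np.array([82, 80, 90, 70, 71, 91])  # hypothetical AI scores

r, p_value = pearsonr(human_scores, ai_scores)
mad = np.mean(np.abs(human_scores - ai_scores))
print(f"Pearson r = {r:.2f} (p = {p_value:.3f}), mean absolute difference = {mad:.1f} points")
```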
Peer reviewed; PDF on ERIC
Gustavo Simas da Silva; Vânia Ribas Ulbricht – International Association for Development of the Information Society, 2023
ChatGPT and Bard, two chatbots powered by Large Language Models (LLMs), are propelling the educational sector towards a new era of instructional innovation. Within this educational paradigm, the present investigation conducts a comparative analysis of these groundbreaking chatbots, scrutinizing their distinct operational characteristics and…
Descriptors: Comparative Analysis, Teaching Methods, Computer Software, Artificial Intelligence
Peer reviewed
Lei Guo; Wenjie Zhou; Xiao Li – Journal of Educational and Behavioral Statistics, 2024
The testlet design is very popular in educational and psychological assessments. This article proposes a new cognitive diagnosis model, the multiple-choice cognitive diagnostic testlet (MC-CDT) model for tests using testlets consisting of MC items. The MC-CDT model uses the original examinees' responses to MC items instead of dichotomously scored…
Descriptors: Multiple Choice Tests, Diagnostic Tests, Accuracy, Computer Software
Peer reviewed
Hyemin Yoon; HyunJin Kim; Sangjin Kim – Measurement: Interdisciplinary Research and Perspectives, 2024
Customer grade systems, applied for years to customers with excellent performance through customer segmentation, are widely maintained. Currently, financial institutions that operate a customer grade system provide similar services based on the score calculation criteria, but the score calculation criteria vary from the financial…
Descriptors: Classification, Artificial Intelligence, Prediction, Decision Making
Peer reviewed
Rebeckah K. Fussell; Megan Flynn; Anil Damle; Michael F. J. Fox; N. G. Holmes – Physical Review Physics Education Research, 2025
Recent advancements in large language models (LLMs) hold significant promise for improving physics education research that uses machine learning. In this study, we compare the application of various models for conducting a large-scale analysis of written text grounded in a physics education research classification problem: identifying skills in…
Descriptors: Physics, Computational Linguistics, Classification, Laboratory Experiments
Peer reviewed
Xieling Chen; Haoran Xie; Di Zou; Lingling Xu; Fu Lee Wang – Educational Technology & Society, 2025
In massive open online course (MOOC) environments, computer-based analysis of course reviews enables instructors and course designers to develop intervention strategies and improve instruction to support learners' learning. This study aimed to automatically and effectively identify learners' concerned topics within their written reviews. First, we…
Descriptors: Classification, MOOCs, Teaching Skills, Artificial Intelligence
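The Chen et al. entry above describes automatically identifying the topics learners raise in MOOC reviews; the study's own pipeline is not given in the snippet. As one commonly used baseline, the sketch below fits a small LDA topic model with scikit-learn on a handful of invented reviews.

```python
# Minimal sketch: surfacing topics from course reviews with LDA (illustrative
# baseline only; the reviews below are invented).
from sklearn.feature_extraction.text import CountVectorizer
from sklearn.decomposition import LatentDirichletAllocation

reviews = [
    "The video lectures were clear and well paced",
    "Quizzes were too hard and feedback came late",
    "Great instructor, but the forum was rarely answered",
    "Assignments helped me practice the lecture content",
]

vectorizer = CountVectorizer(stop_words="english")
doc_term = vectorizer.fit_transform(reviews)          # document-term matrix

lda = LatentDirichletAllocation(n_components=2, random_state=0)
lda.fit(doc_term)

terms = vectorizer.get_feature_names_out()
for i, topic in enumerate(lda.components_):
    top_terms = [terms[j] for j in topic.argsort()[-5:][::-1]]
    print(f"Topic {i}: {', '.join(top_terms)}")
```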
Peer reviewed
Muhammad Amin Nadim; Raffaele Di Fuccio – European Journal of Education, 2025
Higher education has witnessed remarkable technological advancements; however, the rapid rise of generative artificial intelligence (Gen AI) presents substantial challenges for teaching and research. This growing reliance has expanded educators' roles, underscoring the need for ethical and selective AI integration while preparing students and…
Descriptors: Artificial Intelligence, Teaching Methods, Learning Processes, Ethics
Peer reviewed; PDF on ERIC
Xavier Ochoa; Xiaomeng Huang; Yuli Shao – Journal of Learning Analytics, 2025
Generative AI (GenAI) has the potential to revolutionize the analysis of educational data, significantly impacting learning analytics (LA). This study explores the capability of non-experts, including administrators, instructors, and students, to effectively use GenAI for descriptive LA tasks without requiring specialized knowledge in data…
Descriptors: Learning Analytics, Artificial Intelligence, Computer Software, Scores
Peer reviewed
Peter Daly; Emmanuelle Deglaire – Innovations in Education and Teaching International, 2025
AI-enabled assessment of student papers has the potential to provide both summative and formative feedback and reduce the time spent on grading. Using auto-ethnography, this study compares AI-enabled and human assessment of business student examination papers in a law module based on previously established rubrics. Examination papers were…
Descriptors: Artificial Intelligence, Computer Software, Technology Integration, College Faculty
Peer reviewed
Philip Newton; Maira Xiromeriti – Assessment & Evaluation in Higher Education, 2024
Media coverage suggests that ChatGPT can pass examinations based on multiple choice questions (MCQs), including those used to qualify doctors, lawyers, scientists etc. This poses a potential risk to the integrity of those examinations. We reviewed current research evidence regarding the performance of ChatGPT on MCQ-based examinations in higher…
Descriptors: Multiple Choice Tests, Artificial Intelligence, Integrity, Computer Software