Showing all 4 results
Peer reviewed
Direct link
Ayfer Sayin; Mark Gierl – Educational Measurement: Issues and Practice, 2024
The purpose of this study is to introduce and evaluate a method for generating reading comprehension items using template-based automatic item generation. To begin, we describe a new model for generating reading comprehension items called the text analysis cognitive model assessing inferential skills across different reading passages. Next, the…
Descriptors: Algorithms, Reading Comprehension, Item Analysis, Man Machine Systems
Peer reviewed
PDF on ERIC | Download full text
Jonathan K. Foster; Peter Youngs; Rachel van Aswegen; Samarth Singh; Ginger S. Watson; Scott T. Acton – Journal of Learning Analytics, 2024
Despite a tremendous increase in the use of video for conducting research in classrooms as well as preparing and evaluating teachers, there remain notable challenges to using classroom videos at scale, including time and financial costs. Recent advances in artificial intelligence could make the process of analyzing, scoring, and cataloguing videos…
Descriptors: Learning Analytics, Automation, Classification, Artificial Intelligence
Peer reviewed
Direct link
Slavko Žitnik; Glenn Gordon Smith – Interactive Learning Environments, 2024
In the recent, and ongoing, COVID-19 pandemic, remote or online K-12 schooling became the norm. Even if the pandemic tails off somewhat, remote K-12 schooling will likely remain more frequent than it was before the pandemic. A mainstay technique of online learning, at least at the college and graduate level, has been the online discussion. Since…
Descriptors: Grade 4, Elementary School Students, Discussion, Automation
Peer reviewed
Direct link
Myers, Matthew C.; Wilson, Joshua – International Journal of Artificial Intelligence in Education, 2023
This study evaluated the construct validity of six scoring traits of an automated writing evaluation (AWE) system called "MI Write." Persuasive essays (N = 100) written by students in grades 7 and 8 were randomized at the sentence-level using a script written with Python's NLTK module. Each persuasive essay was randomized 30 times (n =…
Descriptors: Construct Validity, Automation, Writing Evaluation, Algorithms