Showing all 4 results
Peer reviewed
Full text (PDF) available on ERIC
Hikmet Sevgin – International Journal of Assessment Tools in Education, 2023
This study compares the Bagging and Boosting algorithms among ensemble methods and evaluates the classification performance of the TreeNet and Random Forest methods using these algorithms on data extracted from the ABIDE application in education. The main factor in choosing them for the analyses is that they are ensemble methods…
Descriptors: Algorithms, Mathematics Education, Classification, Mathematics Achievement
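As a rough illustration of the bagging-versus-boosting comparison described in this entry, the sketch below pits a bagging ensemble (Random Forest) against a boosting ensemble (Gradient Boosting) on a synthetic classification task. scikit-learn, the synthetic data, and all parameter choices are stand-ins, not the TreeNet/Random Forest setup or the ABIDE data used in the study.

# Illustrative comparison of a bagging ensemble and a boosting ensemble;
# all names and parameters here are assumptions for demonstration only.
from sklearn.datasets import make_classification
from sklearn.ensemble import RandomForestClassifier, GradientBoostingClassifier
from sklearn.model_selection import cross_val_score

# Synthetic stand-in data (the study uses ABIDE assessment data instead).
X, y = make_classification(n_samples=1000, n_features=20, random_state=0)

models = {
    "Bagging (Random Forest)": RandomForestClassifier(n_estimators=200, random_state=0),
    "Boosting (Gradient Boosting)": GradientBoostingClassifier(n_estimators=200, random_state=0),
}

for name, model in models.items():
    # 5-fold cross-validated accuracy as a simple classification-performance metric.
    scores = cross_val_score(model, X, y, cv=5, scoring="accuracy")
    print(f"{name}: mean accuracy = {scores.mean():.3f}")

The underlying design difference is that bagging trains each tree independently on a bootstrap sample and averages the votes, while boosting fits trees sequentially, each one correcting the errors of the ensemble built so far.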
Peer reviewed Peer reviewed
Direct link
Ayfer Sayin; Mark Gierl – Educational Measurement: Issues and Practice, 2024
The purpose of this study is to introduce and evaluate a method for generating reading comprehension items using template-based automatic item generation. To begin, we describe a new model for generating reading comprehension items called the text analysis cognitive model assessing inferential skills across different reading passages. Next, the…
Descriptors: Algorithms, Reading Comprehension, Item Analysis, Man Machine Systems
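To make "template-based automatic item generation" concrete, here is a minimal, purely illustrative sketch (not the authors' item model): a question template with slots that are filled from small word lists to produce many surface variants of one inferential reading item. The template text and slot values are invented for demonstration.

# Hypothetical sketch of template-based item generation: fill slots in an
# item template with combinations of values to generate many item variants.
import itertools

template = "After reading the passage about {topic}, what can you infer about {character}'s {trait}?"

slots = {
    "topic": ["a school trip", "a science fair"],
    "character": ["the narrator", "the teacher"],
    "trait": ["motivation", "attitude"],
}

items = [
    template.format(topic=t, character=c, trait=r)
    for t, c, r in itertools.product(slots["topic"], slots["character"], slots["trait"])
]

for item in items:
    print(item)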
Peer reviewed
Full text (PDF) available on ERIC
I Made Suarsana; Al Jupri; Didi Suryadi; Elah Nurlaelah; I Gusti Nyoman Yudi Hartawan – Mathematics Teaching Research Journal, 2025
Mathematical and computational thinking (CT) are closely interrelated, yet CT integration into mathematics instruction remains limited. This study aims to analyze the effectiveness of the teaching process for straight-line equations in enhancing students' CT skills and to propose improvements based on the findings. A qualitative case study…
Descriptors: Mathematics Instruction, Teaching Methods, Learning Processes, Instructional Design
Peer reviewed
Direct link
Myers, Matthew C.; Wilson, Joshua – International Journal of Artificial Intelligence in Education, 2023
This study evaluated the construct validity of six scoring traits of an automated writing evaluation (AWE) system called "MI Write." Persuasive essays (N = 100) written by students in grades 7 and 8 were randomized at the sentence level using a script written with Python's NLTK module. Each persuasive essay was randomized 30 times (n =…
Descriptors: Construct Validity, Automation, Writing Evaluation, Algorithms
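The sentence-level randomization described in this entry can be sketched roughly as follows: split an essay into sentences with NLTK, then shuffle their order. The function name, example essay text, and the choice of sent_tokenize are assumptions for illustration, not the study's actual script.

# Minimal sketch of sentence-level randomization with NLTK (illustrative only).
import random
import nltk
from nltk.tokenize import sent_tokenize

# Sentence tokenizer model (newer NLTK releases may also require "punkt_tab").
nltk.download("punkt", quiet=True)

def randomize_sentences(essay, seed=None):
    """Return the essay text with its sentences shuffled into a random order."""
    sentences = sent_tokenize(essay)
    random.Random(seed).shuffle(sentences)
    return " ".join(sentences)

essay = ("The school year should be longer. "
         "Students forget too much material over the summer. "
         "A longer year would also give teachers more time for review.")
print(randomize_sentences(essay, seed=1))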