Showing all 3 results
Peer reviewed
Matthew T. McCrudden; Linh Huynh; Bailing Lyu; Jonna M. Kulikowich; Danielle S. McNamara – Grantee Submission, 2024
Readers build a mental representation of text during reading. The coherence-building processes readers use to construct this mental representation are key to comprehension. We examined the effects of self-explanation on coherence-building processes as undergraduates (n = 51) read five complementary texts about natural selection and…
Descriptors: Reading Processes, Reading Comprehension, Undergraduate Students, Evolution
Peer reviewed
Linh Huynh; Danielle S. McNamara – Grantee Submission, 2025
Four versions of science and history texts were tailored to diverse hypothetical reader profiles (high and low reading skill and domain knowledge), generated by four Large Language Models (i.e., Claude, Llama, ChatGPT, and Gemini). Natural Language Processing (NLP) techniques were applied to examine variations in Large Language Model (LLM) text…
Descriptors: Artificial Intelligence, Natural Language Processing, Textbook Evaluation, Individualized Instruction
Jia Tracy Shen; Michiharu Yamashita; Ethan Prihar; Neil Heffernan; Xintao Wu; Sean McGrew; Dongwon Lee – Grantee Submission, 2021
Educational content labeled with proper knowledge components (KCs) is particularly useful to teachers and content organizers. However, manually labeling educational content is labor-intensive and error-prone. To address this challenge, prior research proposed machine learning-based solutions to auto-label educational content, with limited success.…
Descriptors: Mathematics Education, Knowledge Level, Video Technology, Educational Technology