Philip I. Pavlik; Luke G. Eglington – Grantee Submission, 2023
This paper presents a tool for creating student models in logistic regression. Creating student models has typically been done by expert selection of the appropriate terms, beginning with models as simple as IRT or AFM but more recently with highly complex models like BestLR. While alternative methods exist to select the appropriate predictors for…
Descriptors: Students, Models, Regression (Statistics), Alternative Assessment
Aaron Haim; Eamon Worden; Neil T. Heffernan – Grantee Submission, 2024
Since GPT-4's release, it has shown novel abilities in a variety of domains. This paper explores the use of LLM-generated explanations as on-demand assistance for problems within the ASSISTments platform. In particular, we are studying whether GPT-generated explanations are better than nothing on problems that have no supports and whether…
Descriptors: Artificial Intelligence, Learning Management Systems, Computer Software, Intelligent Tutoring Systems
Monique H. Harrison; Philip A. Hernandez – Grantee Submission, 2022
The interview experience is only one component of the process of interviewing -- software programmes can coordinate the pre-interview steps and begin a digitally-mediated relationship with participants long before the actual interview commences. This essay provides examples of how researchers can maximise their time and energy by digitally…
Descriptors: College Freshmen, Interviews, Computer Uses in Education, Computer Software
Kathryn S. McCarthy; Rod D. Roscoe; Laura K. Allen; Aaron D. Likens; Danielle S. McNamara – Grantee Submission, 2022
The benefits of writing strategy feedback are well established. This study examined the extent to which adding spelling and grammar checkers support writing and revision in comparison to providing writing strategy feedback alone. High school students (n = 119) wrote and revised six persuasive essays in Writing Pal, an automated writing evaluation…
Descriptors: High School Students, Automation, Writing Evaluation, Computer Software
Lynette Hazelton; Jessica Nastal; Norbert Elliot; Jill Burstein; Daniel F. McCaffrey – Grantee Submission, 2021
In writing studies research, automated writing evaluation technology is typically examined for a specific, often narrow purpose: to evaluate a particular writing improvement measure, to mine data for changes in writing performance, or to demonstrate the effectiveness of a single technology and accompanying validity arguments. This article adopts a…
Descriptors: Formative Evaluation, Writing Evaluation, Automation, Natural Language Processing
Wilson, Joshua; Huang, Yue; Palermo, Corey; Beard, Gaysha; MacArthur, Charles A. – Grantee Submission, 2021
This study examined a naturalistic, districtwide implementation of an automated writing evaluation (AWE) software program called "MI Write" in elementary schools. We specifically examined the degree to which aspects of MI Write were implemented, teacher and student attitudes towards MI Write, and whether MI Write usage along with other…
Descriptors: Automation, Writing Evaluation, Feedback (Response), Computer Software