Publication Date
In 2025 | 0 |
Since 2024 | 4 |
Since 2021 (last 5 years) | 12 |
Since 2016 (last 10 years) | 15 |
Since 2006 (last 20 years) | 18 |
Source
Grantee Submission | 14 |
Assessment in Education:… | 1 |
International Journal of… | 1 |
Journal of Educational… | 1 |
Technology, Instruction,… | 1 |
Author
Danielle S. McNamara | 18 |
Laura K. Allen | 9 |
Mihai Dascalu | 7 |
Rod D. Roscoe | 7 |
Stefan Ruseti | 5 |
Kathryn S. McCarthy | 4 |
Scott A. Crossley | 3 |
Andreea Dutulescu | 2 |
Erica L. Snow | 2 |
Ionut Paraschiv | 2 |
Panayiota Kendeou | 2 |
Publication Type
Reports - Research | 15 |
Journal Articles | 8 |
Speeches/Meeting Papers | 4 |
Reports - Descriptive | 3 |
Tests/Questionnaires | 2 |
Education Level
High Schools | 6 |
Secondary Education | 6 |
Elementary Education | 2 |
Grade 11 | 1 |
Grade 9 | 1 |
Higher Education | 1 |
Junior High Schools | 1 |
Middle Schools | 1 |
Postsecondary Education | 1 |
Assessments and Surveys
Gates MacGinitie Reading Tests | 3 |
Andreea Dutulescu; Stefan Ruseti; Denis Iorga; Mihai Dascalu; Danielle S. McNamara – Grantee Submission, 2024
The process of generating challenging and appropriate distractors for multiple-choice questions is a complex and time-consuming task. Existing methods for automated generation either have limitations in proposing challenging distractors or fail to effectively filter out incorrect choices that closely resemble the correct answer, share synonymous…
Descriptors: Multiple Choice Tests, Artificial Intelligence, Attention, Natural Language Processing

Andreea Dutulescu; Stefan Ruseti; Mihai Dascalu; Danielle S. McNamara – Grantee Submission, 2024
Assessing the difficulty of reading comprehension questions is crucial to educational methodologies and language understanding technologies. Traditional methods of assessing question difficulty frequently rely on human judgments or shallow metrics, often failing to accurately capture the intricate cognitive demands of answering a question. This…
Descriptors: Difficulty Level, Reading Tests, Test Items, Reading Comprehension
Stefan Ruseti; Ionut Paraschiv; Mihai Dascalu; Danielle S. McNamara – Grantee Submission, 2024
Automated Essay Scoring (AES) is a well-studied problem in Natural Language Processing applied in education. Solutions vary from handcrafted linguistic features to large Transformer-based models, implying a significant effort in feature extraction and model implementation. We introduce a novel Automated Machine Learning (AutoML) pipeline…
Descriptors: Computer Assisted Testing, Scoring, Automation, Essays
Stefan Ruseti; Ionut Paraschiv; Mihai Dascalu; Danielle S. McNamara – International Journal of Artificial Intelligence in Education, 2024
Automated Essay Scoring (AES) is a well-studied problem in Natural Language Processing applied in education. Solutions vary from handcrafted linguistic features to large Transformer-based models, implying a significant effort in feature extraction and model implementation. We introduce a novel Automated Machine Learning (AutoML) pipeline…
Descriptors: Computer Assisted Testing, Scoring, Automation, Essays
Robert-Mihai Botarleanu; Mihai Dascalu; Scott Andrew Crossley; Danielle S. McNamara – Grantee Submission, 2022
The ability to express yourself concisely and coherently is a crucial skill, both for academic purposes and professional careers. An important aspect to consider in writing is an adequate segmentation of ideas, which in turn requires a proper understanding of where to place paragraph breaks. However, these decisions are often performed…
Descriptors: Paragraph Composition, Text Structure, Automation, Identification
Laura K. Allen; Arthur C. Graesser; Danielle S. McNamara – Grantee Submission, 2023
Assessments of natural language can provide vast information about individuals' thoughts and cognitive processes, but they often rely on time-intensive human scoring, deterring researchers from collecting these sources of data. Natural language processing (NLP) gives researchers the opportunity to implement automated textual analyses across a…
Descriptors: Psychological Studies, Natural Language Processing, Automation, Research Methodology
Danielle S. McNamara; Panayiota Kendeou – Grantee Submission, 2022
We propose a framework designed to guide the development of automated writing practice and formative evaluation and feedback for young children (K-5th grade) -- the early Automated Writing Evaluation (early-AWE) Framework. e-AWE is grounded on the fundamental assumption that e-AWE is needed for young developing readers, but must incorporate…
Descriptors: Writing Evaluation, Automation, Formative Evaluation, Feedback (Response)
Danielle S. McNamara; Panayiota Kendeou – Assessment in Education: Principles, Policy & Practice, 2022
We propose a framework designed to guide the development of automated writing practice and formative evaluation and feedback for young children (K-5th grade) -- the early Automated Writing Evaluation (early-AWE) Framework. e-AWE is grounded on the fundamental assumption that e-AWE is needed for young developing readers, but must incorporate…
Descriptors: Writing Evaluation, Automation, Formative Evaluation, Feedback (Response)
Reese Butterfuss; Rod D. Roscoe; Laura K. Allen; Kathryn S. McCarthy; Danielle S. McNamara – Grantee Submission, 2022
The present study examined the extent to which adaptive feedback and just-in-time writing strategy instruction improved the quality of high school students' persuasive essays in the context of the Writing Pal (W-Pal). W-Pal is a technology-based writing tool that integrates automated writing evaluation into an intelligent tutoring system. Students…
Descriptors: High School Students, Writing Evaluation, Writing Instruction, Feedback (Response)
Reese Butterfuss; Rod D. Roscoe; Laura K. Allen; Kathryn S. McCarthy; Danielle S. McNamara – Journal of Educational Computing Research, 2022
The present study examined the extent to which adaptive feedback and just-in-time writing strategy instruction improved the quality of high school students' persuasive essays in the context of the Writing Pal (W-Pal). W-Pal is a technology-based writing tool that integrates automated writing evaluation into an intelligent tutoring system. Students…
Descriptors: High School Students, Writing Evaluation, Writing Instruction, Feedback (Response)
Tong Li; Sarah D. Creer; Tracy Arner; Rod D. Roscoe; Laura K. Allen; Danielle S. McNamara – Grantee Submission, 2022
Automated writing evaluation (AWE) tools can facilitate teachers' analysis of and feedback on students' writing. However, increasing evidence indicates that writing instructors experience challenges in implementing AWE tools successfully. For this reason, our development of the Writing Analytics Tool (WAT) has employed a participatory approach…
Descriptors: Automation, Writing Evaluation, Learning Analytics, Participatory Research
Stefan Ruseti; Mihai Dascalu; Amy M. Johnson; Danielle S. McNamara; Renu Balyan; Kathryn S. McCarthy; Stefan Trausan-Matu – Grantee Submission, 2018
Summarization enhances comprehension and is considered an effective strategy to promote and enhance learning and deep understanding of texts. However, summarization is seldom implemented by teachers in classrooms because manual evaluation requires a great deal of effort and time. Although the need for automated support is pressing, there are only a…
Descriptors: Documentation, Artificial Intelligence, Educational Technology, Writing (Composition)
Kathryn S. McCarthy; Rod D. Roscoe; Laura K. Allen; Aaron D. Likens; Danielle S. McNamara – Grantee Submission, 2022
The benefits of writing strategy feedback are well established. This study examined the extent to which adding spelling and grammar checkers support writing and revision in comparison to providing writing strategy feedback alone. High school students (n = 119) wrote and revised six persuasive essays in Writing Pal, an automated writing evaluation…
Descriptors: High School Students, Automation, Writing Evaluation, Computer Software
Danielle S. McNamara; Laura K. Allen; Scott A. Crossley; Mihai Dascalu; Cecile A. Perret – Grantee Submission, 2017
Language is of central importance to the field of education because it is a conduit for communicating and understanding information. Therefore, researchers in the field of learning analytics can benefit from methods developed to analyze language both accurately and efficiently. Natural language processing (NLP) techniques can provide such an…
Descriptors: Natural Language Processing, Learning Analytics, Educational Technology, Automation
Scott A. Crossley; Danielle S. McNamara – Grantee Submission, 2016
The purpose of this handbook is to provide actionable information to educators, administrators, and researchers about current, available research-based educational technologies that provide adaptive (personalized) instruction to students on literacy, including reading comprehension and writing. This handbook comprises chapters by leading…
Descriptors: Educational Technology, Literacy, Reading Comprehension, Writing Skills