Adam B. Lockwood; Joshua Castleberry – Contemporary School Psychology, 2025
Technological advances in artificial intelligence (AI) have brought forth the potential for models to assist in academic writing. However, concerns regarding the accuracy, reliability, and impact of AI in academic writing have been raised. This study examined the capabilities of GPT-4, a state-of-the-art AI language model, in writing an American…
Descriptors: Artificial Intelligence, Natural Language Processing, Technology Uses in Education, Writing (Composition)
Firoozi, Tahereh; Bulut, Okan; Epp, Carrie Demmans; Naeimabadi, Ali; Barbosa, Denilson – Journal of Applied Testing Technology, 2022
Automated Essay Scoring (AES) using neural networks has helped increase the accuracy and efficiency of scoring students' written tasks. Generally, the improved accuracy of neural network approaches has been attributed to the use of modern word embedding techniques. However, which word embedding techniques produce higher accuracy in AES systems…
Descriptors: Computer Assisted Testing, Scoring, Essays, Artificial Intelligence
Tahereh Firoozi; Okan Bulut; Mark J. Gierl – International Journal of Assessment Tools in Education, 2023
The proliferation of large language models represents a paradigm shift in the landscape of automated essay scoring (AES) systems, fundamentally elevating their accuracy and efficacy. This study presents an extensive examination of large language models, with a particular emphasis on the transformative influence of transformer-based models, such as…
Descriptors: Turkish, Writing Evaluation, Essays, Accuracy
Wan, Qian; Crossley, Scott; Banawan, Michelle; Balyan, Renu; Tian, Yu; McNamara, Danielle; Allen, Laura – International Educational Data Mining Society, 2021
The current study explores the ability to predict argumentative claims in structurally annotated student essays to gain insights into the role of argumentation structure in the quality of persuasive writing. Our annotation scheme specified six types of argumentative components based on the well-established Toulmin model of argumentation. We…
Descriptors: Essays, Persuasive Discourse, Automation, Identification
Shalva Kikalishvili – Interactive Learning Environments, 2024
The presented study seeks to examine the potential applications of the OpenAI language model GPT-3 within the realm of education. Specifically, the inquiry focuses on the feasibility of utilizing GPT-3 to generate essays based on customized prompts. To this end, the experimentation involved providing GPT-3 with tailored prompts derived from diverse…
Descriptors: Artificial Intelligence, Natural Language Processing, Technology Uses in Education, Opportunities
Yishen Song; Qianta Zhu; Huaibo Wang; Qinhua Zheng – IEEE Transactions on Learning Technologies, 2024
Manually scoring and revising student essays has long been a time-consuming task for educators. With the rise of natural language processing techniques, automated essay scoring (AES) and automated essay revising (AER) have emerged to alleviate this burden. However, current AES and AER models require large amounts of training data and lack…
Descriptors: Scoring, Essays, Writing Evaluation, Computer Software
Ursula Holzmann; Sulekha Anand; Alexander Y. Payumo – Advances in Physiology Education, 2025
Generative large language models (LLMs) like ChatGPT can quickly produce informative essays on various topics. However, the information generated cannot be fully trusted, as artificial intelligence (AI) can make factual mistakes. This poses challenges for using such tools in college classrooms. To address this, an adaptable assignment called the…
Descriptors: Artificial Intelligence, Technology Uses in Education, Natural Language Processing, Thinking Skills
Yi Gui – ProQuest LLC, 2024
This study explores using transfer learning in machine learning for natural language processing (NLP) to create generic automated essay scoring (AES) models, providing instant online scoring for statewide writing assessments in K-12 education. The goal is to develop an instant online scorer that is generalizable to any prompt, addressing the…
Descriptors: Writing Tests, Natural Language Processing, Writing Evaluation, Scoring
Lu, Chang; Cutumisu, Maria – International Educational Data Mining Society, 2021
Digitalization and automation of test administration, score reporting, and feedback provision have the potential to benefit large-scale and formative assessments. Many studies on automated essay scoring (AES) and feedback generation systems were published in the last decade, but few connected AES and feedback generation within a unified framework…
Descriptors: Learning Processes, Automation, Computer Assisted Testing, Scoring
Balyan, Renu; McCarthy, Kathryn S.; McNamara, Danielle S. – Grantee Submission, 2017
This study examined how machine learning and natural language processing (NLP) techniques can be leveraged to assess the interpretive behavior that is required for successful literary text comprehension. We compared the accuracy of seven different machine learning classification algorithms in predicting human ratings of student essays about…
Descriptors: Artificial Intelligence, Natural Language Processing, Reading Comprehension, Literature
Balyan, Renu; McCarthy, Kathryn S.; McNamara, Danielle S. – International Educational Data Mining Society, 2017
This study examined how machine learning and natural language processing (NLP) techniques can be leveraged to assess the interpretive behavior that is required for successful literary text comprehension. We compared the accuracy of seven different machine learning classification algorithms in predicting human ratings of student essays about…
Descriptors: Artificial Intelligence, Natural Language Processing, Reading Comprehension, Literature
Crossley, Scott A.; Kyle, Kristopher; McNamara, Danielle S. – Grantee Submission, 2015
This study investigates the relative efficacy of using linguistic micro-features, the aggregation of such features, and a combination of micro-features and aggregated features in developing automatic essay scoring (AES) models. Although the use of aggregated features is widespread in AES systems (e.g., e-rater; Intellimetric), very little…
Descriptors: Essays, Scoring, Feedback (Response), Writing Evaluation
Allen, Laura K.; Crossley, Scott A.; McNamara, Danielle S. – Grantee Submission, 2015
We investigated linguistic factors that relate to misalignment between students' and teachers' ratings of essay quality. Students (n = 126) wrote essays and rated the quality of their work. Teachers then provided their own ratings of the essays. Results revealed that students who were less accurate in their self-assessments produced essays that…
Descriptors: Essays, Scores, Natural Language Processing, Interrater Reliability
Crossley, Scott A.; Allen, Laura K.; Snow, Erica L.; McNamara, Danielle S. – Journal of Educational Data Mining, 2016
This study investigates a novel approach to automatically assessing essay quality that combines natural language processing approaches that assess text features with approaches that assess individual differences in writers such as demographic information, standardized test scores, and survey results. The results demonstrate that combining text…
Descriptors: Essays, Scoring, Writing Evaluation, Natural Language Processing
Tono, Yukio; Satake, Yoshiho; Miura, Aika – ReCALL, 2014
This study reports on the results of classroom research investigating the effects of corpus use in the process of revising compositions in English as a foreign language. Our primary aim was to investigate the relationship between the information extracted from corpus data and how that information actually helped in revising different types of…
Descriptors: Computational Linguistics, Feedback (Response), Revision (Written Composition), English (Second Language)