Showing 1 to 15 of 698 results
Peer reviewed
Almusharraf, Norah; Alotaibi, Hind – Technology, Knowledge and Learning, 2023
Evaluating written texts is believed to be a time-consuming process that can lack consistency and objectivity. Automated essay scoring (AES) can provide solutions to some of the limitations of human scoring. This research aimed to evaluate the performance of one AES system, Grammarly, in comparison to human raters. Both approaches' performances…
Descriptors: Writing Evaluation, Writing Tests, Essay Tests, Essays
Peer reviewed
Yujia Liu; Emily K. Penner; Sabrina Solanki; Xuehan Zhou – Journal of Education Human Resources, 2025
Identifying high-quality educators at the point of hire can reduce future recruitment costs and minimize the impact of attrition on school organizations and student learning. One low-cost way to screen applicants and learn about their beliefs, values, and pedagogy is through their short-essay writing samples. However, there is limited research…
Descriptors: Teacher Selection, Screening Tests, Essays, Job Applicants
Peer reviewed
Scott A. Crossley; Minkyung Kim; Quian Wan; Laura K. Allen; Rurik Tywoniw; Danielle S. McNamara – Grantee Submission, 2025
This study examines the potential to use non-expert, crowd-sourced raters to score essays by comparing expert raters' and crowd-sourced raters' assessments of writing quality. Expert raters and crowd-sourced raters scored 400 essays using a standardised holistic rubric and comparative judgement (pairwise ratings) scoring techniques, respectively.…
Descriptors: Writing Evaluation, Essays, Novices, Knowledge Level
Peer reviewed
Chan, Kinnie Kin Yee; Bond, Trevor; Yan, Zi – Language Testing, 2023
We investigated the relationship between the scores assigned by an Automated Essay Scoring (AES) system, the Intelligent Essay Assessor (IEA), and grades allocated by trained, professional human raters to English essay writing by instigating two procedures novel to written-language assessment: the logistic transformation of AES raw scores into…
Descriptors: Computer Assisted Testing, Essays, Scoring, Scores
Peer reviewed
Karima Bouziane; Abdelmounim Bouziane – Discover Education, 2024
The evaluation of student essay corrections has become a focal point in understanding the evolving role of Artificial Intelligence (AI) in education. This study aims to assess the accuracy, efficiency, and cost-effectiveness of ChatGPT's essay correction compared to human correction, with a primary focus on identifying and rectifying grammatical…
Descriptors: Artificial Intelligence, Essays, Writing Skills, Grammar
Peer reviewed
Backman, Ylva; Reznitskaya, Alina; Gardelli, Viktor; Wilkinson, Ian A. G. – Written Communication, 2023
Current approaches used in educational research and practice to evaluate the quality of written arguments often rely on structural analysis. In such assessments, credit is awarded for the presence of structural elements of an argument, such as claims, evidence, and rebuttals. In this article, we discuss limitations of such approaches, including…
Descriptors: Writing Evaluation, Models, Persuasive Discourse, Evaluation Methods
Peer reviewed
Ruth Li – Written Communication, 2025
Students are expected to interpret the complexities and nuances of literary texts yet might struggle with interpreting texts in ways that are valued in literary studies. Examining students' language choices can support instructors and students with developing concrete, explicit understandings of the ways language creates meanings in discourse.…
Descriptors: Linguistics, Writing (Composition), Literature Appreciation, Metalinguistics
Peer reviewed
Galina Shulgina; Jamie Costley; Irina Shcheglova; Han Zhang; Natalya Sedova – Smart Learning Environments, 2024
While peer-editing is considered an important part of developing students' academic writing, questions remain about how different types of peer-editing affect subsequent student performance. The present study looked at a group of university students (N = 149) engaged in peer editing of one another's essays in an online security studies course. The…
Descriptors: Peer Evaluation, Writing Evaluation, Editing, Feedback (Response)
Peer reviewed
AL Harrasi, Kothar Talib Sulaiman – Language Testing in Asia, 2023
Drawing upon research on the ways texts work as communication across different disciplines, this study investigated teacher and student feedback practices on three different patterns of writing: comparison-contrast essays, opinion essays, and cause-and-effect essays. The data were collected through three qualitative techniques: interviews, class…
Descriptors: Writing Evaluation, Feedback (Response), Rhetoric, Essays
Peer reviewed
PDF on ERIC
Siraprapa Kotmungkun; Wichuta Chompurach; Piriya Thaksanan – English Language Teaching Educational Journal, 2024
This study explores the writing quality of two AI chatbots, OpenAI ChatGPT and Google Gemini. The research assesses the quality of the generated texts based on five essay models using the T.E.R.A. software, focusing on ease of understanding, readability, and reading levels using the Flesch-Kincaid formula. Thirty essays were generated, 15 from…
Descriptors: Plagiarism, Artificial Intelligence, Computer Software, Essays
Peer reviewed
Shermis, Mark D. – Journal of Educational Measurement, 2022
One of the challenges of discussing validity arguments for machine scoring of essays centers on the absence of a commonly held definition and theory of good writing. At best, the algorithms attempt to measure select attributes of writing and calibrate them against human ratings with the goal of accurate prediction of scores for new essays.…
Descriptors: Scoring, Essays, Validity, Writing Evaluation
Peer reviewed
PDF on ERIC
Maira Klyshbekova; Pamela Abbott – Electronic Journal of e-Learning, 2024
There is a current debate about the extent to which ChatGPT, a natural language AI chatbot, can disrupt processes in higher education settings. The chatbot is capable of not only answering queries in a human-like way within seconds but can also provide long tracts of texts which can be in the form of essays, emails, and coding. In this study, in…
Descriptors: Artificial Intelligence, Higher Education, Technology Uses in Education, Evaluation Methods
Peer reviewed
Dadi Ramesh; Suresh Kumar Sanampudi – European Journal of Education, 2024
Automatic essay scoring (AES) is an essential educational application in natural language processing. This automated process will alleviate the burden by increasing the reliability and consistency of the assessment. With the advances in text embedding libraries and neural network models, AES systems achieved good results in terms of accuracy.…
Descriptors: Scoring, Essays, Writing Evaluation, Memory
Peer reviewed
Wang, Jue; Engelhard, George; Combs, Trenton – Journal of Experimental Education, 2023
Unfolding models are frequently used to develop scales for measuring attitudes. Recently, unfolding models have been applied to examine rater severity and accuracy within the context of rater-mediated assessments. One of the problems in applying unfolding models to rater-mediated assessments is that the substantive interpretations of the latent…
Descriptors: Writing Evaluation, Scoring, Accuracy, Computational Linguistics
Peer reviewed
Christian Tarchi; Lidia Casado-Ledesma; Giulia Sanna; Margherita Conti – European Journal of Psychology of Education, 2024
The demands of learning in the twenty-first century require being skilled in the use and comprehension of multiple documents. Some individual factors such as the metacognitive skill of theory of mind (ToM) are related to this ability. This study investigated the relationship between university students' ability to comprehend multiple documents,…
Descriptors: Theory of Mind, Protocol Analysis, Predictor Variables, Correlation