Showing 1 to 15 of 22 results
Peer reviewed
Doewes, Afrizal; Kurdhi, Nughthoh Arfawi; Saxena, Akrati – International Educational Data Mining Society, 2023
Automated Essay Scoring (AES) tools aim to improve the efficiency and consistency of essay scoring by using machine learning algorithms. In existing research on this topic, most researchers agree that human-automated score agreement remains the benchmark for assessing the accuracy of machine-generated scores. To measure the performance of…
Descriptors: Essays, Writing Evaluation, Evaluators, Accuracy
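The abstract is truncated at the performance question, but the human-machine agreement benchmark it describes is conventionally reported as quadratic weighted kappa (QWK). A minimal sketch of that computation, using scikit-learn and invented score vectors:

```python
# A minimal sketch of the human-machine agreement benchmark, measured
# with quadratic weighted kappa (QWK). The score vectors are invented.
from sklearn.metrics import cohen_kappa_score

human_scores = [3, 4, 2, 5, 3, 4, 1, 2, 4, 3]    # hypothetical human rater
machine_scores = [3, 4, 3, 5, 2, 4, 1, 2, 5, 3]  # hypothetical AES output

qwk = cohen_kappa_score(human_scores, machine_scores, weights="quadratic")
print(f"QWK (human vs. machine): {qwk:.3f}")
```

A QWK of 1.0 is perfect agreement; AES studies often treat machine scores as acceptable when their QWK approaches the human-human QWK on the same essays.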
Peer reviewed
Tahereh Firoozi; Okan Bulut; Mark J. Gierl – International Journal of Assessment Tools in Education, 2023
The proliferation of large language models represents a paradigm shift in the landscape of automated essay scoring (AES) systems, fundamentally elevating their accuracy and efficacy. This study presents an extensive examination of large language models, with a particular emphasis on the transformative influence of transformer-based models, such as…
Descriptors: Turkish, Writing Evaluation, Essays, Accuracy
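The abstract does not name an implementation, but the transformer-based setup it refers to is typically a pretrained encoder with a single regression head over the essay text. A sketch under that assumption, using the Hugging Face transformers library with a generic checkpoint and an invented essay string:

```python
# Sketch of a transformer-based AES scorer: a pretrained encoder with a
# single regression head. The checkpoint and essay text are placeholders.
import torch
from transformers import AutoTokenizer, AutoModelForSequenceClassification

model_name = "bert-base-uncased"  # any pretrained encoder would do
tokenizer = AutoTokenizer.from_pretrained(model_name)
# num_labels=1 yields one continuous output, suitable for score regression.
model = AutoModelForSequenceClassification.from_pretrained(model_name, num_labels=1)

essay = "Automated scoring promises consistency, but..."  # invented essay
inputs = tokenizer(essay, truncation=True, max_length=512, return_tensors="pt")
with torch.no_grad():
    score = model(**inputs).logits.squeeze().item()
print(f"Predicted score (meaningless until the head is fine-tuned): {score:.2f}")
```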
UK Department for Education, 2024
This report sets out the findings of the technical development work completed as part of the Use Cases for Generative AI in Education project, commissioned by the Department for Education (DfE) in September 2023. It has been published alongside the User Research Report, which sets out the findings from the ongoing user engagement activity…
Descriptors: Artificial Intelligence, Technology Uses in Education, Computer Software, Computational Linguistics
Peer reviewed
Nehm, Ross H.; Haertig, Hendrik – Journal of Science Education and Technology, 2012
Our study examines the efficacy of Computer Assisted Scoring (CAS) of open-response text relative to expert human scoring within the complex domain of evolutionary biology. Specifically, we explored whether CAS can diagnose the explanatory elements (or Key Concepts) that comprise undergraduate students' explanatory models of natural selection with…
Descriptors: Evolution, Undergraduate Students, Interrater Reliability, Computers
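Nehm and Haertig's scoring rules are not given in the abstract; the general approach of diagnosing Key Concepts in open responses can be illustrated with hand-written lexical patterns, all of which are invented here:

```python
# Illustrative Key Concept diagnosis for natural-selection explanations.
# The concept labels echo the abstract; every regex pattern is invented.
import re

KEY_CONCEPTS = {
    "variation":    r"\bvar(y|ies|ied|iation)\b|\bdiffer(ent|ence)s?\b",
    "heritability": r"\bherit|\binherit|\boffspring\b|\bgenes?\b",
    "selection":    r"\bsurviv|\breproduc|\bfit(ness|ter)\b",
}

def diagnose(response: str) -> list[str]:
    """Return the Key Concepts whose patterns appear in a student response."""
    text = response.lower()
    return [c for c, pat in KEY_CONCEPTS.items() if re.search(pat, text)]

answer = ("Beak size varied among the finches; birds with larger beaks "
          "survived and left more offspring.")
print(diagnose(answer))  # ['variation', 'heritability', 'selection']
```

Real CAS systems replace such patterns with trained text classifiers, but the diagnostic output, which concepts are present in an explanation, is the same kind.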
Peer reviewed
Yagi, Sane M.; Al-Salman, Saleh – Studies in Second Language Learning and Teaching, 2011
Writing is a complex skill that is hard to teach. Although the written product is what is often evaluated in the context of language teaching, the process of giving thought to linguistic form is fascinating. For almost forty years, language teachers have found it more effective to help learners in the writing process than in the written product;…
Descriptors: Writing Instruction, Teaching Methods, Computer Software, Educational Technology
Peer reviewed
Sabapathy, Elangkeeran A/L; Rahim, Rozlan Abd; Jusoff, Kamaruzaman – English Language Teaching, 2009
The purpose of this article is to examine the extent to which "plagiarismdetect.com," an online tool for detecting plagiarism, helps academicians tackle the ever-growing problem of plagiarism. Concerned with term papers, essays, and often with full-blown research reports, a tool like "plagiarismdetect.com" may…
Descriptors: Plagiarism, Computer Software, Essays, Research Papers (Students)
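The abstract does not describe how plagiarismdetect.com works internally; detectors of this kind typically rely on matching shared word n-grams against source documents. A toy sketch of that idea, with both texts invented:

```python
# Toy plagiarism check: the fraction of a submission's word 5-grams that
# also occur in a source document. Both texts are illustrative only.
def ngrams(text: str, n: int = 5) -> set:
    words = text.lower().split()
    return {tuple(words[i:i + n]) for i in range(len(words) - n + 1)}

source = "the quick brown fox jumps over the lazy dog near the river bank"
submission = "my essay notes that the quick brown fox jumps over the lazy dog today"

shared = ngrams(submission) & ngrams(source)
overlap = len(shared) / max(len(ngrams(submission)), 1)
print(f"5-gram overlap: {overlap:.0%}")  # a high overlap flags likely copying
```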
Peer reviewed
Clariana, Roy B.; Wallace, Patricia E.; Godshalk, Veronica M. – Educational Technology Research and Development, 2009
Essays are an important measure of complex learning, but pronouns can confound an author's intended meaning for both readers and text analysis software. This descriptive investigation considers the effect of pronouns on a computer-based text analysis approach, "ALA-Reader," which uses students' essays as the data source for deriving individual and…
Descriptors: Sentences, Cognitive Structures, Essays, Content Analysis
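ALA-Reader's exact algorithm is not detailed here, but the pronoun problem the abstract raises can be illustrated with a generic sentence-level co-occurrence network: an unresolved "It" severs a link the author intended. The term list and sentences below are invented:

```python
# Illustration of the pronoun problem in network-based essay analysis:
# links come from key terms co-occurring in a sentence, so an unresolved
# "It" hides the author's intended tiger-prey association.
from itertools import combinations

TERMS = {"tiger", "prey", "habitat"}

def links(sentences: list[str]) -> set:
    """Pair up key terms that co-occur within the same sentence."""
    pairs = set()
    for s in sentences:
        found = sorted(t for t in TERMS if t in s.lower())
        pairs |= {frozenset(p) for p in combinations(found, 2)}
    return pairs

explicit = ["The tiger stalks its prey in the habitat."]
pronominal = ["The tiger lives in the habitat.", "It stalks prey at night."]

print(links(explicit))    # tiger-prey, tiger-habitat, prey-habitat
print(links(pronominal))  # tiger-habitat only; "It" drops the tiger-prey link
```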
Peer reviewed
McCurry, Doug – Assessing Writing, 2010
This article considers the claim that machine scoring of writing test responses agrees with human readers as much as humans agree with other humans. Such claims about the reliability of machine scoring of writing are usually based on specific and constrained writing tasks, and there is reason to ask whether machine scoring of writing requires…
Descriptors: Writing Tests, Scoring, Interrater Reliability, Computer Assisted Testing
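McCurry's question can be made concrete by computing the same agreement statistics for a human-human pair and a human-machine pair on the same essays. A sketch with invented ratings:

```python
# Comparing human-human and human-machine agreement on the same essays,
# via exact and adjacent (within one point) agreement rates. Data invented.
def agreement(x, y):
    exact = sum(a == b for a, b in zip(x, y)) / len(x)
    adjacent = sum(abs(a - b) <= 1 for a, b in zip(x, y)) / len(x)
    return exact, adjacent

rater_a = [4, 3, 5, 2, 4, 3, 1, 5, 2, 3]  # hypothetical human rater A
rater_b = [4, 3, 4, 2, 5, 3, 2, 5, 2, 3]  # hypothetical human rater B
machine = [4, 3, 5, 3, 4, 4, 1, 4, 2, 3]  # hypothetical machine scores

for name, other in [("human-human", rater_b), ("human-machine", machine)]:
    exact, adjacent = agreement(rater_a, other)
    print(f"{name}: exact {exact:.0%}, adjacent {adjacent:.0%}")
```

The article's caution applies here: near-identical numbers say little if the writing task was constrained in ways that make machine scoring easy.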
Peer reviewed
McNamara, Danielle S.; Crossley, Scott A.; McCarthy, Philip M. – Written Communication, 2010
In this study, a corpus of expert-graded essays, based on a standardized scoring rubric, is computationally evaluated to distinguish essays rated high from those rated low. The automated tool, Coh-Metrix, is used to examine the degree to which high- and low-proficiency essays can be predicted by…
Descriptors: Essays, Undergraduate Students, Educational Quality, Computational Linguistics
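Coh-Metrix itself computes dozens of cohesion and lexical indices; the prediction step the abstract describes can be sketched with a few stand-in features and a simple classifier. All feature values and labels below are invented:

```python
# Sketch of the prediction step: a handful of linguistic indices
# (stand-ins for Coh-Metrix features) classifying essays as high or low.
from sklearn.linear_model import LogisticRegression

# Columns: lexical diversity, mean word frequency, syntactic complexity
X = [[0.62, 2.1, 5.3], [0.48, 3.0, 3.1], [0.71, 1.8, 6.0],
     [0.45, 3.2, 2.8], [0.66, 2.0, 5.5], [0.50, 2.9, 3.4]]
y = [1, 0, 1, 0, 1, 0]  # 1 = rated high, 0 = rated low

clf = LogisticRegression().fit(X, y)
print(clf.predict([[0.68, 1.9, 5.8]]))  # expected: [1], i.e. high
```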
Peer reviewed
Lee, Yong-Won; Gentile, Claudia; Kantor, Robert – Applied Linguistics, 2010
The main purpose of the study was to investigate the distinctness and reliability of analytic (or multi-trait) rating dimensions and their relationships to holistic scores and "e-rater"[R] essay feature variables in the context of the TOEFL[R] computer-based test (TOEFL CBT) writing assessment. Data analyzed in the study were holistic…
Descriptors: Writing Evaluation, Writing Tests, Scoring, Essays
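The distinctness question in Lee et al. is commonly probed with correlations: analytic dimensions that correlate very highly with one another are not measuring separate traits. A sketch on invented ratings:

```python
# Probing the distinctness of analytic rating dimensions: strongly
# intercorrelated dimensions are not measuring separate traits.
import numpy as np

# Rows: essays; columns: development, organization, language use
analytic = np.array([[4, 4, 3], [2, 3, 2], [5, 5, 4],
                     [3, 3, 3], [1, 2, 2], [4, 5, 4]])
holistic = np.array([4, 2, 5, 3, 2, 4])

print(np.corrcoef(analytic, rowvar=False).round(2))  # intercorrelations
for i, name in enumerate(["development", "organization", "language use"]):
    r = np.corrcoef(analytic[:, i], holistic)[0, 1]
    print(f"{name} vs. holistic: r = {r:.2f}")
```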
Peer reviewed
Nakayama, Minoru; Yamamoto, Hiroh; Santiago, Rowena – Electronic Journal of e-Learning, 2010
e-Learning imposes some restrictions on how learning performance is assessed. Online testing usually takes the form of multiple-choice questions, without any essay-type assessment. Major reasons for employing multiple-choice tasks in e-learning include ease of implementation and ease of managing learners' responses. To address this…
Descriptors: Electronic Learning, Testing, Essay Tests, Online Courses
Preston, Michael D. – Educational Technology, 2010
An inquiry-based approach to watching videos of children engaged in learning, supported by tools that allow for frequent and close viewing, provides an opportunity for prospective teachers to develop their skills of observation and interpretation before entering the classroom. The in-depth study of videos creates a context in which teachers can…
Descriptors: Preservice Teacher Education, Inquiry, Learning Strategies, Essays
Peer reviewed
Chodorow, Martin; Gamon, Michael; Tetreault, Joel – Language Testing, 2010
In this paper, we describe and evaluate two state-of-the-art systems for identifying and correcting writing errors involving English articles and prepositions. Criterion[superscript SM], developed by Educational Testing Service, and "ESL Assistant", developed by Microsoft Research, both use machine learning techniques to build models of article…
Descriptors: Grammar, Feedback (Response), Form Classes (Languages), Second Language Learning
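Neither Criterion nor ESL Assistant is open source; the underlying technique the abstract names, a model that predicts the expected article or preposition from local context and flags mismatches, can be sketched as follows, with a deliberately tiny invented training set:

```python
# Sketch of article-error detection: predict the expected article from
# the surrounding words and flag a mismatch with what the writer used.
# The six training contexts are a tiny invented stand-in for a corpus.
from sklearn.feature_extraction.text import CountVectorizer
from sklearn.linear_model import LogisticRegression
from sklearn.pipeline import make_pipeline

contexts = ["ate __ apple", "saw __ movie", "had __ idea",
            "read __ book", "made __ error", "told __ story"]
articles = ["an", "a", "an", "a", "an", "a"]

model = make_pipeline(CountVectorizer(), LogisticRegression())
model.fit(contexts, articles)

written, context = "a", "ate __ apple"
predicted = model.predict([context])[0]
if predicted != written:
    print(f"Possible error: wrote '{written}', expected '{predicted}'")
```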
Peer reviewed
Mogey, Nora; Paterson, Jessie; Burk, John; Purcell, Michael – ALT-J: Research in Learning Technology, 2010
Students at the University of Edinburgh do almost all their work on computers, but at the end of the semester they are examined by handwritten essays. Intuitively it would be appealing to allow students the choice of handwriting or typing, but this raises a concern that perhaps this might not be "fair"--that the choice a student makes,…
Descriptors: Handwriting, Essay Tests, Interrater Reliability, Grading
Peer reviewed
Grimes, Douglas; Warschauer, Mark – Journal of Technology, Learning, and Assessment, 2010
Automated writing evaluation (AWE) software uses artificial intelligence (AI) to score student essays and support revision. We studied how an AWE program called MY Access![R] was used in eight middle schools in Southern California over a three-year period. Although many teachers and students considered automated scoring unreliable, and teachers'…
Descriptors: Automation, Writing Evaluation, Essays, Artificial Intelligence