Publication Date
In 2025: 0
Since 2024: 0
Since 2021 (last 5 years): 3
Since 2016 (last 10 years): 3
Since 2006 (last 20 years): 9
Descriptor
Computer Uses in Education: 13
Writing Evaluation: 13
Computer Software: 11
Feedback (Response): 5
Essays: 4
Higher Education: 4
Second Language Learning: 4
Writing Instruction: 4
Automation: 3
Comparative Analysis: 3
Computer Software Evaluation: 3
Publication Type
Journal Articles: 12
Reports - Research: 6
Reports - Descriptive: 3
Opinion Papers: 2
Reports - Evaluative: 2
Guides - Classroom - Teacher: 1
Tests/Questionnaires: 1
Education Level
Higher Education: 4
Postsecondary Education: 3
Al-Inbari, Fatima Abdullah Yahya; Al-Wasy, Baleigh Qassim Mohammed – Education and Information Technologies, 2023
Automated Writing Evaluation (AWE) is one of the machine techniques used for assessing learners' writing. Recently, this technique has been widely implemented for improving learners' editing strategies. Several studies have been conducted to compare self-editing with peer editing. However, only a few studies have compared automated peer and…
Descriptors: English Language Learners, Automation, Writing Evaluation, Peer Evaluation
Miguel Blázquez-Carretero – ReCALL, 2023
In 2016, Lawley proposed an easy-to-build spellchecker specifically designed to help second language (L2) learners in their writing process by facilitating self-correction. The aim was to overcome the disadvantages to L2 learners posed by generic spellcheckers (GSC), such as that embedded in Microsoft Word. Drawbacks include autocorrection,…
Descriptors: Second Language Learning, Spanish, Spelling, Error Correction
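The non-autocorrecting design described in this entry can be made concrete with a minimal sketch. The word list, tokenisation, and function below are illustrative assumptions, not Lawley's or Blázquez-Carretero's actual implementation; the only idea carried over from the abstract is that possible errors are flagged with candidate corrections rather than silently replaced.

    # Minimal sketch of a "flag, don't autocorrect" spellchecker for L2 self-correction.
    # The tiny word list and all names here are illustrative, not Lawley's design.
    import difflib

    KNOWN_WORDS = {"the", "learner", "writes", "a", "short", "essay", "about", "travel"}

    def check(text: str) -> list[tuple[str, list[str]]]:
        """Return (word, suggestions) pairs for tokens not in the word list.

        Unlike a generic spellchecker, nothing is replaced automatically:
        the learner sees the candidates and decides on the correction.
        """
        flagged = []
        for token in text.lower().split():
            word = token.strip(".,;:!?")
            if word and word not in KNOWN_WORDS:
                suggestions = difflib.get_close_matches(word, KNOWN_WORDS, n=3, cutoff=0.6)
                flagged.append((word, suggestions))
        return flagged

    print(check("The lerner writes a shrot essay"))
    # [('lerner', ['learner']), ('shrot', ['short'])]

Keeping the learner in the loop is the design choice the entry highlights: the tool proposes, the writer decides.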
Joshua Kloppers – International Journal of Computer-Assisted Language Learning and Teaching, 2023
Automated writing evaluation (AWE) software is an increasingly popular tool for English second language learners. However, research on the accuracy of such software has been both scarce and largely limited in its scope. As such, this article broadens the field of research on AWE accuracy by using a mixed design to holistically evaluate the…
Descriptors: Grammar, Automation, Writing Evaluation, Computer Assisted Instruction
Wang, Y.; Harrington, M.; White, P. – Journal of Computer Assisted Learning, 2012
This paper introduces "CTutor", an automated writing evaluation (AWE) tool for detecting breakdowns in local coherence and reports on a study that applies it to the writing of Chinese L2 English learners. The program is based on Centering theory (CT), a theory of local coherence and salience. The principles of CT are first introduced and…
Descriptors: Foreign Countries, Educational Technology, Expertise, Feedback (Response)
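CTutor's actual mechanism is not given in this snippet, so the sketch below only illustrates the broad intuition behind detecting local-coherence breakdowns: adjacent sentences that share no candidate referent are flagged. Plain word overlap is a crude stand-in for Centering theory's tracking of centers, and everything in the code (stopword list, tokenisation, sample text) is an assumption for illustration.

    # Very rough sketch of the intuition behind local-coherence checking:
    # flag adjacent sentences that share no candidate referent. Word overlap is a
    # crude stand-in for Centering theory's center tracking, not CTutor itself.
    import re

    STOPWORDS = {"the", "a", "an", "is", "was", "it", "this", "that", "and", "of",
                 "to", "in", "on", "for", "with", "at", "by", "be", "are", "were"}

    def content_words(sentence: str) -> set[str]:
        tokens = re.findall(r"[a-z]+", sentence.lower())
        return {t for t in tokens if t not in STOPWORDS}

    def coherence_breaks(text: str) -> list[int]:
        """Return indices i where sentences i and i+1 share no content words."""
        sentences = [s for s in re.split(r"[.!?]+", text) if s.strip()]
        breaks = []
        for i in range(len(sentences) - 1):
            if not content_words(sentences[i]) & content_words(sentences[i + 1]):
                breaks.append(i)
        return breaks

    sample = "The essay discusses climate policy. The policy has clear goals. My dog likes snow."
    print(coherence_breaks(sample))  # [1]: no shared referent between sentences 2 and 3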
On the Reliability and Validity of Human and LSA-Based Evaluations of Complex Student-Authored Texts
Seifried, Eva; Lenhard, Wolfgang; Baier, Herbert; Spinath, Birgit – Journal of Educational Computing Research, 2012
This study investigates the potential of a software tool based on Latent Semantic Analysis (LSA; Landauer, McNamara, Dennis, & Kintsch, 2007) to automatically evaluate complex German texts. A sample of N = 94 German university students provided written answers to questions that involved a high amount of analytical reasoning and evaluation.…
Descriptors: Foreign Countries, Computer Software, Computer Software Evaluation, Computer Uses in Education
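The snippet above names LSA but not the tool's internals, so the following is a generic sketch of LSA-based scoring rather than the software evaluated in the study: project texts into a reduced latent space and score a student answer by its cosine similarity to a model answer. The toy corpus, the choice of two latent dimensions, and the use of scikit-learn are all assumptions.

    # Minimal sketch of generic LSA-style scoring (not the tool from the study):
    # build a latent space from reference texts, then score a student answer by
    # cosine similarity to a model answer in that space.
    from sklearn.feature_extraction.text import TfidfVectorizer
    from sklearn.decomposition import TruncatedSVD
    from sklearn.metrics.pairwise import cosine_similarity

    # Toy reference corpus; a real system would use a large domain corpus.
    corpus = [
        "Reinforcement shapes behaviour through consequences.",
        "Operant conditioning links behaviour to reinforcement and punishment.",
        "Classical conditioning pairs a neutral stimulus with a reflex.",
        "Memory consolidation stabilises newly encoded information.",
    ]
    model_answer = "Operant conditioning changes behaviour via reinforcement and punishment."
    student_answer = "Behaviour is modified when reinforcement or punishment follows it."

    vectorizer = TfidfVectorizer(stop_words="english")
    X = vectorizer.fit_transform(corpus)             # term-document matrix
    svd = TruncatedSVD(n_components=2, random_state=0)
    svd.fit(X)                                       # learn the latent (LSA) space

    def to_lsa(text: str):
        return svd.transform(vectorizer.transform([text]))

    score = cosine_similarity(to_lsa(student_answer), to_lsa(model_answer))[0, 0]
    print(f"LSA similarity to model answer: {score:.2f}")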
Ware, Paige – TESOL Quarterly: A Journal for Teachers of English to Speakers of Other Languages and of Standard English as a Second Dialect, 2011
A distinction must be made between "computer-generated scoring" and "computer-generated feedback". Computer-generated scoring refers to the provision of automated scores derived from mathematical models built on organizational, syntactic, and mechanical aspects of writing. In contrast, computer-generated feedback, the focus of this article, refers…
Descriptors: College Students, Writing Instruction, Writing Evaluation, Scoring
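The scoring side of Ware's distinction, scores derived from models over organizational, syntactic, and mechanical aspects of writing, can be illustrated with a deliberately small sketch. The features, weights, and 1-6 scale below are invented for illustration and are not taken from any particular AWE system.

    # Minimal sketch of "computer-generated scoring" in the sense described above:
    # a score from a simple model over surface features of the text. The features,
    # weights, and scale are invented for illustration.
    import re

    def surface_features(essay: str) -> dict[str, float]:
        sentences = [s for s in re.split(r"[.!?]+", essay) if s.strip()]
        words = re.findall(r"[A-Za-z']+", essay)
        return {
            "n_words": len(words),
            "avg_sentence_len": len(words) / max(len(sentences), 1),
            "avg_word_len": sum(len(w) for w in words) / max(len(words), 1),
        }

    # Toy linear model: weights would normally be fit on human-scored essays.
    WEIGHTS = {"n_words": 0.01, "avg_sentence_len": 0.05, "avg_word_len": 0.3}

    def score(essay: str) -> float:
        feats = surface_features(essay)
        raw = sum(WEIGHTS[name] * value for name, value in feats.items())
        return min(6.0, max(1.0, raw))  # clamp to a 1-6 holistic scale

    print(score("Writing improves with practice. Feedback helps writers revise and grow."))

Computer-generated feedback, the article's actual focus, would instead return comments the writer can act on rather than a single number.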
Investigating Learner Variability: The Impact of Task Type on Language Learners' Errors and Mistakes
Thouesny, Sylvie – CALICO Journal, 2010
In a project-based approach to teaching a foreign language at the university level, students are often required to participate in several task-based writing activities. In doing so, language learners not only write incorrect forms, but also correct forms of the same structures, both of which provide useful information on their strengths and…
Descriptors: French, College Instruction, Case Studies, Language Proficiency
Dikli, Semire – Online Submission, 2006
The impacts of computers on writing have been widely studied for three decades. Even basic computer functions, i.e., word processing, have been of great assistance to writers in modifying their essays. The research on Automated Essay Scoring (AES) has revealed that computers have the capacity to function as a more effective cognitive tool (Attali,…
Descriptors: Scoring, Essays, Computer Uses in Education, Writing Evaluation
Hyland, Theresa Ann – Assessing Writing, 2009
Current concerns about academic plagiarism in student writing assume qualitative and quantitative differences in the writing of students for whom English is a first language (EL1) and English is a second language (EL2), but lack precision in measuring those differences. I examined the citation practices of EL1 and EL2 students in a timed writing…
Descriptors: Intellectual Property, Prior Learning, Rating Scales, Citations (References)
McCollum, Kelly – Chronicle of Higher Education, 1998
Developers of the Intelligent Essay Assessor claim that it saves time in evaluating college students' essays and improves the assessment. This and the growing number of other automated grading programs use the same technologies that make computer-based tutoring possible. Many academics remain skeptical of grading technologies, citing the…
Descriptors: Computer Assisted Testing, Computer Software, Computer Uses in Education, Essays

van der Geest, Thea; Remmers, Tim – Computers and Composition, 1994
Examines the use of Prep-Editor, a computer program to help undergraduate science writing students communicate with their peers about drafts. Finds that the program did not increase time spent on various writing activities. Notes that the Prep group reported a number of computer-related problems, whereas the non-Prep group reported more…
Descriptors: Computer Software Evaluation, Computer Uses in Education, Higher Education, Peer Evaluation
McKay, Martin D. – Book Report, 1998
Provides ideas for using computer technology in language arts classrooms, including learning the mechanics of writing; word choice; rewriting; small group formats; evaluation of writing; group-editing software; e-mail; writing for the Web; and hypertext. (LRW)
Descriptors: Computer Software, Computer Uses in Education, Courseware, Electronic Mail
Dunham, Trudy – 1987
This report describes the Learning Disabled College Writer's Project, implemented at the University of Minnesota during the 1985-86 school year and designed to help learning disabled college students master composition skills through training in the use of microcomputer word processors. Following an executive summary, an introduction states the…
Descriptors: Computer Software, Computer Uses in Education, Freshman Composition, Higher Education