| Publication Date | Results |
| --- | --- |
| In 2026 | 0 |
| Since 2025 | 1 |
| Since 2022 (last 5 years) | 2 |
| Since 2017 (last 10 years) | 8 |
| Since 2007 (last 20 years) | 16 |
| Descriptor | Results |
| --- | --- |
| Computer Assisted Testing | 21 |
| Grading | 21 |
| Writing Evaluation | 21 |
| Essays | 13 |
| Computer Software | 8 |
| Evaluation Methods | 7 |
| Foreign Countries | 7 |
| Student Evaluation | 7 |
| Scoring | 6 |
| Educational Technology | 5 |
| Feedback (Response) | 5 |
| Publication Type | Results |
| --- | --- |
| Journal Articles | 19 |
| Reports - Research | 10 |
| Reports - Evaluative | 5 |
| Reports - Descriptive | 4 |
| Collected Works - Proceedings | 1 |
| Guides - Classroom - Teacher | 1 |
| Speeches/Meeting Papers | 1 |
| Tests/Questionnaires | 1 |
| Education Level | Results |
| --- | --- |
| Higher Education | 7 |
| Postsecondary Education | 5 |
| Secondary Education | 4 |
| Elementary Education | 2 |
| Elementary Secondary Education | 1 |
| Grade 11 | 1 |
| Grade 6 | 1 |
| Intermediate Grades | 1 |
| Junior High Schools | 1 |
| Middle Schools | 1 |
| Assessments and Surveys | Results |
| --- | --- |
| National Assessment of… | 1 |
| Program for International… | 1 |
Jussi S. Jauhiainen; Agustín Garagorry Guerra – Innovations in Education and Teaching International, 2025
The study highlights ChatGPT-4's potential in educational settings for the evaluation of university students' open-ended written examination responses. ChatGPT-4 evaluated 54 written responses, ranging from 24 to 256 words in English. It assessed each response using five criteria and assigned a grade on a six-point scale from fail to excellent,…
Descriptors: Artificial Intelligence, Technology Uses in Education, Student Evaluation, Writing Evaluation
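The workflow this abstract describes, prompting a large language model to grade each response against a fixed rubric, is easy to prototype. Below is a minimal, hypothetical Python sketch assuming the OpenAI Python client; the five criteria, prompt wording, and model name are invented stand-ins, since the study's actual protocol is not given in the abstract.

```python
# Hypothetical rubric-grading sketch; criteria and prompt are invented,
# not those used by Jauhiainen and Garagorry Guerra (2025).
from openai import OpenAI

client = OpenAI()  # assumes OPENAI_API_KEY is set in the environment

RUBRIC = (
    "Grade the examination response on a six-point scale "
    "(fail, pass, satisfactory, good, very good, excellent) against five "
    "criteria: relevance, accuracy, coverage, coherence, and use of evidence. "
    "Return the grade plus one sentence of justification per criterion."
)

def grade_response(response_text: str) -> str:
    completion = client.chat.completions.create(
        model="gpt-4",  # stand-in; use whichever model is available
        messages=[
            {"role": "system", "content": RUBRIC},
            {"role": "user", "content": response_text},
        ],
    )
    return completion.choices[0].message.content

print(grade_response("Photosynthesis converts light energy into chemical energy ..."))
```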
Uto, Masaki; Okano, Masashi – IEEE Transactions on Learning Technologies, 2021
In automated essay scoring (AES), scores are automatically assigned to essays as an alternative to grading by humans. Traditional AES typically relies on handcrafted features, whereas recent studies have proposed AES models based on deep neural networks to obviate the need for feature engineering. Those AES models generally require training on a…
Descriptors: Essays, Scoring, Writing Evaluation, Item Response Theory
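For context on the item-response-theory side of this line of work: many-facet Rasch models are a common way to represent rater severity when combining ratings from multiple (human or machine) graders. The NumPy sketch below implements a generic rating-scale version of such a model; it is a textbook formulation, not necessarily the exact model Uto and Okano use.

```python
import numpy as np

def rating_probs(theta: float, beta: float, tau: np.ndarray) -> np.ndarray:
    """P(score = k), k = 0..K, under a many-facet Rasch rating-scale model:
    theta = essay quality, beta = rater severity, tau = K category thresholds."""
    steps = theta - beta - tau                          # one term per step up
    logits = np.concatenate(([0.0], np.cumsum(steps)))  # cumulative logits
    expl = np.exp(logits - logits.max())                # stabilised softmax
    return expl / expl.sum()

tau = np.array([-1.0, 0.0, 1.0])  # thresholds for a four-category scale
# The same essay (theta = 0.3) rated by a lenient and a severe rater:
print(rating_probs(theta=0.3, beta=-0.5, tau=tau))
print(rating_probs(theta=0.3, beta=1.0, tau=tau))
```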
Zhang, Haoran; Litman, Diane – Grantee Submission, 2020
While automated essay scoring (AES) can reliably grade essays at scale, automated writing evaluation (AWE) additionally provides formative feedback to guide essay revision. However, a neural AES typically does not provide useful feature representations for supporting AWE. This paper presents a method for linking AWE and neural AES, by extracting…
Descriptors: Computer Assisted Testing, Scoring, Essay Tests, Writing Evaluation
Chan, Kinnie Kin Yee; Bond, Trevor; Yan, Zi – Language Testing, 2023
We investigated the relationship between the scores assigned by an Automated Essay Scoring (AES) system, the Intelligent Essay Assessor (IEA), and grades allocated by trained, professional human raters to English essay writing by instigating two procedures novel to written-language assessment: the logistic transformation of AES raw scores into…
Descriptors: Computer Assisted Testing, Essays, Scoring, Scores
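The logistic transformation mentioned here is, in its simplest form, a log-odds rescaling of raw scores so that machine and human ratings can be compared on an interval-like scale. A minimal sketch, assuming the common proportion-of-maximum formulation (the paper's exact procedure is truncated in the abstract):

```python
import math

def logit_measure(raw: float, max_score: float) -> float:
    """Log-odds transformation of a raw score; undefined at 0 or max."""
    p = raw / max_score               # proportion of the maximum score
    return math.log(p / (1.0 - p))

for raw in (2, 5, 8):                 # illustrative raw scores out of 10
    print(raw, round(logit_measure(raw, 10), 2))
```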
Dixon-Román, Ezekiel; Nichols, T. Philip; Nyame-Mensah, Ama – Learning, Media and Technology, 2020
In this article, we examine the sociopolitical implications of AI technologies as they are integrated into writing instruction and assessment. Drawing from new materialist and Black feminist thought, we consider how learning analytics platforms for writing are animated by and through entanglements of algorithmic reasoning, state standards and…
Descriptors: Racial Bias, Artificial Intelligence, Educational Technology, Writing Instruction
Aitken, Adam; Thompson, Darrall G. – International Journal of Technology and Design Education, 2018
First-year undergraduate design students have difficulty meeting the standards expected for academic writing at university level. An assessment initiative was used to engage students with criteria and standards for a core interdisciplinary design subject notable for its demanding assessment of academic writing. The same graduate…
Descriptors: Undergraduate Students, Design, Assignments, Computer Software
Knight, Simon; Buckingham Shum, Simon; Ryan, Philippa; Sándor, Ágnes; Wang, Xiaolong – International Journal of Artificial Intelligence in Education, 2018
Research into the teaching and assessment of student writing shows that many students find academic writing a challenge to learn, with legal writing no exception. Improving the availability and quality of timely formative feedback is an important aim. However, the time-consuming nature of assessing writing makes it impractical for instructors to…
Descriptors: Writing Evaluation, Natural Language Processing, Legal Education (Professions), Undergraduate Students
Seifried, Eva; Lenhard, Wolfgang; Spinath, Birgit – Journal of Educational Computing Research, 2017
Writing essays and receiving feedback can be useful for fostering students' learning and motivation. When faced with large class sizes, it is desirable to identify students who might particularly benefit from feedback. In this article, we tested the potential of Latent Semantic Analysis (LSA) for identifying poor essays. A total of 14 teaching…
Descriptors: Computer Assisted Testing, Computer Software, Essays, Writing Evaluation
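As an illustration of the general technique, LSA-based screening can be prototyped in a few lines of scikit-learn: project essays and a reference answer into a latent semantic space, then flag the essays least similar to the reference. The corpus, reference text, and cutoff below are invented; the study's materials and its criterion for "poor" essays are its own.

```python
# Hypothetical LSA screening sketch; texts and the 0.3 cutoff are invented.
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.decomposition import TruncatedSVD
from sklearn.metrics.pairwise import cosine_similarity

essays = [
    "Operant conditioning shapes behaviour through reinforcement and punishment.",
    "Learning happens when people study hard and remember things.",
    "Reinforcement raises the probability of a behaviour; punishment lowers it.",
]
reference = "Operant conditioning modifies behaviour via reinforcement and punishment."

tfidf = TfidfVectorizer().fit_transform(essays + [reference])
lsa = TruncatedSVD(n_components=2).fit_transform(tfidf)   # latent space

sims = cosine_similarity(lsa[:-1], lsa[-1:]).ravel()      # similarity to reference
for essay, sim in zip(essays, sims):
    print(f"{sim:5.2f} {('REVIEW' if sim < 0.3 else 'ok'):6} {essay[:48]}")
```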
Hadi-Tabassum, Samina – Phi Delta Kappan, 2014
Schools are scrambling to prepare students for the writing assessments aligned to the Common Core State Standards. In some states, writing has not been assessed for over a decade. Yet, with the use of computerized grading of students' writing, many teachers are wondering how best to prepare students for the writing assessments that will…
Descriptors: Computer Assisted Testing, Writing Tests, Standardized Tests, Core Curriculum
Kaufman, Julia H.; Schunn, Christian D. – Instructional Science: An International Journal of the Learning Sciences, 2011
We investigate students' negative perceptions about an online peer assessment system for undergraduate writing across the disciplines. Specifically, we consider the nature of students' resistance to peer assessment; what factors influence that resistance; and how students' perceptions impact their revision work. We do this work by first examining…
Descriptors: Feedback (Response), Writing Assignments, Student Attitudes, Peer Evaluation
He, Yulan; Hui, Siu Cheung; Quan, Tho Thanh – Computers & Education, 2009
Summary writing is an important part of many English language examinations. As grading students' summaries is very time-consuming, computer-assisted assessment can help teachers carry out the grading more effectively. Several techniques such as latent semantic analysis (LSA), n-gram co-occurrence and BLEU have been proposed to…
Descriptors: Semantics, Intelligent Tutoring Systems, Grading, Computer Assisted Testing
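Of the techniques named above, n-gram co-occurrence is the simplest to sketch: score a student summary by the fraction of a model summary's n-grams it reproduces. The pure-Python example below is illustrative only and is not the authors' scoring method (their abstract is truncated before the method details).

```python
def ngrams(tokens: list[str], n: int) -> set[tuple[str, ...]]:
    return {tuple(tokens[i:i + n]) for i in range(len(tokens) - n + 1)}

def ngram_overlap(student: str, model: str, n: int = 2) -> float:
    """Fraction of the model summary's n-grams present in the student summary."""
    s, m = ngrams(student.lower().split(), n), ngrams(model.lower().split(), n)
    return len(s & m) / len(m) if m else 0.0

model_summary = "the treaty ended the war and redrew national borders"
student_summary = "the treaty ended the war but borders stayed contested"
print(ngram_overlap(student_summary, model_summary))  # 0.5: half the bigrams match
```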
Burrows, Steven; Shortis, Mark – Australasian Journal of Educational Technology, 2011
Online marking and feedback systems are critical for providing timely and accurate feedback to students and maintaining the integrity of results in large class teaching. Previous investigations have involved much in-house development, and more consideration is needed for deploying or customising off-the-shelf solutions. Furthermore, keeping up to…
Descriptors: Foreign Countries, Integrated Learning Systems, Feedback (Response), Evaluation Criteria
Coniam, David – Educational Research and Evaluation, 2009
This paper describes a study comparing paper-based marking (PBM) and onscreen marking (OSM) in Hong Kong utilising English language essay scripts drawn from the live 2007 Hong Kong Certificate of Education Examination (HKCEE) Year 11 English Language Writing Paper. In the study, 30 raters from the 2007 HKCEE Writing Paper marked on paper 100…
Descriptors: Student Attitudes, Foreign Countries, Essays, Comparative Analysis
Lai, Yi-hsiu – British Journal of Educational Technology, 2010
The purpose of this study was to investigate problems and potentials of new technologies in English writing education. The effectiveness of automated writing evaluation (AWE) ("MY Access") and of peer evaluation (PE) was compared. Twenty-two English as a foreign language (EFL) learners in Taiwan participated in this study. They submitted…
Descriptors: Feedback (Response), Writing Evaluation, Peer Evaluation, Grading
Pedersen, Elray L. – CALICO Journal, 1983
A computer program in BASIC developed to assist in grading college English compositions is discussed and presented. The program allows rapid recording and printing of instructor comments, providing more praise and feedback than otherwise feasible. Teachers are encouraged to adapt the program to their own needs and preferences. (MSE)
Descriptors: Computer Assisted Testing, English Instruction, Equipment, Grading
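The comment-bank idea behind this early program translates directly to any modern language: canned feedback keyed by short codes, expanded into full comments at grading time. A small Python analogue, with invented codes and comments:

```python
# Toy comment bank in the spirit of Pedersen's BASIC program; entries invented.
COMMENTS = {
    "AGR": "Check subject-verb agreement in this sentence.",
    "THS": "Strong thesis: clearly stated and arguable.",
    "TRN": "Add a transition linking this paragraph to the previous one.",
    "CIT": "This claim needs a citation.",
}

def expand(codes: str) -> str:
    """Turn a space-separated string of codes into full feedback lines."""
    return "\n".join(COMMENTS.get(c, f"[unknown code: {c}]") for c in codes.split())

print(expand("THS AGR CIT"))
```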
