Publication Date
In 2025: 1
Since 2024: 1
Since 2021 (last 5 years): 1
Since 2016 (last 10 years): 3
Since 2006 (last 20 years): 12
Descriptor
Computer Assisted Testing: 13
Evaluation Methods: 13
Writing Tests: 13
Writing Evaluation: 10
Scoring: 8
Essays: 6
English (Second Language): 4
Correlation: 3
Educational Technology: 3
Essay Tests: 3
Foreign Countries: 3
Publication Type
Journal Articles: 12
Reports - Evaluative: 6
Reports - Research: 5
Books: 1
Collected Works - General: 1
Reports - Descriptive: 1
Education Level
Higher Education: 5
Elementary Secondary Education: 4
Secondary Education: 3
Elementary Education: 2
Postsecondary Education: 2
Adult Education: 1
Grade 8: 1
Two Year Colleges: 1
Audience
Practitioners: 1
Location
Australia: 1
California: 1
Canada (Toronto): 1
Hong Kong: 1
Texas: 1
Utah: 1
Assessments and Surveys
Test of English as a Foreign Language: 2
Graduate Record Examinations: 1
Huang, Yue; Wilson, Joshua – Journal of Computer Assisted Learning, 2025
Background: Automated writing evaluation (AWE) systems, used as formative assessment tools in writing classrooms, are promising for enhancing instruction and improving student performance. Although meta-analytic evidence supports AWE's effectiveness in various contexts, research on its effectiveness in the U.S. K-12 setting has lagged behind its…
Descriptors: Writing Evaluation, Writing Skills, Writing Tests, Writing Instruction
Knight, Simon; Buckingham Shum, Simon; Ryan, Philippa; Sándor, Ágnes; Wang, Xiaolong – International Journal of Artificial Intelligence in Education, 2018
Research into the teaching and assessment of student writing shows that many students find academic writing a challenge to learn, with legal writing no exception. Improving the availability and quality of timely formative feedback is an important aim. However, the time-consuming nature of assessing writing makes it impractical for instructors to…
Descriptors: Writing Evaluation, Natural Language Processing, Legal Education (Professions), Undergraduate Students
Behizadeh, Nadia; Lynch, Tom Liam – Berkeley Review of Education, 2017
For the last century, the quality of large-scale assessment in the United States has been undermined by narrow educational theory and hindered by limitations in technology. As a result, poor assessment practices have encouraged low-level instructional practices that disparately affect students from the most disadvantaged communities and schools.…
Descriptors: Equal Education, Measurement, Educational Theories, Evaluation Methods
Hadi-Tabassum, Samina – Phi Delta Kappan, 2014
Schools are scrambling to prepare students for the writing assessments aligned to the Common Core State Standards. In some states, writing has not been assessed for over a decade. Yet, with the use of computerized grading of students' writing, many teachers are wondering how best to prepare students for the writing assessments that will…
Descriptors: Computer Assisted Testing, Writing Tests, Standardized Tests, Core Curriculum
Condon, William – Assessing Writing, 2013
Automated Essay Scoring (AES) has garnered a great deal of attention from the rhetoric and composition/writing studies community since the Educational Testing Service began using e-rater® and the Criterion® Online Writing Evaluation Service as products in scoring writing tests, and most of the responses have been negative. While the…
Descriptors: Measurement, Psychometrics, Evaluation Methods, Educational Testing
Ramineni, Chaitanya; Trapani, Catherine S.; Williamson, David M.; Davey, Tim; Bridgeman, Brent – ETS Research Report Series, 2012
Scoring models for the e-rater® system were built and evaluated for the TOEFL® exam's independent and integrated writing prompts. Prompt-specific and generic scoring models were built, and evaluation statistics, such as weighted kappas, Pearson correlations, standardized differences in mean scores, and correlations with…
Descriptors: Scoring, Prompting, Evaluators, Computer Software
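The evaluation statistics this abstract names (weighted kappas, Pearson correlations, standardized differences in mean scores) are standard measures of human-machine score agreement. The sketch below illustrates how they are computed; it is not code from the ETS report, and the score arrays are hypothetical:

```python
# Illustrative sketch (not from the ETS report): agreement statistics
# between hypothetical human ratings and machine-assigned scores.
import numpy as np
from scipy.stats import pearsonr
from sklearn.metrics import cohen_kappa_score

human = np.array([4, 3, 5, 2, 4, 3, 5, 4, 2, 3])    # hypothetical human ratings
machine = np.array([4, 3, 4, 2, 5, 3, 5, 4, 3, 3])  # hypothetical machine scores

# Quadratic-weighted kappa: chance-corrected agreement that penalizes
# large disagreements more than small ones.
qwk = cohen_kappa_score(human, machine, weights="quadratic")

# Pearson correlation between the two score sets.
r, _ = pearsonr(human, machine)

# Standardized difference in mean scores (pooled-SD effect size).
pooled_sd = np.sqrt((human.std(ddof=1) ** 2 + machine.std(ddof=1) ** 2) / 2)
smd = (machine.mean() - human.mean()) / pooled_sd

print(f"weighted kappa={qwk:.3f}, r={r:.3f}, standardized mean diff={smd:.3f}")
```

Quadratic weighting penalizes a 2-vs-5 disagreement far more than a 3-vs-4 one, which is why it is the conventional kappa variant for ordinal essay scores.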
Attali, Yigal; Bridgeman, Brent; Trapani, Catherine – Journal of Technology, Learning, and Assessment, 2010
A generic approach in automated essay scoring produces scores that have the same meaning across all prompts, existing or new, of a writing assessment. This is accomplished by using a single set of linguistic indicators (or features), a consistent way of combining and weighting these features into essay scores, and a focus on features that are not…
Descriptors: Writing Evaluation, Writing Tests, Scoring, Test Scoring Machines
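The "generic approach" described here amounts to a single, prompt-independent feature set and weight vector. A minimal sketch under that assumption follows; the feature names, weights, and score mapping are hypothetical, not e-rater's actual model:

```python
# Minimal sketch of a generic linear essay-scoring model: one fixed
# feature set and weight vector reused across all prompts.
# Feature names and weights are hypothetical.
FEATURE_WEIGHTS = {
    "grammar_errors_per_100_words": -0.8,
    "mean_sentence_length_z": 0.4,
    "vocabulary_sophistication_z": 0.6,
    "organization_z": 0.9,
}

def generic_score(features: dict) -> float:
    """Combine standardized features with prompt-independent weights."""
    raw = sum(FEATURE_WEIGHTS[name] * value for name, value in features.items())
    # Map the weighted sum onto a 1-6 score scale (offset chosen arbitrarily).
    return max(1.0, min(6.0, 3.5 + raw))

essay_features = {
    "grammar_errors_per_100_words": 1.2,
    "mean_sentence_length_z": 0.3,
    "vocabulary_sophistication_z": 0.5,
    "organization_z": 0.7,
}
print(generic_score(essay_features))  # same weights would apply to any prompt
```

Because the weights never change across prompts, a given score carries the same meaning on a new prompt as on an existing one, which is the property the abstract highlights.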
Coniam, David – ReCALL, 2009
This paper describes a study of the computer essay-scoring program BETSY. While the use of computers in rating written scripts has been criticised in some quarters for lacking transparency or fit with how human raters rate written scripts, a number of essay-rating programs are available commercially, many of which claim to offer comparable…
Descriptors: Writing Tests, Scoring, Foreign Countries, Interrater Reliability
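BETSY (the Bayesian Essay Test Scoring sYstem) scores scripts by Bayesian text classification. The sketch below shows the general technique with a multinomial naive Bayes classifier; it is not BETSY's actual implementation, and the toy training essays and score bands are hypothetical:

```python
# Minimal sketch of Bayesian essay classification in the spirit of BETSY
# (not its actual implementation): a naive Bayes model assigns essays
# to score bands learned from human-rated examples.
from sklearn.feature_extraction.text import CountVectorizer
from sklearn.naive_bayes import MultinomialNB
from sklearn.pipeline import make_pipeline

train_essays = [
    "The argument is coherent and supported with relevant evidence.",
    "Ideas are developed logically with clear transitions throughout.",
    "the essay have many error and idea is not clear",
    "sentence is short no develop of the topic",
]
train_bands = ["high", "high", "low", "low"]  # hypothetical human-assigned bands

model = make_pipeline(CountVectorizer(), MultinomialNB())
model.fit(train_essays, train_bands)

new_essay = "The writer presents evidence clearly and develops each idea."
print(model.predict([new_essay])[0])      # predicted score band
print(model.predict_proba([new_essay]))   # posterior probability per band
```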
Wang, Jinhao; Brown, Michelle Stallone – Contemporary Issues in Technology and Teacher Education (CITE Journal), 2008
The purpose of the current study was to analyze the relationship between automated essay scoring (AES) and human scoring in order to determine the validity and usefulness of AES for large-scale placement tests. Specifically, a correlational research design was used to examine the correlations between AES performance and human raters' performance.…
Descriptors: Scoring, Essays, Computer Assisted Testing, Sentence Structure
Ben-Simon, Anat; Bennett, Randy Elliott – Journal of Technology, Learning, and Assessment, 2007
This study evaluated a "substantively driven" method for scoring NAEP writing assessments automatically. The study used variations of an existing commercial program, e-rater®, to compare the performance of three approaches to automated essay scoring: a "brute-empirical" approach in which variables are selected and weighted solely according to…
Descriptors: Writing Evaluation, Writing Tests, Scoring, Essays
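The contrast the abstract draws, between a "brute-empirical" approach (weights fit purely to predict human scores) and a substantively driven one (weights fixed in advance on theoretical grounds), can be sketched as follows; the features, weights, and data are simulated and hypothetical:

```python
# Illustrative contrast (not the study's code): regression-fitted weights
# vs. fixed, substantively chosen weights on simulated essay features.
import numpy as np

rng = np.random.default_rng(0)
X = rng.normal(size=(50, 3))  # hypothetical standardized essay features
human = X @ np.array([0.5, 0.3, 0.2]) + rng.normal(scale=0.3, size=50)

# Brute-empirical: weights chosen solely to best predict human scores.
empirical_w, *_ = np.linalg.lstsq(X, human, rcond=None)

# Substantively driven: weights fixed in advance on theoretical grounds
# (values here are hypothetical).
substantive_w = np.array([0.4, 0.4, 0.2])

print("fitted weights:", np.round(empirical_w, 2))
print("r, empirical:", round(float(np.corrcoef(X @ empirical_w, human)[0, 1]), 3))
print("r, substantive:", round(float(np.corrcoef(X @ substantive_w, human)[0, 1]), 3))
```

The empirical weights will usually correlate slightly better with human scores on the data they were fit to; the substantive weights trade some of that fit for interpretability, which is the tension the study examines.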
Thomas, P. L. – English Journal, 2005
The advantages and limitations of using computers in writing assessment and instruction are discussed. English teachers feel that although computers and computer programs offer substantial benefits for teaching writing, they cannot substitute for human judgment in the final evaluation of a student's composition.
Descriptors: Writing Evaluation, Writing Tests, High Stakes Tests, English Teachers
Li, Jiang – Assessing Writing, 2006
The present study investigated the influence of word processing on the writing of students of English as a second language (ESL) and on writing assessment as well. Twenty-one adult Mandarin-Chinese speakers with advanced English proficiency living in Toronto participated in the study. Each participant wrote two comparable writing tasks under…
Descriptors: Writing Evaluation, Protocol Analysis, Writing Tests, Evaluation Methods
Secolsky, Charles, Ed.; Denison, D. Brian, Ed. – Routledge, Taylor & Francis Group, 2011
Increased demands for colleges and universities to engage in outcomes assessment for accountability purposes have accelerated the need to bridge the gap between higher education practice and the fields of measurement, assessment, and evaluation. The "Handbook on Measurement, Assessment, and Evaluation in Higher Education" provides higher…
Descriptors: Generalizability Theory, Higher Education, Institutional Advancement, Teacher Effectiveness