Publication Date
In 2025 | 0 |
Since 2024 | 1 |
Since 2021 (last 5 years) | 1 |
Since 2016 (last 10 years) | 4 |
Since 2006 (last 20 years) | 20 |
Descriptor
Computer Assisted Testing | 20 |
Scoring | 15 |
Essays | 14 |
Essay Tests | 13 |
Writing Evaluation | 11 |
Evaluation Methods | 9 |
Writing Tests | 9 |
Educational Technology | 7 |
Automation | 6 |
Computer Software | 6 |
Educational Testing | 6 |
Author
Darling-Hammond, Linda | 2 |
Alexander, R. Curby | 1 |
Attali, Yigal | 1 |
Behizadeh, Nadia | 1 |
Burke, Jennifer N. | 1 |
Cizek, Gregory J. | 1 |
Condon, William | 1 |
Coniam, David | 1 |
Deane, Paul | 1 |
Deng, Hui | 1 |
DiVesta, Francis J. | 1 |
Publication Type
Journal Articles | 12 |
Reports - Evaluative | 7 |
Reports - Research | 7 |
Reports - Descriptive | 4 |
Books | 1 |
Collected Works - General | 1 |
Dissertations/Theses -… | 1 |
Guides - Non-Classroom | 1 |
Speeches/Meeting Papers | 1 |
Education Level
Elementary Secondary Education | 20 |
Higher Education | 9 |
Postsecondary Education | 8 |
Secondary Education | 7 |
High Schools | 5 |
Elementary Education | 2 |
Middle Schools | 2 |
Early Childhood Education | 1 |
Grade 10 | 1 |
Grade 11 | 1 |
Grade 6 | 1 |
Audience
Administrators | 1 |
Teachers | 1 |
Location
Australia | 2 |
Connecticut | 2 |
New Hampshire | 2 |
New York | 2 |
Rhode Island | 2 |
United Kingdom (England) | 2 |
Vermont | 2 |
Hong Kong | 1 |
Singapore | 1 |
Spain | 1 |
United Kingdom | 1 |
Laws, Policies, & Programs
Every Student Succeeds Act… | 2 |
No Child Left Behind Act 2001 | 1 |
Assessments and Surveys
National Assessment of… | 4 |
New York State Regents… | 2 |
Graduate Record Examinations | 1 |
SAT (College Admission Test) | 1 |
Test of English as a Foreign… | 1 |
What Works Clearinghouse Rating
Does not meet standards | 1 |
Yi Gui – ProQuest LLC, 2024
This study explores using transfer learning in machine learning for natural language processing (NLP) to create generic automated essay scoring (AES) models, providing instant online scoring for statewide writing assessments in K-12 education. The goal is to develop an instant online scorer that is generalizable to any prompt, addressing the…
Descriptors: Writing Tests, Natural Language Processing, Writing Evaluation, Scoring
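As context for the approach this abstract describes, the following is a minimal, hypothetical sketch of transfer learning for essay-score regression: a pretrained transformer fine-tuned on essays pooled across many prompts so the resulting scorer is not tied to any single prompt. The model name, data, and hyperparameters are illustrative placeholders, not details from the dissertation.

```python
# Sketch only: fine-tuning a pretrained transformer as a prompt-generic essay
# scorer. Model choice, fields, and hyperparameters are assumptions.
import torch
from transformers import AutoTokenizer, AutoModelForSequenceClassification, Trainer, TrainingArguments

tokenizer = AutoTokenizer.from_pretrained("bert-base-uncased")
# num_labels=1 gives a regression head (mean-squared-error loss in Trainer)
model = AutoModelForSequenceClassification.from_pretrained("bert-base-uncased", num_labels=1)

class EssayDataset(torch.utils.data.Dataset):
    def __init__(self, essays, scores):
        self.enc = tokenizer(essays, truncation=True, padding=True, max_length=512)
        self.scores = scores
    def __len__(self):
        return len(self.scores)
    def __getitem__(self, i):
        item = {k: torch.tensor(v[i]) for k, v in self.enc.items()}
        item["labels"] = torch.tensor(self.scores[i], dtype=torch.float)
        return item

# In practice the training essays would span many different prompts,
# which is what makes the fine-tuned scorer "generic".
train_ds = EssayDataset(["Sample essay text written to some prompt ..."], [3.0])
args = TrainingArguments(output_dir="aes-model", num_train_epochs=1, per_device_train_batch_size=8)
Trainer(model=model, args=args, train_dataset=train_ds).train()
```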
Deane, Paul – Assessing Writing, 2013
This paper examines the construct measured by automated essay scoring (AES) systems. AES systems measure features of the text structure, linguistic structure, and conventional print form of essays; as such, the systems primarily measure text production skills. In the current state-of-the-art, AES provide little direct evidence about such matters…
Descriptors: Scoring, Essays, Text Structure, Writing (Composition)
Behizadeh, Nadia; Lynch, Tom Liam – Berkeley Review of Education, 2017
For the last century, the quality of large-scale assessment in the United States has been undermined by narrow educational theory and hindered by limitations in technology. As a result, poor assessment practices have encouraged low-level instructional practices that disparately affect students from the most disadvantaged communities and schools.…
Descriptors: Equal Education, Measurement, Educational Theories, Evaluation Methods
Condon, William – Assessing Writing, 2013
Automated Essay Scoring (AES) has garnered a great deal of attention from the rhetoric and composition/writing studies community since the Educational Testing Service began using e-rater[R] and the "Criterion"[R] Online Writing Evaluation Service as products in scoring writing tests, and most of the responses have been negative. While the…
Descriptors: Measurement, Psychometrics, Evaluation Methods, Educational Testing
Hixson, Nate; Rhudy, Vaughn – West Virginia Department of Education, 2012
To provide an opportunity for teachers to better understand the automated scoring process used by the state of West Virginia on our annual West Virginia Educational Standards Test 2 (WESTEST 2) Online Writing Assessment, the West Virginia Department of Education (WVDE) Office of Assessment and Accountability and the Office of Research conduct an…
Descriptors: Writing Tests, Computer Assisted Testing, Automation, Scoring
Ramineni, Chaitanya; Williamson, David M. – Assessing Writing, 2013
In this paper, we provide an overview of psychometric procedures and guidelines Educational Testing Service (ETS) uses to evaluate automated essay scoring for operational use. We briefly describe the e-rater system, the procedures and criteria used to evaluate e-rater, implications for a range of potential uses of e-rater, and directions for…
Descriptors: Educational Testing, Guidelines, Scoring, Psychometrics
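For readers unfamiliar with the kinds of statistics such evaluations report, the sketch below computes a few association and agreement measures commonly used when comparing automated and human scores (correlation, standardized mean difference, exact and adjacent agreement). The numbers are toy values; the criteria and thresholds ETS actually applies are described in the paper itself.

```python
# Illustrative agreement statistics between human and automated scores.
import numpy as np

human = np.array([3, 4, 2, 5, 3, 4, 1, 4])
machine = np.array([3, 4, 3, 5, 2, 4, 2, 4])

pearson_r = np.corrcoef(human, machine)[0, 1]
# Standardized mean difference between machine and human score distributions
smd = (machine.mean() - human.mean()) / np.sqrt((machine.var(ddof=1) + human.var(ddof=1)) / 2)
exact_agreement = np.mean(human == machine)
adjacent_agreement = np.mean(np.abs(human - machine) <= 1)

print(f"r={pearson_r:.2f}  SMD={smd:.2f}  exact={exact_agreement:.2f}  adjacent={adjacent_agreement:.2f}")
```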
Darling-Hammond, Linda – Learning Policy Institute, 2017
After passage of the Every Student Succeeds Act (ESSA) in 2015, states assumed greater responsibility for designing their own accountability and assessment systems. ESSA requires states to measure "higher order thinking skills and understanding" and encourages the use of open-ended performance assessments, which are essential for…
Descriptors: Performance Based Assessment, Accountability, Portfolios (Background Materials), Task Analysis
Darling-Hammond, Linda – Council of Chief State School Officers, 2017
The Every Student Succeeds Act (ESSA) opened up new possibilities for how student and school success are defined and supported in American public education. States have greater responsibility for designing and building their assessment and accountability systems. These new opportunities to develop performance assessments are critically important…
Descriptors: Performance Based Assessment, Accountability, Portfolios (Background Materials), Task Analysis
Garcia Laborda, Jesus; Magal Royo, Teresa; Enriquez Carrasco, Emilia – Online Submission, 2010
This paper presents the results of writing processing among 260 high school senior students, their degree of satisfaction using the new trial version of the Computer Based University Entrance Examination in Spain and their degree of motivation towards written online test tasks. Currently, this is one of the closing studies to verify whether…
Descriptors: Foreign Countries, Curriculum Development, High Stakes Tests, Student Motivation
Quinlan, Thomas; Higgins, Derrick; Wolff, Susanne – Educational Testing Service, 2009
This report evaluates the construct coverage of the e-rater[R] scoring engine. The matter of construct coverage depends on whether one defines writing skill in terms of process or product. Originally, the e-rater engine consisted of a large set of components with a proven ability to predict human holistic scores. By organizing these capabilities…
Descriptors: Guides, Writing Skills, Factor Analysis, Writing Tests
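As a generic illustration of how a set of text-feature components can be combined to predict human holistic scores (not e-rater's actual model or feature set), consider a least-squares fit of feature weights to human ratings:

```python
# Hypothetical feature-to-score mapping; feature meanings and values are made up.
import numpy as np

# Toy feature matrix: columns might stand for grammar, usage, mechanics,
# development, and vocabulary measures.
features = np.array([
    [0.9, 0.8, 0.7, 0.6, 0.5],
    [0.4, 0.5, 0.6, 0.3, 0.4],
    [0.7, 0.7, 0.8, 0.8, 0.6],
    [0.2, 0.3, 0.4, 0.2, 0.3],
])
holistic_scores = np.array([5.0, 3.0, 4.5, 2.0])  # human ratings on a 1-6 scale

# Least-squares fit of feature weights (plus intercept) to the human scores
X = np.hstack([features, np.ones((len(features), 1))])
weights, *_ = np.linalg.lstsq(X, holistic_scores, rcond=None)
print("fitted weights:", weights)
print("predicted scores:", X @ weights)
```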
McCurry, Doug – Assessing Writing, 2010
This article considers the claim that machine scoring of writing test responses agrees with human readers as much as humans agree with other humans. These claims about the reliability of machine scoring of writing are usually based on specific and constrained writing tasks, and there is reason for asking whether machine scoring of writing requires…
Descriptors: Writing Tests, Scoring, Interrater Reliability, Computer Assisted Testing
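The claim the article questions can be expressed as a comparison of agreement statistics, for example quadratic weighted kappa for human-human versus machine-human score pairs. The scores below are made-up illustrations, not data from the study.

```python
# Comparing human-human and machine-human agreement with quadratic weighted kappa.
from sklearn.metrics import cohen_kappa_score

rater_1 = [4, 3, 5, 2, 4, 3, 5, 1]
rater_2 = [4, 3, 4, 2, 5, 3, 5, 2]
machine = [4, 4, 5, 3, 4, 3, 4, 2]

human_human = cohen_kappa_score(rater_1, rater_2, weights="quadratic")
machine_human = cohen_kappa_score(rater_1, machine, weights="quadratic")
print(f"human-human QWK:   {human_human:.3f}")
print(f"machine-human QWK: {machine_human:.3f}")
```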
Kite, Mary E., Ed. – Society for the Teaching of Psychology, 2012
This book compiles several essays about effective evaluation of teaching. Contents of this publication include: (1) Conducting Research on Student Evaluations of Teaching (William E. Addison and Jeffrey R. Stowell); (2) Choosing an Instrument for Student Evaluation of Instruction (Jared W. Keeley); (3) Formative Teaching Evaluations: Is Student…
Descriptors: Feedback (Response), Student Evaluation of Teacher Performance, Online Courses, Teacher Effectiveness
Ferster, Bill; Hammond, Thomas C.; Alexander, R. Curby; Lyman, Hunt – Journal of Interactive Learning Research, 2012
The hurried pace of the modern classroom does not permit formative feedback on writing assignments at the frequency or quality recommended by the research literature. One solution for increasing individual feedback to students is to incorporate some form of computer-generated assessment. This study explores the use of automated assessment of…
Descriptors: Feedback (Response), Scripts, Formative Evaluation, Essays
Kobrin, Jennifer L.; Deng, Hui; Shaw, Emily J. – Journal of Applied Testing Technology, 2007
This study was designed to address two frequent criticisms of the SAT essay--that essay length is the best predictor of scores, and that there is an advantage in using more "sophisticated" examples as opposed to personal experience. The study was based on 2,820 essays from the first three administrations of the new SAT. Each essay was…
Descriptors: Testing Programs, Computer Assisted Testing, Construct Validity, Writing Skills
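The length criticism the study addresses amounts to asking how much score variance word count alone explains. A toy regression along those lines (the values are placeholders, not SAT data):

```python
# How much of the variance in holistic scores does word count alone explain?
import numpy as np

word_counts = np.array([120, 250, 310, 180, 400, 275, 150, 360])  # toy values
scores      = np.array([2,   3,   4,   3,   5,   4,   2,   5])    # 1-6 holistic scores

slope, intercept = np.polyfit(word_counts, scores, 1)
predicted = slope * word_counts + intercept
r_squared = 1 - np.sum((scores - predicted) ** 2) / np.sum((scores - scores.mean()) ** 2)
print(f"R^2 for word count alone: {r_squared:.2f}")
```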
Coniam, David – ReCALL, 2009
This paper describes a study of the computer essay-scoring program BETSY. While the use of computers in rating written scripts has been criticised in some quarters for lacking transparency or lack of fit with how human raters rate written scripts, a number of essay rating programs are available commercially, many of which claim to offer comparable…
Descriptors: Writing Tests, Scoring, Foreign Countries, Interrater Reliability
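BETSY is a standalone Bayesian scoring program; as a loose analogy only, the sketch below trains a simple naive Bayes text classifier to place essays into score bands, which conveys the general idea of Bayesian essay classification without reproducing BETSY's implementation.

```python
# Analogy only: a bag-of-words naive Bayes classifier assigning essays to score bands.
from sklearn.feature_extraction.text import CountVectorizer
from sklearn.naive_bayes import MultinomialNB
from sklearn.pipeline import make_pipeline

train_essays = ["a short weak essay ...", "a well developed argument ...", "an adequate response ..."]
train_bands  = ["low", "high", "mid"]   # toy score bands

model = make_pipeline(CountVectorizer(), MultinomialNB())
model.fit(train_essays, train_bands)
print(model.predict(["another essay to be scored ..."]))
```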