Showing all 6 results
Peer reviewed
Direct link
Lee, On-Soon – Journal of Pan-Pacific Association of Applied Linguistics, 2024
Despite increasing interest in using AI tools as assistant agents in instructional settings, the effectiveness of ChatGPT, a generative pre-trained AI, for evaluating the accuracy of second language (L2) writing in formative assessment has been largely unexplored. Therefore, the current study aims to examine how ChatGPT, as an evaluator,…
Descriptors: Foreign Countries, Undergraduate Students, English (Second Language), Second Language Learning
Peer reviewed
PDF on ERIC
Attali, Yigal; Bridgeman, Brent; Trapani, Catherine – Journal of Technology, Learning, and Assessment, 2010
A generic approach in automated essay scoring produces scores that have the same meaning across all prompts, existing or new, of a writing assessment. This is accomplished by using a single set of linguistic indicators (or features), a consistent way of combining and weighting these features into essay scores, and a focus on features that are not…
Descriptors: Writing Evaluation, Writing Tests, Scoring, Test Scoring Machines
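The generic scoring approach Attali, Bridgeman, and Trapani describe — one fixed set of linguistic features, combined and weighted the same way across all prompts — can be sketched as a weighted linear combination. This is a minimal illustrative sketch, not the study's actual model; the feature names and weights below are invented for demonstration.

```python
# Minimal sketch of prompt-generic automated essay scoring:
# a single fixed weight per linguistic feature, applied identically
# to every essay regardless of prompt. Feature names and weights
# are hypothetical, for illustration only.

def score_essay(features: dict, weights: dict) -> float:
    """Combine linguistic feature values into one essay score."""
    return sum(weights[name] * value for name, value in features.items())

# Illustrative feature values for a single essay (not from the study).
features = {
    "grammar_errors_per_100_words": -1.2,  # fewer errors -> higher score
    "avg_word_length": 4.8,                # vocabulary sophistication proxy
    "log_essay_length": 5.3,               # development/fluency proxy
}
# One weight set reused for all prompts, existing or new.
weights = {
    "grammar_errors_per_100_words": -0.5,
    "avg_word_length": 0.3,
    "log_essay_length": 0.4,
}

print(round(score_essay(features, weights), 2))
```

Because the weights are not re-fit per prompt, the resulting scores carry the same meaning across prompts, which is the property the abstract highlights.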
Peer reviewed
Direct link
Crawford, Lindy; Smolkowski, Keith – Assessing Writing, 2008
Students in grades 5 and 8 completed a state writing assessment, and their first and final drafts on the extended writing portion of the test were copied and scored using the state writing rubric. The rubric consisted of three primary traits: Content and Organization, Style and Fluency, and Language Use. Scorers were blind to the study purpose and…
Descriptors: Writing Evaluation, Writing Tests, Grade 8, Grade 5
Peer reviewed
Direct link
Whithaus, Carl; Harrison, Scott B.; Midyette, Jeb – Assessing Writing, 2008
This article examines the influence of keyboarding versus handwriting in a high-stakes writing assessment. Conclusions are based on data collected from a pilot project to move Old Dominion University's Exit Exam of Writing Proficiency from a handwritten format into a dual-option format (i.e., the students may choose to handwrite or keyboard the…
Descriptors: Writing Evaluation, Handwriting, Pilot Projects, Writing Tests
Wang, Jinhao; Brown, Michelle Stallone – Journal of Technology, Learning, and Assessment, 2007
The current research was conducted to investigate the validity of automated essay scoring (AES) by comparing group mean scores assigned by an AES tool, IntelliMetric, with those assigned by human raters. Data collection included administering the Texas version of the WritePlacer Plus test and obtaining scores assigned by IntelliMetric and by…
Descriptors: Test Scoring Machines, Scoring, Comparative Testing, Intermode Differences
Peer reviewed
PDF on ERIC
Horkay, Nancy; Bennett, Randy Elliott; Allen, Nancy; Kaplan, Bruce; Yan, Fred – Journal of Technology, Learning, and Assessment, 2006
This study investigated the comparability of scores for paper and computer versions of a writing test administered to eighth grade students. Two essay prompts were given on paper to a nationally representative sample as part of the 2002 main NAEP writing assessment. The same two essay prompts were subsequently administered on computer to a second…
Descriptors: Writing Evaluation, Writing Tests, Computer Assisted Testing, Program Effectiveness