Publication Date

| Publication Date | Records |
| --- | --- |
| In 2025 | 0 |
| Since 2024 | 0 |
| Since 2021 (last 5 years) | 3 |
| Since 2016 (last 10 years) | 10 |
| Since 2006 (last 20 years) | 11 |
Descriptor

| Descriptor | Records |
| --- | --- |
| Writing Evaluation | 11 |
| Essays | 7 |
| Persuasive Discourse | 6 |
| Writing Skills | 6 |
| Middle School Students | 5 |
| Scores | 5 |
| Vignettes | 5 |
| Writing Processes | 5 |
| Grade 8 | 4 |
| Automation | 3 |
| Computer Assisted Testing | 3 |
Author

| Author | Records |
| --- | --- |
| Zhang, Mo | 11 |
| Deane, Paul | 6 |
| Bennett, Randy E. | 3 |
| Guo, Hongwen | 2 |
| Li, Chen | 2 |
| van Rijn, Peter | 2 |
| van Rijn, Peter W. | 2 |
| Bejar, Isaac I. | 1 |
| Bennett, Randy | 1 |
| Burstein, Jill | 1 |
| Cao, Yi | 1 |
Publication Type

| Publication Type | Records |
| --- | --- |
| Journal Articles | 10 |
| Reports - Research | 10 |
| Reports - Evaluative | 1 |
| Tests/Questionnaires | 1 |
Education Level

| Education Level | Records |
| --- | --- |
| Junior High Schools | 6 |
| Middle Schools | 6 |
| Secondary Education | 6 |
| Elementary Education | 4 |
| Grade 8 | 4 |
| Higher Education | 3 |
| Postsecondary Education | 2 |
| Adult Education | 1 |
| Grade 6 | 1 |
| Grade 7 | 1 |
| Grade 9 | 1 |
Assessments and Surveys

| Assessment | Records |
| --- | --- |
| Graduate Record Examinations | 1 |
| Praxis Series | 1 |
| Test of English as a Foreign… | 1 |
McCaffrey, Daniel F.; Zhang, Mo; Burstein, Jill – Grantee Submission, 2022
Background: This exploratory writing analytics study uses argumentative writing samples from two performance contexts--standardized writing assessments and university English course writing assignments--to compare: (1) linguistic features in argumentative writing; and (2) relationships between linguistic characteristics and academic performance…
Descriptors: Persuasive Discourse, Academic Language, Writing (Composition), Academic Achievement
Deane, Paul; Wilson, Joshua; Zhang, Mo; Li, Chen; van Rijn, Peter; Guo, Hongwen; Roth, Amanda; Winchester, Eowyn; Richter, Theresa – International Journal of Artificial Intelligence in Education, 2021
Educators need actionable information about student progress during the school year. This paper explores an approach to this problem in the writing domain that combines three measurement approaches intended for use in interim-assessment fashion: scenario-based assessments (SBAs), to simulate authentic classroom tasks, automated writing evaluation…
Descriptors: Vignettes, Writing Evaluation, Writing Improvement, Progress Monitoring
Zhang, Mo; Sinharay, Sandip – International Journal of Testing, 2022
This article demonstrates how recent advances in technology allow fine-grained analyses of candidate-produced essays, thus providing deeper insight into writing performance. We examined how essay features, automatically extracted using natural language processing and keystroke logging techniques, can predict various performance measures using data…
Descriptors: At Risk Persons, Writing Achievement, Educational Technology, Writing Improvement
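The feature extraction this abstract describes can be illustrated with a minimal sketch. The snippet below is a hypothetical example, not the study's actual pipeline: it assumes a keystroke log stored as (timestamp, action) pairs and derives two simple process features, a pause rate and the share of deletion events, of the kind keystroke-logging studies commonly compute.

```python
# Minimal sketch of keystroke-log feature extraction. The log format and
# both feature definitions are illustrative assumptions, not the study's.

def keystroke_features(log, pause_threshold=2.0):
    """log: list of (timestamp_in_seconds, action) tuples, where action
    is 'insert' or 'delete'. Returns two simple process features."""
    if len(log) < 2:
        return {"pause_rate": 0.0, "deletion_share": 0.0}

    # Inter-key intervals: time gaps between consecutive events.
    gaps = [t2 - t1 for (t1, _), (t2, _) in zip(log, log[1:])]
    # Count a "pause" whenever a gap exceeds the threshold (2 s here).
    pauses = sum(1 for g in gaps if g > pause_threshold)
    deletions = sum(1 for _, action in log if action == "delete")

    return {
        "pause_rate": pauses / len(gaps),        # pauses per transition
        "deletion_share": deletions / len(log),  # share of delete events
    }

# Example: a short burst of typing with one long pause and one deletion.
log = [(0.0, "insert"), (0.3, "insert"), (3.1, "insert"), (3.4, "delete")]
print(keystroke_features(log))  # {'pause_rate': 0.33..., 'deletion_share': 0.25}
```

Features like these would then feed a regression or classification model predicting the performance measures the article examines.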
Cao, Yi; Chen, Jianshen; Zhang, Mo; Li, Chen – ETS Research Report Series, 2020
Scenario-based writing assessment has two salient characteristics by design: a lead-in/essay scaffolding structure and a unified scenario/topic throughout. In this study, we examine whether the scenario-based assessment design would impact students' essay scores compared to its alternative conditions, which intentionally broke the scaffolding…
Descriptors: Writing Processes, Vignettes, Writing Evaluation, Regression (Statistics)
Zhang, Mo; Bennett, Randy E.; Deane, Paul; van Rijn, Peter W. – Educational Measurement: Issues and Practice, 2019
This study compared gender groups on the processes used in writing essays in an online assessment. Middle-school students from four grades responded to essays in two persuasive subgenres, argumentation and policy recommendation. Writing processes were inferred from four indicators extracted from students' keystroke logs. In comparison to males, on…
Descriptors: Gender Differences, Essays, Computer Assisted Testing, Persuasive Discourse
Guo, Hongwen; Zhang, Mo; Deane, Paul; Bennett, Randy E. – Journal of Educational Data Mining, 2020
This study investigates the effects of a scenario-based assessment design on students' writing processes. An experimental data set consisting of four design conditions was used in which the number of scenarios (one or two) and the placement of the essay task with respect to the lead-in tasks (first vs. last) were varied. Students' writing…
Descriptors: Instructional Effectiveness, Vignettes, Writing Processes, Learning Analytics
Yao, Lili; Haberman, Shelby J.; Zhang, Mo – ETS Research Report Series, 2019
Many assessments of writing proficiency that aid in making high-stakes decisions consist of several essay tasks evaluated by a combination of human holistic scores and computer-generated scores for essay features such as the rate of grammatical errors per word. Under typical conditions, a summary writing score is provided by a linear combination…
Descriptors: Prediction, True Scores, Computer Assisted Testing, Scoring
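The linear combination this abstract refers to can be made concrete with a small sketch. The weights below are invented for illustration; the report estimates its weights analytically to best predict true scores, which is not reproduced here.

```python
# Hypothetical composite writing score: average human holistic scores and
# machine feature-based scores over essay tasks, then combine linearly.
# The 0.7/0.3 weights are illustrative, not the report's estimates.

def composite_score(human_scores, machine_scores, w_human=0.7, w_machine=0.3):
    h = sum(human_scores) / len(human_scores)      # mean human score
    m = sum(machine_scores) / len(machine_scores)  # mean machine score
    return w_human * h + w_machine * m

# Two essay tasks, each with one human and one machine score (0-6 scale).
print(composite_score([4.0, 5.0], [4.2, 4.8]))  # 0.7*4.5 + 0.3*4.5 = 4.5
```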
Zhang, Mo; van Rijn, Peter W.; Deane, Paul; Bennett, Randy E. – Educational Assessment, 2019
Writing from source text is critical for developing college-and-career readiness because it is required in advanced academic environments and many vocations. Scenario-based assessment (SBA) represents one approach to measuring this ability. In such assessment, the scenario presents an issue that the student is to read and write about. Before…
Descriptors: Writing Evaluation, Vignettes, Essays, Scores
Deane, Paul; Song, Yi; van Rijn, Peter; O'Reilly, Tenaha; Fowles, Mary; Bennett, Randy; Sabatini, John; Zhang, Mo – Reading and Writing: An Interdisciplinary Journal, 2019
This paper presents a theoretical and empirical case for the value of scenario-based assessment (SBA) in the measurement of students' written argumentation skills. First, we frame the problem in terms of creating a reasonably efficient method of evaluating written argumentation skills, including for students at relatively low levels of competency.…
Descriptors: Vignettes, Writing Skills, Persuasive Discourse, Writing Evaluation
Chen, Jing; Zhang, Mo; Bejar, Isaac I. – ETS Research Report Series, 2017
Automated essay scoring (AES) generally computes essay scores as a function of macrofeatures derived from a set of microfeatures extracted from the text using natural language processing (NLP). In the "e-rater"® automated scoring engine, developed at "Educational Testing Service" (ETS) for the automated scoring of essays, each…
Descriptors: Computer Assisted Testing, Scoring, Automation, Essay Tests
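The micro-to-macro structure described here can be sketched generically. Every feature name and weight below is an assumption made for illustration; none reflects e-rater's actual features, aggregation rules, or parameters.

```python
# Generic two-level automated essay scoring sketch: microfeatures are
# aggregated into macrofeatures, and the essay score is a weighted sum
# of macrofeatures plus an intercept. All values are illustrative.

MICRO_TO_MACRO = {
    "grammar": ["agreement_errors_per_word", "verb_form_errors_per_word"],
    "fluency": ["avg_sentence_length_scaled", "word_variety"],
}
MACRO_WEIGHTS = {"grammar": -2.0, "fluency": 1.5}  # invented weights
INTERCEPT = 3.0

def score_essay(micro):
    """micro: dict mapping microfeature name -> normalized value."""
    score = INTERCEPT
    for macro, names in MICRO_TO_MACRO.items():
        # One simple aggregation choice: the mean of the microfeatures.
        macro_value = sum(micro[n] for n in names) / len(names)
        score += MACRO_WEIGHTS[macro] * macro_value
    return score

essay = {
    "agreement_errors_per_word": 0.02,
    "verb_form_errors_per_word": 0.01,
    "avg_sentence_length_scaled": 0.6,
    "word_variety": 0.8,
}
print(score_essay(essay))  # 3.0 - 2.0*0.015 + 1.5*0.7 = 4.02
```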
Deane, Paul; Zhang, Mo – ETS Research Report Series, 2015
In this report, we examine the feasibility of characterizing writing performance using process features derived from a keystroke log. Using data derived from a set of "CBAL"™ writing assessments, we examine the following research questions: (a) How stable are the keystroke timing and process features across testing occasions? (b) How…
Descriptors: Writing Processes, Feasibility Studies, Research Reports, Writing Achievement