Showing all 11 results
McCaffrey, Daniel F.; Zhang, Mo; Burstein, Jill – Grantee Submission, 2022
Background: This exploratory writing analytics study uses argumentative writing samples from two performance contexts--standardized writing assessments and university English course writing assignments--to compare: (1) linguistic features in argumentative writing; and (2) relationships between linguistic characteristics and academic performance…
Descriptors: Persuasive Discourse, Academic Language, Writing (Composition), Academic Achievement
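As a rough illustration of the kind of analysis the abstract above describes, the sketch below computes a few simple linguistic features from two writing samples drawn from different performance contexts. The features, sample texts, and function name are hypothetical stand-ins, not the study's actual feature set.

# Hypothetical features and texts, for illustration only.
def linguistic_features(text):
    words = text.split()
    sentences = [s for s in text.replace("?", ".").replace("!", ".").split(".") if s.strip()]
    return {
        "word_count": len(words),
        "mean_sentence_length": len(words) / max(len(sentences), 1),
        "type_token_ratio": len(set(w.lower() for w in words)) / max(len(words), 1),
    }

# Compare the same features across two performance contexts.
test_essay = "Standardized prompts reward clear claims. Evidence must be cited."
course_essay = "In the seminar paper, the argument develops gradually across several cited sources."
print(linguistic_features(test_essay))
print(linguistic_features(course_essay))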
Peer reviewed
Direct link
Deane, Paul; Wilson, Joshua; Zhang, Mo; Li, Chen; van Rijn, Peter; Guo, Hongwen; Roth, Amanda; Winchester, Eowyn; Richter, Theresa – International Journal of Artificial Intelligence in Education, 2021
Educators need actionable information about student progress during the school year. This paper explores an approach to this problem in the writing domain that combines three measurement approaches intended for use in interim-assessment fashion: scenario-based assessments (SBAs), to simulate authentic classroom tasks, automated writing evaluation…
Descriptors: Vignettes, Writing Evaluation, Writing Improvement, Progress Monitoring
Peer reviewed
Direct link
Zhang, Mo; Sinharay, Sandip – International Journal of Testing, 2022
This article demonstrates how recent advances in technology allow fine-grained analyses of candidate-produced essays, providing deeper insight into writing performance. We examined how essay features, automatically extracted using natural language processing and keystroke logging techniques, can predict various performance measures using data…
Descriptors: At Risk Persons, Writing Achievement, Educational Technology, Writing Improvement
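A minimal sketch of the prediction step the abstract describes: essay features, assumed here to be already extracted via NLP and keystroke logging, are regressed on a performance measure. The feature columns, values, scores, and the use of scikit-learn are illustrative assumptions rather than the study's actual pipeline.

# Illustrative values only; assumes features were already extracted per candidate.
import numpy as np
from sklearn.linear_model import LinearRegression

# Columns (hypothetical): grammar errors per word, mean pause in seconds, deletion rate.
X = np.array([
    [0.02, 0.8, 0.05],
    [0.05, 1.4, 0.12],
    [0.01, 0.6, 0.03],
    [0.04, 1.1, 0.09],
])
y = np.array([4.5, 3.0, 5.0, 3.5])      # e.g., a holistic performance measure

model = LinearRegression().fit(X, y)
print(model.coef_, model.intercept_)
print(model.predict(X))                 # predicted performance for each candidate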
Peer reviewed
PDF on ERIC Download full text
Cao, Yi; Chen, Jianshen; Zhang, Mo; Li, Chen – ETS Research Report Series, 2020
Scenario-based writing assessment has two salient characteristics by design: a lead-in/essay scaffolding structure and a unified scenario/topic throughout. In this study, we examine whether the scenario-based assessment design affects students' essay scores compared with alternative conditions that intentionally broke the scaffolding…
Descriptors: Writing Processes, Vignettes, Writing Evaluation, Regression (Statistics)
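The descriptor "Regression (Statistics)" suggests a score-on-condition comparison; the sketch below shows one generic way such a comparison could be set up. The condition labels, covariate, and values are invented for illustration and are not the study's data or model.

# Invented data; shows a generic essay-score-on-condition regression with a covariate.
import pandas as pd
import statsmodels.formula.api as smf

df = pd.DataFrame({
    "score":     [4, 3, 5, 4, 2, 3, 4, 3],
    "condition": ["scaffolded"] * 4 + ["alternative"] * 4,
    "prior":     [3.5, 2.8, 4.2, 3.9, 3.1, 2.9, 3.8, 3.0],   # e.g., prior writing measure
})

fit = smf.ols("score ~ C(condition) + prior", data=df).fit()
print(fit.params)       # condition effect on essay scores, adjusting for the covariate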
Peer reviewed
Direct link
Zhang, Mo; Bennett, Randy E.; Deane, Paul; van Rijn, Peter W. – Educational Measurement: Issues and Practice, 2019
This study compared gender groups on the processes used in writing essays in an online assessment. Middle-school students from four grades responded to essay tasks in two persuasive subgenres, argumentation and policy recommendation. Writing processes were inferred from four indicators extracted from students' keystroke logs. In comparison to males, on…
Descriptors: Gender Differences, Essays, Computer Assisted Testing, Persuasive Discourse
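As a hedged sketch of how writing-process indicators can be derived from keystroke logs, the snippet below computes two generic indicators (long-pause rate and deletion rate) from a toy log. The study's four actual indicators are not reproduced here; the threshold and log format are assumptions.

# Generic indicators; not the study's four indicators.
def process_indicators(log, pause_threshold=2.0):
    """log: list of (timestamp_in_seconds, key) events in typing order."""
    gaps = [t2 - t1 for (t1, _), (t2, _) in zip(log, log[1:])]
    long_pauses = sum(1 for g in gaps if g >= pause_threshold)
    deletions = sum(1 for _, key in log if key == "Backspace")
    return {
        "long_pause_rate": long_pauses / max(len(gaps), 1),
        "deletion_rate": deletions / max(len(log), 1),
    }

example_log = [(0.0, "T"), (0.4, "h"), (0.7, "e"), (3.2, " "), (3.5, "Backspace")]
print(process_indicators(example_log))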
Peer reviewed
PDF on ERIC Download full text
Guo, Hongwen; Zhang, Mo; Deane, Paul; Bennett, Randy E. – Journal of Educational Data Mining, 2020
This study investigates the effects of a scenario-based assessment design on students' writing processes. An experimental data set consisting of four design conditions was used in which the number of scenarios (one or two) and the placement of the essay task with respect to the lead-in tasks (first vs. last) were varied. Students' writing…
Descriptors: Instructional Effectiveness, Vignettes, Writing Processes, Learning Analytics
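To illustrate the 2 x 2 design the abstract mentions (number of scenarios by essay-task placement), the sketch below summarizes a hypothetical process feature by condition. All values and the feature itself are invented.

# Invented values; summarizes one hypothetical process feature across the four conditions.
import pandas as pd

df = pd.DataFrame({
    "scenarios": ["one", "one", "two", "two", "one", "one", "two", "two"],
    "essay_pos": ["first", "last", "first", "last", "first", "last", "first", "last"],
    "mean_pause": [1.2, 0.9, 1.5, 1.1, 1.3, 1.0, 1.4, 1.2],   # seconds, hypothetical
})

print(df.groupby(["scenarios", "essay_pos"])["mean_pause"].mean())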
Peer reviewed
PDF on ERIC Download full text
Yao, Lili; Haberman, Shelby J.; Zhang, Mo – ETS Research Report Series, 2019
Many assessments of writing proficiency that aid in making high-stakes decisions consist of several essay tasks evaluated by a combination of human holistic scores and computer-generated scores for essay features such as the rate of grammatical errors per word. Under typical conditions, a summary writing score is provided by a linear combination…
Descriptors: Prediction, True Scores, Computer Assisted Testing, Scoring
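A tiny worked example of the kind of linear combination described above, with made-up weights and scores rather than any operational weighting:

# Made-up weights and scores; not the operational weighting.
human_scores = [4.0, 3.5]             # human holistic scores on two essay tasks
machine_score = 3.8                   # automated score derived from essay features
weights = [0.4, 0.4, 0.2]             # hypothetical weights summing to 1

summary_score = sum(w * s for w, s in zip(weights, human_scores + [machine_score]))
print(round(summary_score, 2))        # combined summary writing score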
Peer reviewed
Direct link
Zhang, Mo; van Rijn, Peter W.; Deane, Paul; Bennett, Randy E. – Educational Assessment, 2019
Writing from source text is critical for developing college and career readiness because it is required in advanced academic environments and many vocations. Scenario-based assessment (SBA) represents one approach to measuring this ability. In such an assessment, the scenario presents an issue that the student is to read and write about. Before…
Descriptors: Writing Evaluation, Vignettes, Essays, Scores
Peer reviewed
Direct link
Deane, Paul; Song, Yi; van Rijn, Peter; O'Reilly, Tenaha; Fowles, Mary; Bennett, Randy; Sabatini, John; Zhang, Mo – Reading and Writing: An Interdisciplinary Journal, 2019
This paper presents a theoretical and empirical case for the value of scenario-based assessment (SBA) in the measurement of students' written argumentation skills. First, we frame the problem in terms of creating a reasonably efficient method of evaluating written argumentation skills, including for students at relatively low levels of competency.…
Descriptors: Vignettes, Writing Skills, Persuasive Discourse, Writing Evaluation
Peer reviewed
PDF on ERIC Download full text
Chen, Jing; Zhang, Mo; Bejar, Isaac I. – ETS Research Report Series, 2017
Automated essay scoring (AES) generally computes essay scores as a function of macrofeatures derived from a set of microfeatures extracted from the text using natural language processing (NLP). In the "e-rater"® automated scoring engine, developed at "Educational Testing Service" (ETS) for the automated scoring of essays, each…
Descriptors: Computer Assisted Testing, Scoring, Automation, Essay Tests
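The sketch below mirrors the general shape described in the abstract (microfeatures aggregated into macrofeatures, which are then combined into a score), but the feature names, aggregation rules, and weights are hypothetical and are not e-rater's.

# Hypothetical microfeatures, aggregation rules, and weights; not e-rater's.
micro = {
    "grammar_errors_per_word": 0.010,
    "spelling_errors_per_word": 0.005,
    "mean_word_length": 4.6,
    "rare_word_ratio": 0.12,
}

macro = {
    "conventions": 1.0 - 50 * (micro["grammar_errors_per_word"] + micro["spelling_errors_per_word"]),
    "vocabulary": 0.5 * micro["rare_word_ratio"] + 0.1 * micro["mean_word_length"],
}

weights = {"conventions": 0.6, "vocabulary": 0.4}
score = sum(weights[name] * value for name, value in macro.items())
print(round(score, 3))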
Peer reviewed
PDF on ERIC Download full text
Deane, Paul; Zhang, Mo – ETS Research Report Series, 2015
In this report, we examine the feasibility of characterizing writing performance using process features derived from a keystroke log. Using data derived from a set of "CBAL"™ writing assessments, we examine the following research questions: (a) How stable are the keystroke timing and process features across testing occasions?; (b) How…
Descriptors: Writing Processes, Feasibility Studies, Research Reports, Writing Achievement
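One simple way to probe the stability question in (a) is a test-retest correlation of a feature across occasions; the values below are invented for illustration.

# Invented feature values; statistics.correlation requires Python 3.10 or later.
import statistics

occasion_1 = [1.2, 0.8, 1.5, 1.0, 0.9]   # e.g., mean pause per student, first occasion
occasion_2 = [1.1, 0.9, 1.4, 1.2, 0.8]   # the same feature for the same students, retest

print(round(statistics.correlation(occasion_1, occasion_2), 2))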