Showing 1 to 15 of 17 results
Peer reviewed
Wang, Jue; Engelhard, George; Combs, Trenton – Journal of Experimental Education, 2023
Unfolding models are frequently used to develop scales for measuring attitudes. Recently, unfolding models have been applied to examine rater severity and accuracy within the context of rater-mediated assessments. One of the problems in applying unfolding models to rater-mediated assessments is that the substantive interpretations of the latent…
Descriptors: Writing Evaluation, Scoring, Accuracy, Computational Linguistics
Yi Gui – ProQuest LLC, 2024
This study explores using transfer learning in machine learning for natural language processing (NLP) to create generic automated essay scoring (AES) models, providing instant online scoring for statewide writing assessments in K-12 education. The goal is to develop an instant online scorer that is generalizable to any prompt, addressing the…
Descriptors: Writing Tests, Natural Language Processing, Writing Evaluation, Scoring
Peer reviewed
Minkyung Cho; Young-Suk Grace Kim; Jiali Wang – Scientific Studies of Reading, 2023
This study examined the extent of perspective taking and language features represented in secondary students' text-based analytical writing. We investigated: (1) whether perspective taking is related to writing quality, accounting for language features in writing; (2) whether students' English learner status is related to perspectives represented…
Descriptors: Perspective Taking, Secondary School Students, English Language Learners, English
Minkyung Cho; Young-Suk Grace Kim; Jiali Wang – Grantee Submission, 2022
This study examined the extent of perspective taking and language features represented in secondary students' text-based analytical writing. We investigated (1) whether perspective taking is related to writing quality, accounting for language features in writing; (2) whether students' English learner status is related to perspectives represented…
Descriptors: Perspective Taking, Secondary School Students, English Language Learners, English
Peer reviewed
Allen, Laura K.; Likens, Aaron D.; McNamara, Danielle S. – Grantee Submission, 2017
The current study examined the degree to which the quality and characteristics of students' essays could be modeled through dynamic natural language processing analyses. Undergraduate students (n = 131) wrote timed, persuasive essays in response to an argumentative writing prompt. Recurrent patterns of the words in the essays were then analyzed…
Descriptors: Writing Evaluation, Essays, Persuasive Discourse, Natural Language Processing
Peer reviewed
Eckstein, Grant; Schramm, Wesley; Noxon, Madeline; Snyder, Jenna – TESL-EJ, 2019
Researchers have found numerous differences in the approaches raters take to the complex task of essay rating, including differences when rating native (L1) and non-native (L2) English writing. Yet less is known about raters' reading practices while scoring those essays. This small-scale study uses eye-tracking technology and reflective protocols…
Descriptors: Eye Movements, Native Language, English (Second Language), Second Language Learning
Peer reviewed
Chen, Jing; Zhang, Mo; Bejar, Isaac I. – ETS Research Report Series, 2017
Automated essay scoring (AES) generally computes essay scores as a function of macrofeatures derived from a set of microfeatures extracted from the text using natural language processing (NLP). In the "e-rater"® automated scoring engine, developed at "Educational Testing Service" (ETS) for the automated scoring of essays, each…
Descriptors: Computer Assisted Testing, Scoring, Automation, Essay Tests
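The abstract above summarizes AES as computing an essay score from macrofeatures that aggregate NLP-derived microfeatures. A minimal sketch of that general idea, assuming invented feature names and weights (this is not e-rater's actual feature set or scoring model):

    # Hypothetical sketch of feature-based automated essay scoring.
    # Feature names and weights are invented for illustration only.
    from typing import Dict

    def extract_microfeatures(essay: str) -> Dict[str, float]:
        words = essay.split()
        sentences = [s for s in essay.split(".") if s.strip()]
        return {
            "avg_sentence_length": len(words) / max(len(sentences), 1),
            "type_token_ratio": len({w.lower() for w in words}) / max(len(words), 1),
            "word_count": float(len(words)),
        }

    def score_essay(essay: str, weights: Dict[str, float]) -> float:
        # Score = weighted combination of microfeatures, mirroring the
        # general formulation described in the abstract.
        features = extract_microfeatures(essay)
        return sum(weights[name] * value for name, value in features.items() if name in weights)

    # In practice the weights would be estimated against human scores.
    hypothetical_weights = {"avg_sentence_length": 0.02, "type_token_ratio": 2.0, "word_count": 0.001}
    print(score_essay("This is a short sample essay. It has two sentences.", hypothetical_weights))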
Peer reviewed
International Journal of Testing, 2019
These guidelines describe considerations relevant to the assessment of test takers in or across countries or regions that are linguistically or culturally diverse. The guidelines were developed by a committee of experts to help inform test developers, psychometricians, test users, and test administrators about fairness issues in support of the…
Descriptors: Test Bias, Student Diversity, Cultural Differences, Language Usage
Peer reviewed
Shermis, Mark D.; Mao, Liyang; Mulholland, Matthew; Kieftenbeld, Vincent – International Journal of Testing, 2017
This study uses the feature sets employed by two automated scoring engines to determine if a "linguistic profile" could be formulated that would help identify items that are likely to exhibit differential item functioning (DIF) based on linguistic features. Sixteen items were administered to 1200 students where demographic information…
Descriptors: Computer Assisted Testing, Scoring, Hypothesis Testing, Essays
Peer reviewed
Ma, Hong; Slater, Tammy – CALICO Journal, 2016
This study utilized a theory proposed by Mohan, Slater, Luo, and Jaipal (2002) regarding the Developmental Path of Cause to investigate AWE score use in classroom contexts. This "path" has the potential to support validity arguments because it suggests how causal linguistic features can be organized in hierarchical order. Utilization of…
Descriptors: Scores, Automation, Writing Evaluation, Computer Assisted Testing
Crossley, Scott A.; Kyle, Kristopher; Allen, Laura K.; Guo, Liang; McNamara, Danielle S. – Grantee Submission, 2014
This study investigates the potential for linguistic microfeatures related to length, complexity, cohesion, relevance, topic, and rhetorical style to predict L2 writing proficiency. Computational indices were calculated by two automated text analysis tools (Coh-Metrix and the Writing Assessment Tool) and used to predict human essay ratings in a…
Descriptors: Computational Linguistics, Essays, Scoring, Writing Evaluation
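The study above predicts human essay ratings from computational linguistic indices. A minimal sketch of that framing as a regression problem, assuming synthetic placeholder data and scikit-learn (the actual study used Coh-Metrix and Writing Assessment Tool indices, not the invented features below):

    # Hypothetical sketch: regress human essay ratings on linguistic indices.
    # The feature matrix and ratings below are synthetic placeholders.
    import numpy as np
    from sklearn.linear_model import LinearRegression

    rng = np.random.default_rng(0)
    X = rng.normal(size=(100, 5))  # 100 essays x 5 linguistic indices
    true_weights = np.array([0.5, 0.2, 0.1, 0.3, 0.4])
    y = X @ true_weights + rng.normal(scale=0.5, size=100)  # synthetic human ratings

    model = LinearRegression().fit(X, y)
    print("R^2 on training data:", model.score(X, y))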
Peer reviewed
Attali, Yigal; Sinharay, Sandip – ETS Research Report Series, 2015
The "e-rater"® automated essay scoring system is used operationally in the scoring of the argument and issue tasks that form the Analytical Writing measure of the "GRE"® General Test. For each of these tasks, this study explored the value added of reporting 4 trait scores for each of these 2 tasks over the total e-rater score.…
Descriptors: Scores, Computer Assisted Testing, Computer Software, Grammar
Peer reviewed
Shim, Yae Jie – Teaching English with Technology, 2013
The error-correction program "Criterion" provides students with immediate essay feedback using tools that can analyze and review writing automatically. This feedback covers grammar, usage, mechanics, style, organization, and development. With its diagnostic tools for scoring essays and offering relevant feedback, the error-correction…
Descriptors: Error Correction, Writing Evaluation, Essays, Scoring
Peer reviewed
Attali, Yigal; Powers, Don – ETS Research Report Series, 2008
This report describes the development of grade norms for timed-writing performance in two modes of writing: persuasive and descriptive. These norms are based on objective and automatically computed measures of writing quality in grammar, usage, mechanics, style, vocabulary, organization, and development. These measures are also used in the…
Descriptors: Grade 4, Grade 6, Grade 8, Grade 10
Peer reviewed
Schafer, William D.; Gagne, Phill; Lissitz, Robert W. – Educational Measurement: Issues and Practice, 2005
An assumption that is fundamental to the scoring of student-constructed responses (e.g., essays) is the ability of raters to focus on the response characteristics of interest rather than on other features. A common example, and the focus of this study, is the ability of raters to score a response based on the content achievement it demonstrates…
Descriptors: Scoring, Language Usage, Effect Size, Student Evaluation