Showing all 10 results
Peer reviewed
PDF on ERIC (full text available)
Doewes, Afrizal; Saxena, Akrati; Pei, Yulong; Pechenizkiy, Mykola – International Educational Data Mining Society, 2022
In Automated Essay Scoring (AES) systems, many previous works have studied group fairness using the demographic features of essay writers. However, individual fairness also plays an important role in fair evaluation and has not yet been explored. As introduced by Dwork et al., the fundamental concept of individual fairness is "similar people…
Descriptors: Scoring, Essays, Writing Evaluation, Comparative Analysis
Peer reviewed
Direct link
Myers, Matthew C.; Wilson, Joshua – International Journal of Artificial Intelligence in Education, 2023
This study evaluated the construct validity of six scoring traits of an automated writing evaluation (AWE) system called "MI Write." Persuasive essays (N = 100) written by students in grades 7 and 8 were randomized at the sentence level using a script written with Python's NLTK module. Each persuasive essay was randomized 30 times (n =…
Descriptors: Construct Validity, Automation, Writing Evaluation, Algorithms
Peer reviewed
PDF on ERIC (full text available)
Nese, Joseph F. T.; Alonzo, Julie; Kamata, Akihito – Grantee Submission, 2016
The purpose of this study was to compare traditional oral reading fluency (ORF) measures to a computerized oral reading evaluation (CORE) system that uses speech recognition software. We applied a mixed model approach with two within-subject variables to test the mean WCPM score differences and the error rates between: passage length (25, 50, 85,…
Descriptors: Text Structure, Oral Reading, Reading Fluency, Reading Tests
Peer reviewed
Direct link
Crossley, Scott A.; Kim, YouJin – Language Assessment Quarterly, 2019
The current study examined the effects of text-based relational (i.e., cohesion), propositional-specific (i.e., lexical), and syntactic features in a source text on subsequent integration of the source text in spoken responses. It further investigated the effects of word integration on human ratings of speaking performance while taking into…
Descriptors: Individual Differences, Syntax, Oral Language, Speech Communication
Peer reviewed
PDF on ERIC (full text available)
Chen, Jing; Sheehan, Kathleen M. – ETS Research Report Series, 2015
The TOEFL® family of assessments includes the TOEFL® Primary™, TOEFL Junior®, and TOEFL iBT® tests. The linguistic complexity of stimulus passages in the reading sections of the TOEFL family of assessments is expected to differ across the test levels. This study evaluates the linguistic…
Descriptors: Language Tests, Second Language Learning, English (Second Language), Reading Comprehension
Peer reviewed
Direct link
Liu, Ming; Li, Yi; Xu, Weiwei; Liu, Li – IEEE Transactions on Learning Technologies, 2017
Writing an essay is an important skill for students to master, but a difficult one to develop. This is particularly true for English as a Second Language (ESL) students in China. It would be very useful if students could receive timely and effective feedback about their writing. Automatic essay feedback generation is a challenging task,…
Descriptors: Foreign Countries, College Students, Second Language Learning, English (Second Language)
Peer reviewed
Direct link
Deane, Paul – Assessing Writing, 2013
This paper examines the construct measured by automated essay scoring (AES) systems. AES systems measure features of the text structure, linguistic structure, and conventional print form of essays; as such, they primarily measure text production skills. In the current state of the art, AES systems provide little direct evidence about such matters…
Descriptors: Scoring, Essays, Text Structure, Writing (Composition)
White, Sheida; Kim, Young Yee; Chen, Jing; Liu, Fei – National Center for Education Statistics, 2015
This study examined whether or not fourth-graders could fully demonstrate their writing skills on the computer and factors associated with their performance on the National Assessment of Educational Progress (NAEP) computer-based writing assessment. The results suggest that high-performing fourth-graders (those who scored in the upper 20 percent…
Descriptors: National Competency Tests, Computer Assisted Testing, Writing Tests, Grade 4
Peer reviewed
Direct link
Mesmer, Heidi Anne; Hiebert, Elfrieda H. – Journal of Literacy Research, 2015
The Common Core State Standards for English Language Arts (CCSS/ELA) focus on building student capacity to read complex texts. The Standards provide an explicit text complexity staircase that maps text levels to grade levels. Furthermore, the Standards articulate a rationale to accelerate text levels across grades to ensure students are able to…
Descriptors: Elementary School Students, Grade 3, Reading Skills, Language Proficiency
Peer reviewed
Direct link
Ketterlin-Geller, Leanne R.; McCoy, Jan D.; Twyman, Todd; Tindal, Gerald – Assessment for Effective Intervention, 2006
Curriculum-based measurement is a system for monitoring students' progress and formatively evaluating instruction, backed by 25 years of validation research. Most of this research has been conducted in elementary schools. In middle and high school classrooms, where there is an emphasis on mastering content knowledge, elementary-level measurements…
Descriptors: Curriculum Based Assessment, Academic Achievement, Cloze Procedure, Program Validation