Publication Date
In 2025 | 0
Since 2024 | 0
Since 2021 (last 5 years) | 9
Since 2016 (last 10 years) | 21
Since 2006 (last 20 years) | 56
Descriptor
Essay Tests | 102
Writing Tests | 102
Scoring | 37
Writing Evaluation | 37
Scores | 30
Computer Assisted Testing | 20
College Students | 19
English (Second Language) | 19
Writing Skills | 19
College Entrance Examinations | 18
Higher Education | 18
Author
Powers, Donald E. | 4
Deane, Paul | 3
Fowles, Mary E. | 3
Matter, M. Kevin | 3
Zhang, Mo | 3
Attali, Yigal | 2
Breland, Hunter M. | 2
Burstein, Jill | 2
Engelhard, George, Jr. | 2
Gyagenda, Ismail S. | 2
Higgins, Derrick | 2
Audience
Practitioners | 3
Teachers | 3
Location
Iran | 4
Canada | 3
Florida | 3
Georgia | 2
Singapore | 2
Washington | 2
California | 1
Connecticut | 1
Ethiopia | 1
Hong Kong | 1
Indiana | 1
Wheeler, Jordan M.; Engelhard, George; Wang, Jue – Measurement: Interdisciplinary Research and Perspectives, 2022
Objectively scoring constructed-response items on educational assessments has long been a challenge due to the use of human raters. Even well-trained raters using a rubric can assess essays inaccurately. Unfolding models measure raters' scoring accuracy by capturing the discrepancy between criterion and operational ratings, placing essays on an…
Descriptors: Accuracy, Scoring, Statistical Analysis, Models
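As a rough illustration of the accuracy notion described above (a toy sketch with invented scores, not the study's unfolding model), rater accuracy can be summarized as the discrepancy between operational and criterion ratings on the same essays:

    import numpy as np

    # Hypothetical ratings for five essays (not data from the study)
    criterion = np.array([4, 3, 5, 2, 4])    # expert/criterion ratings
    operational = np.array([4, 2, 5, 3, 3])  # operational rater's ratings

    discrepancy = np.abs(operational - criterion)  # per-essay discrepancy
    print("mean absolute discrepancy:", discrepancy.mean())
    print("exact agreement rate:", np.mean(operational == criterion))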
Almusharraf, Norah; Alotaibi, Hind – Technology, Knowledge and Learning, 2023
Evaluating written texts is believed to be a time-consuming process that can lack consistency and objectivity. Automated essay scoring (AES) can provide solutions to some of the limitations of human scoring. This research aimed to evaluate the performance of one AES system, Grammarly, in comparison to human raters. Both approaches' performances…
Descriptors: Writing Evaluation, Writing Tests, Essay Tests, Essays
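One standard way to compare an AES system with human raters is agreement on the same essays, for example quadratic weighted kappa; the sketch below uses invented scores and is not the study's actual analysis:

    from sklearn.metrics import cohen_kappa_score

    human_scores = [3, 4, 2, 5, 4, 3, 2, 4]  # hypothetical human ratings
    aes_scores = [3, 4, 3, 5, 3, 3, 2, 4]    # hypothetical automated ratings

    # Quadratic weighting penalizes large score disagreements more heavily
    qwk = cohen_kappa_score(human_scores, aes_scores, weights="quadratic")
    print(f"quadratic weighted kappa: {qwk:.3f}")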
Yan, Xun; Chuang, Ping-Lin – Language Testing, 2023
This study employed a mixed-methods approach to examine how rater performance develops during a semester-long rater certification program for an English as a Second Language (ESL) writing placement test at a large US university. From 2016 to 2018, we tracked three groups of novice raters (n = 30) across four rounds in the certification program.…
Descriptors: Evaluators, Interrater Reliability, Item Response Theory, Certification
Arefsadr, Sajjad; Babaii, Esmat; Hashemi, Mohammad Reza – International Journal of Language Testing, 2022
This study explored possible reasons why IELTS candidates usually score low in writing by investigating the effects of two different test designs and scoring criteria on Iranian IELTS candidates' obtained grades in IELTS and World Englishes (WEs) essay writing tests. To this end, first, a WEs essay writing test was preliminarily designed. Then, 17…
Descriptors: English (Second Language), Second Language Learning, Language Tests, Writing Evaluation
Sinharay, Sandip; Zhang, Mo; Deane, Paul – Applied Measurement in Education, 2019
Analysis of keystroke logging data is of increasing interest, as evident from a substantial amount of recent research on the topic. Some of the research on keystroke logging data has focused on the prediction of essay scores from keystroke logging features, but linear regression is the only prediction method that has been used in this research.…
Descriptors: Scores, Prediction, Writing Processes, Data Analysis
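The baseline mentioned in the abstract, predicting essay scores from keystroke-logging features with linear regression, can be sketched as follows; the feature names and values are hypothetical placeholders rather than the study's variables:

    import numpy as np
    from sklearn.linear_model import LinearRegression

    # Rows = essays; columns = total keystrokes, median inter-key interval (ms),
    # and number of long pauses (all invented for illustration)
    X = np.array([
        [2100, 180, 12],
        [3400, 150, 8],
        [1500, 240, 20],
        [2800, 170, 10],
    ])
    y = np.array([3.0, 4.5, 2.0, 4.0])  # hypothetical human essay scores

    model = LinearRegression().fit(X, y)
    print("predicted score for a new essay:", model.predict([[2600, 160, 9]])[0])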
Weejeong Jeong – ProQuest LLC, 2022
This study is an investigation of the effects of linguistic features on quality of second language (L2) writers' essays for writing course placement at Indiana University Bloomington (IUB), and by implication at other universities and colleges. This study addresses the following research questions: (1) To what extent do selected linguistic…
Descriptors: Linguistics, Language Usage, Second Language Learning, College Students
Wilson, Joshua; Rodrigues, Jessica – Grantee Submission, 2020
The present study leveraged advances in automated essay scoring (AES) technology to explore a proof of concept for a writing screener using the "Project Essay Grade" (PEG) program. First, the study investigated the extent to which an AES-scored multi-prompt writing screener accurately classified students as at risk of failing a Common…
Descriptors: Writing Tests, Screening Tests, Classification, Accuracy
Gebrekidan, Habtamu; Zeru, Assefa – Cogent Education, 2023
Research in general education as well as in language teaching has clearly shown the pivotal role that conceptions of learning play in students' learning outcomes. However, there is a paucity of research on instructional and assessment schemes that promote deep conceptions of writing. The main objective of this study, therefore, was to examine the…
Descriptors: Portfolio Assessment, English (Second Language), Second Language Learning, Second Language Instruction
Ray, Amber B.; Graham, Steve – Learning Disability Quarterly, 2021
High school students with high-incidence disabilities and struggling writers face considerable challenges when taking writing assessments designed for college entrance. This study examined the effectiveness of a writing intervention for improving students' performance on a college entrance exam, the writing assessment for the ACT. Students were…
Descriptors: High School Students, Students with Disabilities, Writing Difficulties, Writing Evaluation
Correnti, Richard; Matsumura, Lindsay Clare; Wang, Elaine; Litman, Diane; Rahimi, Zahra; Kisa, Zahid – Reading Research Quarterly, 2020
Despite the importance of analytic text-based writing, relatively little is known about how to teach this important skill. A persistent barrier to conducting research that would provide insight into best practices for teaching this form of writing is a lack of outcome measures that assess students' analytic text-based writing development and that…
Descriptors: Writing Evaluation, Writing Tests, Computer Assisted Testing, Scoring
Choi, Ikkyu; Hao, Jiangang; Deane, Paul; Zhang, Mo – ETS Research Report Series, 2021
"Biometrics" are physical or behavioral human characteristics that can be used to identify a person. It is widely known that keystroke or typing dynamics for short, fixed texts (e.g., passwords) could serve as a behavioral biometric. In this study, we investigate whether keystroke data from essay responses can lead to a reliable…
Descriptors: Accuracy, High Stakes Tests, Writing Tests, Benchmarking
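A toy sketch of the keystroke-biometric idea, with invented features and an arbitrary threshold, is to compare the keystroke-timing profile from an essay session against a stored reference profile for the claimed writer:

    import numpy as np

    # Hypothetical profiles: mean inter-key interval (ms), its SD, and pause rate
    reference = np.array([165.0, 48.0, 0.12])  # stored profile for the claimed writer
    session = np.array([158.0, 52.0, 0.10])    # profile from the current essay

    # Cosine similarity between the two profiles; 0.99 is a hypothetical threshold
    cosine = reference @ session / (np.linalg.norm(reference) * np.linalg.norm(session))
    print("likely the same writer:", cosine > 0.99)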
Wilson, Joshua; Chen, Dandan; Sandbank, Micheal P.; Hebert, Michael – Journal of Educational Psychology, 2019
The present study examined issues pertaining to the reliability of writing assessment in the elementary grades, and among samples of struggling and nonstruggling writers. The present study also extended nascent research on the reliability and the practical applications of automated essay scoring (AES) systems in Response to Intervention frameworks…
Descriptors: Computer Assisted Testing, Automation, Scores, Writing Tests
Nazerian, Samaneh; Abbasian, Gholam-Reza; Mohseni, Ahmad – Cogent Education, 2021
Despite growing interest in studies on the Zone of Proximal Development (ZPD), its operation in individualized and group-wide forms has been controversial. To cast some empirical light on the issue, this study was designed to examine the applicability of the two scenarios of ZPD-based instruction to the writing accuracy of two levels of…
Descriptors: Sociocultural Patterns, Second Language Learning, Second Language Instruction, English (Second Language)
Almond, Russell G. – International Journal of Testing, 2014
Assessments consisting of only a few extended constructed-response items (essays) are not typically equated using anchor test designs, as there are usually too few essay prompts in each form to allow for meaningful equating. This article explores the idea that output from an automated scoring program designed to measure writing fluency (a common…
Descriptors: Automation, Equated Scores, Writing Tests, Essay Tests
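For context only (this is not the article's anchor-based design), a plain linear mean/sigma equating of two essay forms can be sketched with invented score distributions:

    import numpy as np

    form_x = np.array([2.5, 3.0, 3.5, 4.0, 4.5, 5.0])  # hypothetical Form X essay scores
    form_y = np.array([3.0, 3.2, 3.8, 4.1, 4.6, 5.2])  # hypothetical Form Y essay scores

    def linear_equate(x, ref):
        # Map x onto ref's scale: y = mu_ref + (sigma_ref / sigma_x) * (x - mu_x)
        return ref.mean() + ref.std(ddof=1) / x.std(ddof=1) * (x - x.mean())

    print(linear_equate(form_x, form_y))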
Ardolino, Piermatteo; Noventa, Stefano; Formicuzzi, Maddalena; Cubico, Serena; Favretto, Giuseppe – Higher Education: The International Journal of Higher Education Research, 2016
An observational study was carried out to analyse differences in performance between students from different undergraduate curricula on the same written business administration examination, focusing particularly on possible effects of "integrated" or "multi-modular" examinations, a format that has recently become widespread in Italian…
Descriptors: Business Administration, Undergraduate Study, Higher Education, Foreign Countries