Yves Bestgen – Applied Linguistics, 2024
Measuring lexical diversity in texts that have different lengths is problematic because length has a significant effect on the number of types a text contains, thus hampering any comparison. Treffers-Daller et al. (2018) recommended a simple solution, namely counting the number of types in a section of a given length that was extracted from the…
Descriptors: Language Variation, Second Language Learning, Essays, Writing Evaluation
Tahereh Firoozi; Okan Bulut; Mark J. Gierl – International Journal of Assessment Tools in Education, 2023
The proliferation of large language models represents a paradigm shift in the landscape of automated essay scoring (AES) systems, fundamentally elevating their accuracy and efficacy. This study presents an extensive examination of large language models, with a particular emphasis on the transformative influence of transformer-based models, such as…
Descriptors: Turkish, Writing Evaluation, Essays, Accuracy
Jae Q. J. Liu; Kelvin T. K. Hui; Fadi Al Zoubi; Zing Z. X. Zhou; Dino Samartzis; Curtis C. H. Yu; Jeremy R. Chang; Arnold Y. L. Wong – International Journal for Educational Integrity, 2024
The application of artificial intelligence (AI) in academic writing has raised concerns regarding accuracy, ethics, and scientific rigour. Some AI content detectors may not accurately identify AI-generated texts, especially those that have undergone paraphrasing. Therefore, there is a pressing need for efficacious approaches or guidelines to…
Descriptors: Artificial Intelligence, Investigations, Identification, Human Factors Engineering
Timothy Oleksiak – College Composition and Communication, 2020
If, as I argue, student-to-student peer review is animated by "improvement imperatives" that make peer review a form of what Lauren Berlant calls "cruel optimism," then rhetoric and composition will need to imagine theories and structures for peer review that do not repeat cruel attachments. I offer slow peer review as a…
Descriptors: Peer Evaluation, Writing Evaluation, Writing (Composition), Writing Assignments
Kat O'Meara – Journal of Response to Writing, 2022
Alternative approaches to assessment in education (many of which are linked to inclusive and antiracist pedagogies) are gaining in popularity across the board, from PK-12 to higher education (Esquivel, 2021; St. Amour, 2020). One such antiracist assessment strategy is using labor-based grading contracts (LBGCs), popularized by Inoue (2019; see…
Descriptors: Writing Evaluation, Grading, Alternative Assessment, Racism
Finch, Mary – English in Australia, 2021
Hattie and Timperley's (2007) model of effective feedback, widely used in teacher professional development, provides an easily-applied framework for thinking about the information contained in feedback. However, the model simplifies a complex phenomenon shaped in practice by interpersonal, disciplinary and institutional aspects. Examining the…
Descriptors: Feedback (Response), Writing Evaluation, Writing Instruction, Faculty Development
Chapelle, Carol A.; Voss, Erik – Language Learning & Technology, 2016
This review article provides an analysis of the research from the last two decades on the theme of technology and second language assessment. Based on an examination of the assessment scholarship published in "Language Learning & Technology" since its launch in 1997, we analyzed the review articles, research articles, book reviews,…
Descriptors: Educational Technology, Efficiency, Second Language Learning, Second Language Instruction
Salmani Nodoushan, Mohammad Ali – Online Submission, 2014
As a language skill, writing has had, still has and will continue to have an important role in shaping the scientific structure of human life in that it is the medium through which scientific content is stored, retained, and transmitted. It has therefore been a major concern for writing teachers and researchers to find a reliable method for…
Descriptors: Writing Skills, Writing Evaluation, Scoring, Holistic Approach
Ericsson, Patricia; Hunter, Leeann Downing; Macklin, Tialitha Michelle; Edwards, Elizabeth Sue – Composition Forum, 2016
Multimodal pedagogy is increasingly accepted among composition scholars. However, putting such pedagogy into practice presents significant challenges. In this profile of Washington State University's first-year composition program, we suggest a multi-vocal and multi-theoretical approach to addressing the challenges of multimodal pedagogy. Patricia…
Descriptors: Freshman Composition, Performance Factors, Educational Theories, Stakeholders
Graham, Steve – Literacy Research and Instruction, 2014
In this response to Burdick et al. (2013), the author describes two possible and perhaps even common reactions to the article by Burdick et al. (2013). Advocates such as Way, Davis, and Strain-Seymour (2008) will likely applaud the development of the Writing Ability Developmental Scale and the possible widespread use of computer-based writing…
Descriptors: Writing Evaluation, Evaluation Methods, Evaluation Research, Alternative Assessment
Ramineni, Chaitanya; Williamson, David M. – Assessing Writing, 2013
In this paper, we provide an overview of psychometric procedures and guidelines Educational Testing Service (ETS) uses to evaluate automated essay scoring for operational use. We briefly describe the e-rater system, the procedures and criteria used to evaluate e-rater, implications for a range of potential uses of e-rater, and directions for…
Descriptors: Educational Testing, Guidelines, Scoring, Psychometrics
Condon, William – Assessing Writing, 2013
Automated Essay Scoring (AES) has garnered a great deal of attention from the rhetoric and composition/writing studies community since the Educational Testing Service began using e-rater® and the "Criterion"® Online Writing Evaluation Service as products in scoring writing tests, and most of the responses have been negative. While the…
Descriptors: Measurement, Psychometrics, Evaluation Methods, Educational Testing
McNair, Daniel J.; Curry, Toi L. – Journal of Postsecondary Education and Disability, 2013
This review of current writing assessment practices focuses upon the adult population, an area significantly underrepresented within psychoeducational literature. As compared to other populations, such as K-12 students, there are few options for the practitioner wishing to evaluate adult writers by means of standardized assessment instruments.…
Descriptors: Writing Evaluation, College Students, Writing Skills, Evaluation Methods
Graham, Steve; Harris, Karen; Hebert, Michael – Carnegie Corporation of New York, 2011
During this decade there have been numerous efforts to identify instructional practices that improve students' writing. These include "Reading Next" (Biancarosa and Snow, 2004), which provided a set of instructional recommendations for improving writing, and "Writing Next" (Graham and Perin, 2007) and "Writing to Read" (Graham and Hebert, 2010),…
Descriptors: Writing Evaluation, Formative Evaluation, Writing Improvement, Writing Instruction
Attali, Yigal; Bridgeman, Brent; Trapani, Catherine – Journal of Technology, Learning, and Assessment, 2010
A generic approach in automated essay scoring produces scores that have the same meaning across all prompts, existing or new, of a writing assessment. This is accomplished by using a single set of linguistic indicators (or features), a consistent way of combining and weighting these features into essay scores, and a focus on features that are not…
Descriptors: Writing Evaluation, Writing Tests, Scoring, Test Scoring Machines