Chan, Kinnie Kin Yee; Bond, Trevor; Yan, Zi – Language Testing, 2023
We investigated the relationship between the scores assigned by an Automated Essay Scoring (AES) system, the Intelligent Essay Assessor (IEA), and grades allocated by trained, professional human raters to English essay writing by instigating two procedures novel to written-language assessment: the logistic transformation of AES raw scores into…
Descriptors: Computer Assisted Testing, Essays, Scoring, Scores
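The abstract truncates before describing the transformation in detail, but a logistic transformation of bounded raw scores typically rescales each score to a proportion of the rubric range and maps it onto the log-odds scale. The sketch below is an illustration of that general idea, not the study's actual procedure; the 0-6 rubric range and the clamping constant are assumptions.

```python
import math

def logistic_transform(raw, min_score, max_score, eps=1e-3):
    """Map a bounded raw AES score onto the logit (log-odds) scale.

    The raw score is rescaled to a proportion of the possible range,
    then clamped away from 0 and 1 so the logit stays finite.
    """
    p = (raw - min_score) / (max_score - min_score)
    p = min(max(p, eps), 1 - eps)
    return math.log(p / (1 - p))

# Hypothetical raw scores on an assumed 0-6 rubric.
for raw in (1, 3, 5):
    print(round(logistic_transform(raw, 0, 6), 3))
```

On this scale, scores symmetric about the rubric midpoint map to logits symmetric about zero, which is what makes the transformed scores comparable to rater measures from a Rasch-style analysis.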
Yi Gui – ProQuest LLC, 2024
This study explores using transfer learning in machine learning for natural language processing (NLP) to create generic automated essay scoring (AES) models, providing instant online scoring for statewide writing assessments in K-12 education. The goal is to develop an instant online scorer that is generalizable to any prompt, addressing the…
Descriptors: Writing Tests, Natural Language Processing, Writing Evaluation, Scoring
Chen, Dandan; Hebert, Michael; Wilson, Joshua – American Educational Research Journal, 2022
We used multivariate generalizability theory to examine the reliability of hand-scoring and automated essay scoring (AES) and to identify how these scoring methods could be used in conjunction to optimize writing assessment. Students (n = 113) included subsamples of struggling writers and non-struggling writers in Grades 3-5 drawn from a larger…
Descriptors: Reliability, Scoring, Essays, Automation
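The study's multivariate generalizability design is richer than can be shown briefly, but the core machinery can be illustrated with a one-facet persons x raters crossed design: variance components are recovered from ANOVA mean squares, and a G coefficient expresses score reliability for a given number of raters. This is a minimal sketch of that standard calculation, with made-up ratings; it is not the authors' analysis.

```python
def variance_components(scores):
    """Estimate variance components for a persons x raters crossed design.

    `scores[p][r]` is the score rater r assigned to person p. Components
    are recovered from random-effects ANOVA mean squares.
    """
    n_p, n_r = len(scores), len(scores[0])
    grand = sum(map(sum, scores)) / (n_p * n_r)
    p_means = [sum(row) / n_r for row in scores]
    r_means = [sum(scores[p][r] for p in range(n_p)) / n_p for r in range(n_r)]
    ss_p = n_r * sum((m - grand) ** 2 for m in p_means)
    ss_r = n_p * sum((m - grand) ** 2 for m in r_means)
    ss_tot = sum((x - grand) ** 2 for row in scores for x in row)
    ms_res = (ss_tot - ss_p - ss_r) / ((n_p - 1) * (n_r - 1))
    var_p = max((ss_p / (n_p - 1) - ms_res) / n_r, 0.0)  # persons (true score)
    var_r = max((ss_r / (n_r - 1) - ms_res) / n_p, 0.0)  # rater severity
    return var_p, var_r, ms_res                          # ms_res = interaction/error

def g_coefficient(var_p, var_res, n_r):
    """Relative G coefficient for a measurement design averaging n_r raters."""
    return var_p / (var_p + var_res / n_r)

# Hypothetical essay scores: 3 students, each scored by the same 2 raters.
ratings = [[4, 5], [2, 3], [6, 5]]
vp, vr, vres = variance_components(ratings)
print(round(g_coefficient(vp, vres, n_r=2), 2))  # → 0.86
```

Raising `n_r` in `g_coefficient` shows the decision-study logic the authors exploit: adding raters (or combining hand-scoring with AES) shrinks the error term and raises reliability.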
Myers, Matthew C.; Wilson, Joshua – International Journal of Artificial Intelligence in Education, 2023
This study evaluated the construct validity of six scoring traits of an automated writing evaluation (AWE) system called "MI Write." Persuasive essays (N = 100) written by students in grades 7 and 8 were randomized at the sentence-level using a script written with Python's NLTK module. Each persuasive essay was randomized 30 times (n =…
Descriptors: Construct Validity, Automation, Writing Evaluation, Algorithms
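The randomization manipulation described above (scrambling sentence order to test whether AWE trait scores are sensitive to discourse organization) can be sketched in a few lines. The study used NLTK's sentence tokenizer; the version below substitutes a naive stdlib regex split so the sketch is self-contained, and the sample essay is invented.

```python
import random
import re

def randomize_sentences(essay, seed=None):
    """Return the essay's sentences rejoined in a random order.

    A naive regex split stands in for the NLTK sentence tokenizer
    (nltk.sent_tokenize) that the study's script used.
    """
    sentences = [s.strip() for s in re.split(r"(?<=[.!?])\s+", essay.strip()) if s]
    rng = random.Random(seed)       # seed makes each permutation reproducible
    rng.shuffle(sentences)
    return " ".join(sentences)

essay = "Dogs make great pets. They are loyal. They also need exercise."
scrambled = randomize_sentences(essay, seed=1)
print(scrambled)
```

Scoring each of the 30 permutations per essay and comparing trait scores against the original then indicates which traits actually respond to organization rather than to surface features alone.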
Correnti, Richard; Matsumura, Lindsay Clare; Wang, Elaine; Litman, Diane; Rahimi, Zahra; Kisa, Zahid – Reading Research Quarterly, 2020
Despite the importance of analytic text-based writing, relatively little is known about how to teach this important skill. A persistent barrier to conducting research that would provide insight on best practices for teaching this form of writing is a lack of outcome measures that assess students' analytic text-based writing development and that…
Descriptors: Writing Evaluation, Writing Tests, Computer Assisted Testing, Scoring
Wilson, Joshua; Wen, Huijing – Elementary School Journal, 2022
This study investigated fourth and fifth graders' metacognitive knowledge about writing and its relationship to writing performance to help identify areas that might be leveraged when designing effective writing instruction. Students' metacognitive knowledge was probed using a 30-minute informative writing prompt requiring students to teach their…
Descriptors: Elementary School Students, Metacognition, Writing Attitudes, Writing (Composition)
Wilson, Joshua; Chen, Dandan; Sandbank, Micheal P.; Hebert, Michael – Journal of Educational Psychology, 2019
The present study examined issues pertaining to the reliability of writing assessment in the elementary grades, and among samples of struggling and nonstruggling writers. The present study also extended nascent research on the reliability and the practical applications of automated essay scoring (AES) systems in Response to Intervention frameworks…
Descriptors: Computer Assisted Testing, Automation, Scores, Writing Tests
Gerard, Libby F.; Linn, Marcia – AERA Online Paper Repository, 2016
We investigate how technologies that automatically score student written essays and assign individualized guidance can support student writing and revision in science. We used the automated scoring tools to assign guidance for student written essays in an online science unit, and studied how students revised their essays based on the guidance and…
Descriptors: Science Instruction, Technical Writing, Revision (Written Composition), Grade 7
Foxworth, Lauren L.; Hashey, Andrew; Sukhram, Diana P. – Reading & Writing Quarterly, 2019
In an age when students are increasingly expected to demonstrate technology-based writing proficiency, fluency challenges with word processing programs can pose a barrier to successful writing when students are asked to compose using these tools. The current study was designed to determine whether differences existed in typing fluency and digital…
Descriptors: Writing Skills, Students with Disabilities, Learning Disabilities, Word Processing
Laurie, Robert; Bridglall, Beatrice L.; Arseneault, Patrick – SAGE Open, 2015
The effect of using a computer or paper and pencil on student writing scores on a provincial standardized writing assessment was studied. A sample of 302 francophone students wrote a short essay using a computer equipped with Microsoft Word with all of its correction functions enabled. One week later, the same students wrote a second short essay…
Descriptors: Writing Evaluation, Writing Tests, Computer Assisted Testing, Writing Achievement
Deane, Paul – ETS Research Report Series, 2014
This paper explores automated methods for measuring features of student writing and determining their relationship to writing quality and other features of literacy, such as reading test scores. In particular, it uses the "e-rater"™ automatic essay scoring system to measure "product" features (measurable traits of the final…
Descriptors: Writing Processes, Writing Evaluation, Student Evaluation, Writing Skills
Prvinchandar, Sunita; Ayub, Ahmad Fauzi Mohd – English Language Teaching, 2014
This study compared the effectiveness of two types of computer software for improving the English writing skills of pupils in a Malaysian primary school. Sixty students who participated in the seven-week training course were divided into two groups, with the experimental group using the StyleWriter software and the control group using the…
Descriptors: Writing Skills, Courseware, Writing Improvement, Elementary School Students
Shelley, Mack, Ed.; Akcay, Hakan, Ed.; Ozturk, Omer Tayfur, Ed. – International Society for Technology, Education, and Science, 2022
"Proceedings of International Conference on Research in Education and Science" includes full papers presented at the International Conference on Research in Education and Science (ICRES) which took place on March 24-27, 2022 in Antalya, Turkey. The aim of the conference is to offer opportunities to share ideas, to discuss theoretical and…
Descriptors: Educational Technology, Technology Uses in Education, Computer Peripherals, Equipment
Wilson, Joshua; Olinghouse, Natalie G.; Andrada, Gilbert N. – Learning Disabilities: A Contemporary Journal, 2014
The current study examines data from students in grades 4-8 who participated in a statewide computer-based benchmark writing assessment that featured automated essay scoring and automated feedback. We examined whether the use of automated feedback was associated with gains in writing quality across revisions to an essay, and with transfer effects…
Descriptors: Writing Skills, Writing Ability, Feedback (Response), Computer Mediated Communication
Burke, Jennifer N.; Cizek, Gregory J. – Assessing Writing, 2006
This study was conducted to gather evidence regarding effects of the mode of writing (handwritten vs. word-processed) on compositional quality in a sample of sixth grade students. Questionnaire data and essay scores were gathered to examine the effect of composition mode on essay scores of students of differing computer skill levels. The study was…
Descriptors: Computer Assisted Testing, High Stakes Tests, Writing Processes, Grade 6