Showing 1 to 15 of 39 results
Akif Avcu – Malaysian Online Journal of Educational Technology, 2025
This scoping review traces the milestones by which Hierarchical Rater Models (HRMs) became operable for use in automated essay scoring (AES) to improve instructional evaluation. Although essay evaluations--a useful instrument for assessing higher-order cognitive abilities--have always depended on human raters, concerns regarding rater bias,…
Descriptors: Automation, Scoring, Models, Educational Assessment
Peer reviewed
Direct link
Hosnia M. M. Ahmed; Shaymaa E. Sorour – Education and Information Technologies, 2024
Evaluating the quality of university exam papers is crucial for universities seeking institutional and program accreditation. Currently, exam papers are assessed manually, a process that can be tedious, lengthy, and in some cases, inconsistent. This is often due to the focus on assessing only the formal specifications of exam papers. This study…
Descriptors: Higher Education, Artificial Intelligence, Writing Evaluation, Natural Language Processing
Peer reviewed
PDF on ERIC Download full text
Naima Debbar – International Journal of Contemporary Educational Research, 2024
Intelligent essay grading systems constitute important tools for educational technology. They can substantially reduce manual scoring effort and provide instructional feedback as well. These systems typically include two main parts: a feature extractor and an automatic grading model. The latter is generally based on computational and…
Descriptors: Test Scoring Machines, Computer Uses in Education, Artificial Intelligence, Essay Tests
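The two-part architecture described above can be made concrete with a minimal sketch: a toy feature extractor feeding a regression-based grader. The surface features and the ridge model below are illustrative assumptions, not details taken from the paper.

```python
import re
import numpy as np
from sklearn.linear_model import Ridge

def extract_features(essay: str) -> np.ndarray:
    """Toy feature extractor: surface features often used in AES."""
    words = re.findall(r"[A-Za-z']+", essay)
    sentences = [s for s in re.split(r"[.!?]+", essay) if s.strip()]
    n_words = len(words)
    return np.array([
        n_words,                                                # essay length
        len(set(w.lower() for w in words)) / max(n_words, 1),   # type-token ratio
        n_words / max(len(sentences), 1),                       # mean sentence length
        sum(len(w) for w in words) / max(n_words, 1),           # mean word length
    ])

# Automatic grading model: fit the features to human scores (toy data).
essays = ["The cat sat. It was warm.",
          "Economic policy shapes incentives in subtle and far-reaching ways."]
human_scores = [2.0, 4.0]
X = np.vstack([extract_features(e) for e in essays])
model = Ridge(alpha=1.0).fit(X, human_scores)
print(model.predict(extract_features("A new essay to grade.").reshape(1, -1)))
```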
Peer reviewed
Direct link
Wijanarko, Bambang Dwi; Heryadi, Yaya; Toba, Hapnes; Budiharto, Widodo – Education and Information Technologies, 2021
Automated question generation is a task to generate questions from structured or unstructured data. The increasing popularity of online learning in recent years has given momentum to automated question generation in the education field for facilitating the learning process, learning material retrieval, and computer-based testing. This paper reports on the…
Descriptors: Foreign Countries, Undergraduate Students, Engineering Education, Computer Software
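As a heavily simplified illustration of generating questions from structured data, here is a template-based sketch; the fact triples and templates are invented for illustration and are not the paper's method.

```python
# Each fact is a (subject, verb, object) triple; questions are filled-in templates.
facts = [
    ("Ohm's law", "relates", "voltage, current, and resistance"),
    ("a compiler", "translates", "source code into machine code"),
]
for subject, verb, obj in facts:
    print(f"What does {subject} {verb}?")              # recall-style question
    print(f"True or false: {subject} {verb} {obj}.")   # verification-style question
```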
Wilson, Joshua; Rodrigues, Jessica – Grantee Submission, 2020
The present study leveraged advances in automated essay scoring (AES) technology to explore a proof of concept for a writing screener using the "Project Essay Grade" (PEG) program. First, the study investigated the extent to which an AES-scored multi-prompt writing screener accurately classified students as at risk of failing a Common…
Descriptors: Writing Tests, Screening Tests, Classification, Accuracy
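The screener's classification accuracy can be illustrated by comparing flagged students against an outcome criterion. The scores, outcomes, and cut point below are invented, not the study's data.

```python
import numpy as np

# Hypothetical data: AES screener scores and whether each student later
# failed the writing test (1 = failed). All values are invented.
screener = np.array([2.1, 3.5, 1.8, 4.0, 2.9, 1.5, 3.8, 2.4])
failed   = np.array([1,   0,   1,   0,   0,   1,   0,   1  ])

cut = 2.5                    # assumed cut score
at_risk = screener < cut     # flag students scoring below the cut

tp = np.sum(at_risk & (failed == 1))    # correctly flagged
fn = np.sum(~at_risk & (failed == 1))   # missed
tn = np.sum(~at_risk & (failed == 0))
fp = np.sum(at_risk & (failed == 0))

print("sensitivity:", tp / (tp + fn))   # proportion of failing students flagged
print("specificity:", tn / (tn + fp))
```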
Peer reviewed
Direct link
Wilson, Joshua; Huang, Yue; Palermo, Corey; Beard, Gaysha; MacArthur, Charles A. – International Journal of Artificial Intelligence in Education, 2021
This study examined a naturalistic, districtwide implementation of an automated writing evaluation (AWE) software program called "MI Write" in elementary schools. We specifically examined the degree to which aspects of MI Write were implemented, teacher and student attitudes towards MI Write, and whether MI Write usage along with other…
Descriptors: Automation, Writing Evaluation, Feedback (Response), Computer Software
Wilson, Joshua; Huang, Yue; Palermo, Corey; Beard, Gaysha; MacArthur, Charles A. – Grantee Submission, 2021
This study examined a naturalistic, districtwide implementation of an automated writing evaluation (AWE) software program called "MI Write" in elementary schools. We specifically examined the degree to which aspects of MI Write were implemented, teacher and student attitudes towards MI Write, and whether MI Write usage along with other…
Descriptors: Automation, Writing Evaluation, Feedback (Response), Computer Software
Peer reviewed
PDF on ERIC Download full text
Chen, Jing; Fife, James H.; Bejar, Isaac I.; Rupp, André A. – ETS Research Report Series, 2016
The "e-rater"® automated scoring engine used at Educational Testing Service (ETS) scores the writing quality of essays. In the current practice, e-rater scores are generated via a multiple linear regression (MLR) model as a linear combination of various features evaluated for each essay and human scores as the outcome variable. This…
Descriptors: Scoring, Models, Artificial Intelligence, Automation
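The abstract states the essence of the MLR approach: a machine score is a weighted sum of essay features, with weights estimated by regressing human scores on those features. A minimal sketch follows, with invented feature values rather than actual e-rater features.

```python
import numpy as np

# Rows: essays; columns: illustrative features (e.g., grammar, usage,
# organization, length). All values are invented for this sketch.
X = np.array([
    [0.8, 0.7, 0.6, 320],
    [0.5, 0.4, 0.5, 180],
    [0.9, 0.8, 0.9, 450],
    [0.3, 0.5, 0.2, 150],
])
human = np.array([4.0, 2.5, 5.0, 2.0])  # human scores as outcome variable

# Add an intercept column and solve the least-squares regression.
A = np.hstack([np.ones((len(X), 1)), X])
weights, *_ = np.linalg.lstsq(A, human, rcond=None)

# Machine scores are the fitted linear combination of features.
machine_scores = A @ weights
print(np.round(machine_scores, 2))
```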
Peer reviewed
Direct link
Correnti, Richard; Matsumura, Lindsay Clare; Wang, Elaine; Litman, Diane; Rahimi, Zahra; Kisa, Zahid – Reading Research Quarterly, 2020
Despite the importance of analytic text-based writing, relatively little is known about how to teach this important skill. A persistent barrier to conducting research that would provide insight into best practices for teaching this form of writing is the lack of outcome measures that assess students' analytic text-based writing development and that…
Descriptors: Writing Evaluation, Writing Tests, Computer Assisted Testing, Scoring
Peer reviewed
Direct link
Kieftenbeld, Vincent; Boyer, Michelle – Applied Measurement in Education, 2017
Automated scoring systems are typically evaluated by comparing the performance of a single automated rater item-by-item to human raters. This presents a challenge when the performance of multiple raters needs to be compared across multiple items. Rankings could depend on specifics of the ranking procedure; observed differences could be due to…
Descriptors: Automation, Scoring, Comparative Analysis, Test Items
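One way to picture the comparison problem is to compute an agreement statistic such as quadratic weighted kappa for each automated rater on each item; with simulated data it is easy to see how item-by-item rankings can shift by chance. Everything below is invented for illustration and is not the article's procedure.

```python
import numpy as np
from sklearn.metrics import cohen_kappa_score

rng = np.random.default_rng(0)

# Hypothetical setup: human scores on three items, plus two automated
# raters that perturb different fractions of the human scores.
n = 200
human = {item: rng.integers(0, 5, n) for item in ["item1", "item2", "item3"]}

def noisy_rater(scores, flip_prob):
    """Automated rater that randomly replaces a fraction of human scores."""
    noise = rng.integers(0, 5, len(scores))
    mask = rng.random(len(scores)) < flip_prob
    return np.where(mask, noise, scores)

for item, h in human.items():
    k_a = cohen_kappa_score(h, noisy_rater(h, 0.15), weights="quadratic")
    k_b = cohen_kappa_score(h, noisy_rater(h, 0.20), weights="quadratic")
    print(f"{item}: rater A QWK={k_a:.2f}, rater B QWK={k_b:.2f}")
```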
Peer reviewed
Direct link
Wilson, Joshua; Chen, Dandan; Sandbank, Micheal P.; Hebert, Michael – Journal of Educational Psychology, 2019
The present study examined issues pertaining to the reliability of writing assessment in the elementary grades, among samples of struggling and nonstruggling writers. It also extended nascent research on the reliability and the practical applications of automated essay scoring (AES) systems in Response to Intervention frameworks…
Descriptors: Computer Assisted Testing, Automation, Scores, Writing Tests
Peer reviewed
Direct link
Zimmerman, Whitney Alicia; Kang, Hyun Bin; Kim, Kyung; Gao, Mengzhao; Johnson, Glenn; Clariana, Roy; Zhang, Fan – Journal of Statistics Education, 2018
Over two semesters, short essay prompts were developed for use with the Graphical Interface for Knowledge Structure (GIKS), an automated essay scoring system. Participants were students in an undergraduate-level online introductory statistics course. The GIKS compares students' writing samples with an expert's to produce keyword occurrence and…
Descriptors: Undergraduate Students, Introductory Courses, Statistics, Computer Assisted Testing
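The keyword-occurrence comparison can be sketched as a simple overlap check between a student response and an expert keyword set. The keywords and text below are invented, and GIKS itself goes further by analyzing the link structure among keywords.

```python
import re

# Expert keyword set and a student response (both invented for illustration).
expert_keywords = {"mean", "median", "skewed", "outlier", "distribution"}
student = ("The mean is pulled toward outliers, so the median is better "
           "for a skewed distribution.")

# Exact-match token overlap; a real system would also handle stemming
# (note "outliers" below fails to match the keyword "outlier").
tokens = set(re.findall(r"[a-z]+", student.lower()))

present = expert_keywords & tokens
missing = expert_keywords - tokens
print("matched:", sorted(present))
print("missing:", sorted(missing))
print("coverage:", len(present) / len(expert_keywords))
```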
Peer reviewed
Direct link
Perin, Dolores; Lauterbach, Mark – International Journal of Artificial Intelligence in Education, 2018
The problem of poor writing skills at the postsecondary level is a large and troubling one. This study investigated the writing skills of low-skilled adults attending college developmental education courses by determining whether variables from an automated scoring system were predictive of human scores on writing quality rubrics. The human-scored…
Descriptors: College Students, Writing Evaluation, Writing Skills, Developmental Studies Programs
Peer reviewed
Direct link
Almond, Russell G. – International Journal of Testing, 2014
Assessments consisting of only a few extended constructed response items (essays) are not typically equated using anchor test designs as there are typically too few essay prompts in each form to allow for meaningful equating. This article explores the idea that output from an automated scoring program designed to measure writing fluency (a common…
Descriptors: Automation, Equated Scores, Writing Tests, Essay Tests
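The idea of linking essay forms through a common automated fluency measure can be illustrated with generic chained linear equating. This is a standard linking method, not necessarily the article's procedure, and all data below are simulated.

```python
import numpy as np

rng = np.random.default_rng(1)

def linear_link(mu_from, sd_from, mu_to, sd_to):
    """Return a function mapping scores from one scale onto another."""
    return lambda s: mu_to + (sd_to / sd_from) * (s - mu_from)

# Group 1 took form X; group 2 took form Y; both received the automated
# fluency anchor V. All scores are simulated.
x  = rng.normal(3.0, 0.8, 500)   # essay scores, form X
v1 = rng.normal(50, 10, 500)     # anchor fluency, group 1
y  = rng.normal(3.4, 0.7, 500)   # essay scores, form Y
v2 = rng.normal(52, 9, 500)      # anchor fluency, group 2

# Chain: form X -> anchor scale -> form Y.
x_to_v = linear_link(x.mean(), x.std(), v1.mean(), v1.std())
v_to_y = linear_link(v2.mean(), v2.std(), y.mean(), y.std())

print("form-X score 3.5 expressed on form Y:", round(v_to_y(x_to_v(3.5)), 2))
```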
Peer reviewed
Direct link
Rupp, André A. – Applied Measurement in Education, 2018
This article discusses critical methodological design decisions for collecting, interpreting, and synthesizing empirical evidence during the design, deployment, and operational quality-control phases for automated scoring systems. The discussion is inspired by work on operational large-scale systems for automated essay scoring but many of the…
Descriptors: Design, Automation, Scoring, Test Scoring Machines
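One concrete example of an operational quality-control check of the kind discussed here is flagging prompts whose human-machine agreement drifts below a preset threshold; the agreement values and threshold below are invented.

```python
# Monitor human-machine agreement per prompt and flag low values for review.
agreement_by_prompt = {
    "prompt_01": 0.81,  # quadratic weighted kappa, human vs. machine
    "prompt_02": 0.74,
    "prompt_03": 0.62,  # suspiciously low -> route to human scoring
}
THRESHOLD = 0.70  # assumed operational minimum

for prompt, qwk in sorted(agreement_by_prompt.items()):
    status = "OK" if qwk >= THRESHOLD else "FLAG: route to human scoring"
    print(f"{prompt}: QWK={qwk:.2f} {status}")
```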