Publication Date
In 2025: 2
Since 2024: 9
Since 2021 (last 5 years): 33
Since 2016 (last 10 years): 62
Since 2006 (last 20 years): 66
Descriptor
Automation: 73
Scores: 73
Scoring: 40
Computer Assisted Testing: 32
Feedback (Response): 22
Foreign Countries: 15
Essays: 14
Writing Evaluation: 13
Correlation: 12
Artificial Intelligence: 11
Models: 11
Author
Wilson, Joshua: 4
Attali, Yigal: 2
Beard, Gaysha: 2
Belur, Vinetha: 2
Danielle S. McNamara: 2
Donnette Narine: 2
Huang, Yue: 2
Jenna W. Kramer: 2
Jing Liu: 2
Julie Cohen: 2
Lee, Hee-Sun: 2
Location
Brazil: 3
China: 3
Germany: 3
South Korea: 3
Australia: 2
Japan: 2
Asia: 1
California (San Francisco): 1
Connecticut: 1
Denmark: 1
Egypt: 1
Assessments and Surveys
Graduate Record Examinations: 3
Program for the International…: 2
Test of English as a Foreign…: 2
Flesch Kincaid Grade Level…: 1
easyCBM: 1
Ercikan, Kadriye; McCaffrey, Daniel F. – Journal of Educational Measurement, 2022
Artificial-intelligence-based automated scoring is often an afterthought, considered only after assessments have been developed, which limits the possibilities for implementing automated scoring solutions. In this article, we provide a review of artificial intelligence (AI)-based methodologies for scoring in educational assessments. We then…
Descriptors: Artificial Intelligence, Automation, Scores, Educational Assessment
Owen Henkel; Hannah Horne-Robinson; Libby Hills; Bill Roberts; Josh McGrane – International Journal of Artificial Intelligence in Education, 2025
This paper reports on a set of three recent experiments utilizing large-scale speech models to assess the oral reading fluency (ORF) of students in Ghana. While ORF is a well-established measure of foundational literacy, assessing it typically requires one-on-one sessions between a student and a trained rater, a process that is time-consuming and…
Descriptors: Foreign Countries, Oral Reading, Reading Fluency, Literacy
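As a companion to this entry, here is a minimal sketch of the quantity ORF assessments typically report, words correct per minute, computed by aligning a (hypothetical) speech-to-text transcript against the reference passage; the function name, the alignment method, and the example values are illustrative assumptions, not details from the study.

from difflib import SequenceMatcher

def words_correct_per_minute(reference: str, transcript: str, seconds: float) -> float:
    # `reference` is the passage the student read aloud; `transcript` is the
    # output of a speech model; `seconds` is the reading time. Correct words
    # are counted as reference words matched in a simple sequence alignment.
    ref_words = reference.lower().split()
    hyp_words = transcript.lower().split()
    matched = sum(block.size for block in
                  SequenceMatcher(None, ref_words, hyp_words).get_matching_blocks())
    return matched / (seconds / 60.0)

# 6 of 8 reference words matched in half a minute -> 12.0 words correct per minute.
print(words_correct_per_minute("once upon a time there was a fox",
                               "once upon time there was fox", 30.0))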
Johnson, Matthew S.; Liu, Xiang; McCaffrey, Daniel F. – Journal of Educational Measurement, 2022
With the increasing use of automated scores in operational testing settings comes the need to understand the ways in which they can yield biased and unfair results. In this paper, we provide a brief survey of some of the ways in which the predictive methods used in automated scoring can lead to biased, and thus unfair, automated scores. After…
Descriptors: Psychometrics, Measurement Techniques, Bias, Automation
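As one concrete instance of the kind of check such a survey motivates, here is a minimal sketch that computes the standardized mean difference between automated and human scores within each examinee subgroup; the function name, scaling choice, and example data are illustrative assumptions rather than the paper's own procedure.

import statistics

def subgroup_smd(human, machine, groups):
    # Standardized mean difference (machine minus human) per subgroup, scaled
    # by the subgroup's human-score standard deviation. A shift that appears
    # for one group but not others is one simple signal of possible bias.
    results = {}
    for g in set(groups):
        h = [hs for hs, gg in zip(human, groups) if gg == g]
        m = [ms for ms, gg in zip(machine, groups) if gg == g]
        sd = statistics.pstdev(h) or 1.0  # guard against zero variance
        results[g] = statistics.mean(mi - hi for hi, mi in zip(h, m)) / sd
    return results

print(subgroup_smd(human=[3, 4, 2, 5], machine=[3, 4, 3, 5], groups=["A", "A", "B", "B"]))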
Andreea Dutulescu; Stefan Ruseti; Denis Iorga; Mihai Dascalu; Danielle S. McNamara – Grantee Submission, 2024
The process of generating challenging and appropriate distractors for multiple-choice questions is a complex and time-consuming task. Existing methods for automated generation have limitations in proposing challenging distractors, or they fail to effectively filter out incorrect choices that closely resemble the correct answer, share synonymous…
Descriptors: Multiple Choice Tests, Artificial Intelligence, Attention, Natural Language Processing
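As an illustration of the filtering step this abstract refers to, here is a minimal sketch that discards candidate distractors overlapping too heavily with the correct answer; a real system would typically use semantic embeddings rather than word overlap, and the threshold is an assumption for illustration.

def filter_distractors(correct, candidates, max_overlap=0.6):
    # Keep only candidates whose Jaccard word overlap with the key stays
    # below the threshold, i.e., drop near-synonyms of the correct answer.
    key_words = set(correct.lower().split())
    kept = []
    for cand in candidates:
        cand_words = set(cand.lower().split())
        union = key_words | cand_words
        overlap = len(key_words & cand_words) / len(union) if union else 0.0
        if overlap < max_overlap:
            kept.append(cand)
    return kept

# The first candidate is nearly synonymous with the key and is dropped.
print(filter_distractors("the mitochondria produce ATP",
                         ["the mitochondria produce energy", "the nucleus stores DNA"]))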
Uto, Masaki; Aomi, Itsuki; Tsutsumi, Emiko; Ueno, Maomi – IEEE Transactions on Learning Technologies, 2023
In automated essay scoring (AES), essays are automatically graded without human raters. Many AES models, based on manually designed features or on various deep neural network (DNN) architectures, have been proposed over the past few decades. Each AES model has unique advantages and characteristics. Therefore, rather than using a single-AES…
Descriptors: Prediction, Scores, Computer Assisted Testing, Scoring
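Since the entry argues for combining AES models rather than relying on any single one, here is a minimal weighted-averaging sketch; the model names and the fallback to a plain mean are illustrative assumptions, and stacking with a meta-model would be a common alternative.

def ensemble_score(predictions, weights=None):
    # `predictions` maps a model name to its predicted essay score.
    # Without weights this reduces to a simple mean of the model scores.
    if weights is None:
        weights = {name: 1.0 for name in predictions}
    total = sum(weights[name] for name in predictions)
    return sum(predictions[name] * weights[name] for name in predictions) / total

# Hypothetical scores from a feature-based model and two neural models -> 3.5
print(ensemble_score({"feature_based": 3.5, "bert_regressor": 4.0, "lstm_regressor": 3.0}))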
Wallace N. Pinto Jr.; Jinnie Shin – Journal of Educational Measurement, 2025
In recent years, the application of explainability techniques to automated essay scoring and automated short-answer grading (ASAG) models, particularly those based on transformer architectures, has gained significant attention. However, the reliability and consistency of these techniques remain underexplored. This study systematically investigates…
Descriptors: Automation, Grading, Computer Assisted Testing, Scoring
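One simple way to operationalize the consistency question this entry raises is to compare the token attributions that two explainability methods assign to the same scored answer; the sketch below uses Spearman rank correlation, and the attribution values are made up for illustration.

from scipy.stats import spearmanr

def attribution_agreement(attr_a, attr_b):
    # Rank correlation between two per-token importance vectors produced by
    # different explainability methods (e.g., gradient- vs. perturbation-based).
    # Low agreement suggests the explanations are not consistent.
    rho, _ = spearmanr(attr_a, attr_b)
    return rho

# Hypothetical attributions over five tokens from two methods -> 0.9
print(round(attribution_agreement([0.9, 0.1, 0.4, 0.3, 0.05],
                                  [0.8, 0.2, 0.5, 0.1, 0.02]), 2))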
Shin, Jinnie; Gierl, Mark J. – Journal of Applied Testing Technology, 2022
Automated Essay Scoring (AES) technologies provide innovative solutions for scoring written essays in a much shorter time span and at a fraction of the current cost. Traditionally, AES emphasized the importance of capturing the "coherence" of writing because abundant evidence indicated the connection between coherence and the overall…
Descriptors: Computer Assisted Testing, Scoring, Essays, Automation
Ferrando, Pere J.; Lorenzo-Seva, Urbano – Educational and Psychological Measurement, 2021
Unit-weight sum scores (UWSSs) are routinely used as estimates of factor scores on the basis of solutions obtained with the nonlinear exploratory factor analysis (EFA) model for ordered-categorical responses. Theoretically, this practice results in a loss of information and accuracy, and is expected to lead to biased estimates. However, the…
Descriptors: Scores, Factor Analysis, Automation, Fidelity
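For orientation, the contrast this entry studies can be written out for the simpler standardized linear one-factor case (a simplification of the nonlinear EFA model for ordered-categorical responses the paper actually treats): with loadings $\lambda_j$ on items $x_j$, the unit-weight sum score and a loading-based factor score estimate differ only in their item weights,

\[
S_{\mathrm{UWSS}} = \sum_{j=1}^{J} x_j,
\qquad
\hat{\theta} = \sum_{j=1}^{J} w_j x_j,
\qquad
w_j \propto \frac{\lambda_j}{1 - \lambda_j^{2}} .
\]

When the loadings are roughly equal, the two weightings order examinees almost identically, which is one reason unit-weight sum scores can remain accurate in practice despite the theoretical loss of information.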
Donnette Narine; Takashi Yamashita; Runcie C. W. Chidebe; Phyllis A. Cummins; Jenna W. Kramer; Rita Karam – Journal of Adult and Continuing Education, 2024
Job automation is a topical issue in a technology-driven labor market. However, greater amounts of human capital (often measured by education and by information-processing skills such as adult literacy) are linked with job security. A knowledgeable and skilled labor force better resists unemployment and/or rebounds from job disruption…
Descriptors: Human Capital, Automation, Job Security, Labor Force Development
Qiao, Chen; Hu, Xiao – IEEE Transactions on Learning Technologies, 2023
Free text answers to short questions can reflect students' mastery of concepts and their relationships relevant to learning objectives. However, automating the assessment of free text answers has been challenging due to the complexity of natural language. Existing studies often predict the scores of free text answers in a "black box"…
Descriptors: Computer Assisted Testing, Automation, Test Items, Semantics
Hrubik, Jessica; Morgan, Denise N. – Middle Grades Research Journal, 2022
Providing timely and helpful writing feedback for student writers, especially those at the middle and high school level, can present an unwieldy challenge for teachers. Yet, feedback is necessary for students' growth as writers. There is increased interest in, and use of, automated writing programs to provide students with writing feedback. However,…
Descriptors: Automation, Essays, Scores, Feedback (Response)
Robert-Mihai Botarleanu; Mihai Dascalu; Scott Andrew Crossley; Danielle S. McNamara – Grantee Submission, 2022
The ability to express oneself concisely and coherently is a crucial skill, both for academic purposes and professional careers. An important aspect to consider in writing is an adequate segmentation of ideas, which in turn requires a proper understanding of where to place paragraph breaks. However, these decisions are often performed…
Descriptors: Paragraph Composition, Text Structure, Automation, Identification
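As a toy illustration of the segmentation decision described here, the sketch below proposes a paragraph break wherever topical overlap between adjacent sentences drops; the paper works with trained language models, so the lexical similarity measure and threshold used here are illustrative assumptions only.

def propose_paragraph_breaks(sentences, min_similarity=0.2):
    # Return indices i such that a paragraph break is proposed after
    # sentences[i], based on word-overlap similarity with the next sentence.
    # Punctuation handling is omitted for brevity.
    breaks = []
    for i in range(len(sentences) - 1):
        a = set(sentences[i].lower().split())
        b = set(sentences[i + 1].lower().split())
        union = a | b
        similarity = len(a & b) / len(union) if union else 0.0
        if similarity < min_similarity:
            breaks.append(i)
    return breaks

print(propose_paragraph_breaks([
    "The essay introduces automated scoring",
    "Automated scoring reduces grading time",
    "Penguins live in the Southern Hemisphere",
]))  # -> [1]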
Casabianca, Jodi M.; Donoghue, John R.; Shin, Hyo Jeong; Chao, Szu-Fu; Choi, Ikkyu – Journal of Educational Measurement, 2023
Using item-response theory to model rater effects provides an alternative solution for rater monitoring and diagnosis, compared to using standard performance metrics. To fit such models, the ratings data must be sufficiently connected in order to estimate rater effects. Due to popular rating designs used in large-scale testing scenarios,…
Descriptors: Item Response Theory, Alternative Assessment, Evaluators, Research Problems
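Because this entry hinges on the ratings data being "sufficiently connected," here is a minimal union-find sketch of the most basic version of that requirement, namely that raters and examinees form a single connected component through shared ratings; the data layout and names are illustrative assumptions.

def rating_design_is_connected(ratings):
    # `ratings` is a list of (rater, examinee) pairs. If the design splits into
    # disconnected components, rater effects in different components cannot be
    # placed on a common scale, so a rater-effect model cannot be fit as-is.
    parent = {}

    def find(x):
        parent.setdefault(x, x)
        while parent[x] != x:
            parent[x] = parent[parent[x]]  # path halving
            x = parent[x]
        return x

    for rater, examinee in ratings:
        parent[find(("rater", rater))] = find(("examinee", examinee))

    roots = {find(node) for node in list(parent)}
    return len(roots) <= 1

# R1 and R2 share examinee E2, so this design is connected -> True
print(rating_design_is_connected([("R1", "E1"), ("R1", "E2"), ("R2", "E2"), ("R2", "E3")]))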
Sami Baral; Eamon Worden; Wen-Chiang Lim; Zhuang Luo; Christopher Santorelli; Ashish Gurung; Neil Heffernan – Grantee Submission, 2024
The effectiveness of feedback in enhancing learning outcomes is well documented within Educational Data Mining (EDM). Prior research has explored methodologies to enhance the effectiveness of feedback to students in various ways. Recent developments in Large Language Models (LLMs) have extended their utility in enhancing automated…
Descriptors: Automation, Scoring, Computer Assisted Testing, Natural Language Processing
Donnette Narine; Takashi Yamashita; Runcie C. W. Chidebe; Phyllis A. Cummins; Jenna W. Kramer; Rita Karam – Grantee Submission, 2023
Job automation is a topical issue in a technology-driven labor market. However, greater amounts of human capital (often measured by education and by information-processing skills such as adult literacy) are linked with job security. A knowledgeable and skilled labor force better resists unemployment and/or rebounds from job disruption…
Descriptors: Human Capital, Automation, Job Security, Labor Force Development