Showing all 13 results
Peer reviewed
Direct link
Blaženka Divjak; Barbi Svetec; Damir Horvat – Journal of Computer Assisted Learning, 2024
Background: Sound learning design should be based on the constructive alignment of intended learning outcomes (LOs), teaching and learning activities, and formative and summative assessment. Assessment validity strongly relies on its alignment with LOs. Valid and reliable formative assessment can be analysed as a predictor of students' academic…
Descriptors: Automation, Formative Evaluation, Test Validity, Test Reliability
Peer reviewed
Direct link
Sebastião Quintas; Mathieu Balaguer; Julie Mauclair; Virginie Woisard; Julien Pinquier – International Journal of Language & Communication Disorders, 2024
Background: Perceptual measures such as speech intelligibility are known to be biased, variable, and subjective; automatic approaches have been seen as a more reliable alternative. On the other hand, automatic approaches tend to lack explainability, which can prevent their widespread clinical use. Aims: In…
Descriptors: Speech Communication, Cancer, Human Body, Intelligibility
Peer reviewed
Direct link
Wallace N. Pinto Jr.; Jinnie Shin – Journal of Educational Measurement, 2025
In recent years, the application of explainability techniques to automated essay scoring and automated short-answer grading (ASAG) models, particularly those based on transformer architectures, has gained significant attention. However, the reliability and consistency of these techniques remain underexplored. This study systematically investigates…
Descriptors: Automation, Grading, Computer Assisted Testing, Scoring
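The reliability question this study raises can be made concrete with a small check: if an explainability technique is consistent, attributions from repeated runs of the same scoring model should rank answer tokens similarly. The sketch below is a hypothetical illustration of such a check, not the authors' procedure; the attribution values and the choice of Spearman rank correlation are assumptions.

```python
# Hypothetical consistency check for explanation outputs: compare token
# attributions from two runs of the same (assumed) ASAG model by rank agreement.
import numpy as np

def spearman(a: np.ndarray, b: np.ndarray) -> float:
    """Spearman rank correlation, computed as the Pearson correlation of ranks
    (assumes no tied values)."""
    ra = a.argsort().argsort().astype(float)
    rb = b.argsort().argsort().astype(float)
    return float(np.corrcoef(ra, rb)[0, 1])

# Toy attribution scores for the same six answer tokens across two runs.
run_1 = np.array([0.42, 0.10, 0.31, 0.05, 0.08, 0.04])
run_2 = np.array([0.40, 0.12, 0.28, 0.07, 0.06, 0.09])

print(f"Rank agreement between runs: {spearman(run_1, run_2):.3f}")
```

A value near 1 would indicate that the two explanations agree on which tokens drove the score; values near 0 would signal the kind of inconsistency the study investigates.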
Peer reviewed
Direct link
Ryoo, Ji Hoon; Park, Sunhee; Suh, Hongwook; Choi, Jaehwa; Kwon, Jongkyum – SAGE Open, 2022
Measurement of cognitive ability has played a key role in cognitive science's effort to understand human intelligence and mind. To keep pace with data-scientific developments related to cognitive neuroscience, there has been demand for a measure that captures cognition over short, repeated time periods. This…
Descriptors: Cognitive Ability, Psychometrics, Test Validity, Test Construction
Peer reviewed
Direct link
LaFlair, Geoffrey T.; Langenfeld, Thomas; Baig, Basim; Horie, André Kenji; Attali, Yigal; von Davier, Alina A. – Journal of Computer Assisted Learning, 2022
Background: Digital-first assessments leverage the affordances of technology in all elements of the assessment process, from design and development to score reporting and evaluation, to create test-taker-centric assessments. Objectives: The goal of this paper is to describe the engineering, machine learning, and psychometric processes and…
Descriptors: Computer Assisted Testing, Affordances, Scoring, Engineering
Peer reviewed
Direct link
Raborn, Anthony W.; Leite, Walter L.; Marcoulides, Katerina M. – Educational and Psychological Measurement, 2020
This study compares automated methods to develop short forms of psychometric scales. Obtaining a short form that has both adequate internal structure and strong validity with respect to relationships with other variables is difficult with traditional methods of short-form development. Metaheuristic algorithms can select items for short forms while…
Descriptors: Test Construction, Automation, Heuristics, Mathematics
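As a rough illustration of the item-selection problem described above, the sketch below searches subsets of a toy 12-item scale for the 5-item short form with the highest Cronbach's alpha. Exhaustive search stands in for the metaheuristics (e.g., genetic or ant colony algorithms) the study compares; the data and sizes are invented.

```python
# Toy short-form selection: pick the 5-item subset with maximal Cronbach's alpha.
import numpy as np
from itertools import combinations

def cronbach_alpha(data: np.ndarray) -> float:
    """Cronbach's alpha for an (n_respondents, n_items) score matrix."""
    k = data.shape[1]
    item_vars = data.var(axis=0, ddof=1)
    total_var = data.sum(axis=1).var(ddof=1)
    return (k / (k - 1)) * (1 - item_vars.sum() / total_var)

rng = np.random.default_rng(0)
full_form = rng.integers(1, 6, size=(200, 12)).astype(float)  # 200 respondents, 12 Likert items

best = max(combinations(range(12), 5),
           key=lambda items: cronbach_alpha(full_form[:, list(items)]))
print("Selected items:", best)
```

Metaheuristics matter because exhaustive search becomes infeasible as scales grow (choosing 10 items from 60 already yields over 75 billion subsets), and because real selection also weighs validity with respect to external variables, as the abstract notes.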
Peer reviewed
Direct link
Rao, Dhawaleswar; Saha, Sujan Kumar – IEEE Transactions on Learning Technologies, 2020
Automatic multiple choice question (MCQ) generation from a text is a popular research area. MCQs are widely accepted for large-scale assessment in various domains and applications. However, manual generation of MCQs is expensive and time-consuming. Therefore, researchers have been drawn to automatic MCQ generation since the late 1990s.…
Descriptors: Multiple Choice Tests, Test Construction, Automation, Computer Software
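One step common to the MCQ-generation pipelines such surveys cover is distractor selection. The toy sketch below ranks candidate distractors by surface string similarity to the answer key; production systems typically use semantic similarity models instead, so the example item, the candidate pool, and the similarity choice here are all assumptions.

```python
# Toy distractor selection for a cloze-style MCQ using surface similarity.
import difflib

def pick_distractors(answer: str, pool: list[str], n: int = 3) -> list[str]:
    """Rank candidates by string similarity to the answer; return the top n."""
    ranked = sorted((w for w in pool if w != answer),
                    key=lambda w: difflib.SequenceMatcher(None, answer, w).ratio(),
                    reverse=True)
    return ranked[:n]

stem = "Plants take in ____ from the air during photosynthesis."
print(stem)
print(pick_distractors("carbon dioxide",
                       ["carbon monoxide", "oxygen", "nitrogen", "hydrogen"]))
```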
Peer reviewed
Direct link
Davis, Larry; Papageorgiou, Spiros – Assessment in Education: Principles, Policy & Practice, 2021
Human raters and machine scoring systems potentially have complementary strengths in evaluating language ability; specifically, it has been suggested that automated systems might be used to make consistent measurements of specific linguistic phenomena, whilst humans evaluate more global aspects of performance. We report on an empirical study that…
Descriptors: Scoring, English for Academic Purposes, Oral English, Speech Tests
Peer reviewed
Direct link
Martínez-Huertas, José Á.; Jastrzebska, Olga; Olmos, Ricardo; León, José A. – Assessment & Evaluation in Higher Education, 2019
Automated summary evaluation is proposed as an alternative to rubrics and multiple-choice tests in knowledge assessment. Inbuilt rubric is a recent Latent Semantic Analysis (LSA) method that implements rubrics in an artificially generated semantic space. It was compared with classical cosine-based LSA methods for assessing knowledge in a…
Descriptors: Automation, Scoring Rubrics, Alternative Assessment, Test Reliability
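The classical cosine-based LSA scoring this paper benchmarks against reduces to comparing document vectors in a latent semantic space. A minimal sketch follows, assuming the vectors come from a pre-trained LSA (truncated SVD) model; the five-dimensional values are invented for illustration.

```python
# Cosine-based LSA summary scoring: similarity between a student summary and
# a gold-standard text, both represented as (assumed) LSA document vectors.
import numpy as np

def cosine(u: np.ndarray, v: np.ndarray) -> float:
    """Cosine similarity between two LSA document vectors."""
    return float(np.dot(u, v) / (np.linalg.norm(u) * np.linalg.norm(v)))

# Toy 5-dimensional LSA vectors (invented values).
student_summary = np.array([0.8, 0.1, 0.3, 0.0, 0.2])
gold_standard = np.array([0.7, 0.2, 0.4, 0.1, 0.1])

print(f"Cosine-based knowledge score: {cosine(student_summary, gold_standard):.3f}")
```

The inbuilt-rubric method evaluated in the paper goes further, building rubric criteria into the semantic space itself rather than relying on similarity to a single gold-standard vector.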
Peer reviewed
PDF on ERIC: Download full text
Li, Haiying; Gobert, Janice; Dickler, Rachel – International Educational Data Mining Society, 2017
Scientific explanations, which include a claim, evidence, and reasoning (CER), are frequently used to measure students' deep conceptual understandings of science. In this study, we developed an automated scoring approach for the CER that students constructed as a part of virtual inquiry (e.g., formulating questions, analyzing data, and warranting…
Descriptors: Automation, Science Instruction, Inquiry, Educational Assessment
Peer reviewed
PDF on ERIC: Download full text
Supraja, S.; Hartman, Kevin; Tatinati, Sivanagaraja; Khong, Andy W. H. – International Educational Data Mining Society, 2017
Expertise in a domain of knowledge is characterized by a greater fluency for solving problems within that domain and a greater facility for transferring the structure of that knowledge to other domains. Deliberate practice and the feedback that takes place during practice activities serve as gateways for developing domain expertise. However, there…
Descriptors: Test Items, Outcomes of Education, Feedback (Response), Models
Peer reviewed
PDF on ERIC: Download full text
Sheehan, Kathleen M. – ETS Research Report Series, 2016
The "TextEvaluator"® text analysis tool is a fully automated text complexity evaluation tool designed to help teachers and other educators select texts that are consistent with the text complexity guidelines specified in the Common Core State Standards (CCSS). This paper provides an overview of the TextEvaluator measurement approach and…
Descriptors: Automation, Evaluation Methods, Reading Material Selection, Common Core State Standards
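TextEvaluator's actual feature set is far richer than any single formula, so the sketch below only illustrates the general idea of formula-based text-complexity measurement, using the classic Flesch-Kincaid grade level with a naive syllable counter; it is not TextEvaluator's method.

```python
# Classic Flesch-Kincaid grade level as a toy text-complexity measure.
import re

def count_syllables(word: str) -> int:
    """Naive syllable estimate: count runs of vowels in the word."""
    return max(1, len(re.findall(r"[aeiouy]+", word.lower())))

def fk_grade(text: str) -> float:
    sentences = max(1, len(re.findall(r"[.!?]+", text)))
    words = re.findall(r"[A-Za-z']+", text)
    syllables = sum(count_syllables(w) for w in words)
    return 0.39 * (len(words) / sentences) + 11.8 * (syllables / len(words)) - 15.59

print(f"Grade level: {fk_grade('The cat sat on the mat. It was warm.'):.1f}")
```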
Peer reviewed
Direct link
Dirlikov, Benjamin; Younes, Laurent; Nebel, Mary Beth; Martinelli, Mary Katherine; Tiedemann, Alyssa Nicole; Koch, Carolyn A.; Fiorilli, Diana; Bastian, Amy J.; Denckla, Martha Bridge; Miller, Michael I.; Mostofsky, Stewart H. – Journal of Occupational Therapy, Schools & Early Intervention, 2017
This study presents construct validity for a novel automated morphometric and kinematic handwriting assessment, including (1) convergent validity, establishing the reliability of the automated measures against the traditional, manually derived Minnesota Handwriting Assessment (MHA), and (2) discriminant validity, establishing that the automated methods distinguish…
Descriptors: Handwriting, Evaluation Methods, Children, Preadolescents