Showing 1 to 15 of 809 results
Peer reviewed
Direct link
Shun-Fu Hu; Amery D. Wu; Jake Stone – Journal of Educational Measurement, 2025
Scoring high-dimensional assessments (e.g., > 15 traits) can be a challenging task. This paper introduces the multilabel neural network (MNN) as a scoring method for high-dimensional assessments. Additionally, it demonstrates how MNN can score the same test responses to maximize different performance metrics, such as accuracy, recall, or…
Descriptors: Tests, Testing, Scores, Test Construction
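The multilabel idea behind MNN-style scoring can be sketched generically: one sigmoid output per trait, each thresholded independently, so the same probability outputs can be re-thresholded to favor accuracy or recall. This is a minimal illustration with random weights, not the authors' trained model; the layer sizes and threshold are assumptions.

```python
import numpy as np

rng = np.random.default_rng(0)

def sigmoid(z):
    return 1.0 / (1.0 + np.exp(-z))

# Illustrative multilabel network: one hidden layer, one sigmoid output
# per trait. Weights are random here; a real scoring model would be
# trained on rated test responses.
n_items, n_hidden, n_traits = 40, 16, 18
W1 = rng.normal(size=(n_items, n_hidden))
W2 = rng.normal(size=(n_hidden, n_traits))

def score(responses, threshold=0.5):
    """Map a 0/1 item-response vector to per-trait probabilities and
    binary trait classifications."""
    h = np.tanh(responses @ W1)
    p = sigmoid(h @ W2)               # independent probability per trait
    return p, (p >= threshold).astype(int)

responses = rng.integers(0, 2, size=n_items)
probs, labels = score(responses)
print(labels.shape)                   # one classification per trait
```

Lowering `threshold` trades precision for recall, which is how one set of probability outputs can be tuned toward different performance metrics.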
Peer reviewed
Direct link
Kylie Gorney; Sandip Sinharay – Journal of Educational Measurement, 2025
Although there exists an extensive amount of research on subscores and their properties, limited research has been conducted on categorical subscores and their interpretations. In this paper, we focus on the claim of Feinberg and von Davier that categorical subscores are useful for remediation and instructional purposes. We investigate this claim…
Descriptors: Tests, Scores, Test Interpretation, Alternative Assessment
Angela G. Cooley – ProQuest LLC, 2024
Educators rely on a conglomeration of assessment instruments to appraise, measure, monitor, and document evidence of students' academic readiness, learning progressions, acquisition of interdisciplinary knowledge, and extensive educational needs. The Mississippi Academic Assessment Program for Mathematics (MAAP-M) and the i-Ready Assessment…
Descriptors: Middle Schools, Grade 6, Grade 7, Grade 8
Peer reviewed
Direct link
Lauren Prather; Nancy Creaghead; Jennifer Vannest; Lisa Hunter; Amy Hobek; Tamika Odum; Mekibib Altaye; Juanita Lackey – Perspectives of the ASHA Special Interest Groups, 2025
Purpose: The lack of appropriate assessments affects populations presumed to be most at risk for speech and language concerns, one of them being children with a history of preterm birth. This study aims to examine whether cultural bias is present in two currently available language tests for Black children under 3 years of age: the Communication…
Descriptors: African American Children, Premature Infants, Evaluation Methods, Language Tests
Peer reviewed
PDF on ERIC Download full text
Deschênes, Marie-France; Dionne, Éric; Dorion, Michelle; Grondin, Julie – Practical Assessment, Research & Evaluation, 2023
The use of the aggregate scoring method for scoring concordance tests requires the weighting of test items to be derived from the performance of a group of experts who take the test under the same conditions as the examinees. However, the average score of experts constituting the reference panel remains a critical issue in the use of these tests.…
Descriptors: Scoring, Tests, Evaluation Methods, Test Items
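The aggregate scoring method described above weights each response option by the fraction of panel experts who chose it, scaled so the modal choice earns full credit. A minimal sketch, using hypothetical panel data (the items, options, and panel size below are invented for illustration):

```python
from collections import Counter

# Hypothetical expert-panel answers for three concordance-test items.
expert_answers = {
    "item1": [1, 1, 1, 0, 2, 1, 0, 1, 1, 1],
    "item2": [0, 0, -1, 0, 0, 1, 0, 0, -1, 0],
    "item3": [2, 2, 1, 2, 2, 2, 2, 1, 2, 2],
}

def item_weights(panel):
    """Aggregate-scoring weights: credit for an option is proportional to
    how many experts chose it, scaled so the modal option earns 1.0."""
    counts = Counter(panel)
    modal = max(counts.values())
    return {option: n / modal for option, n in counts.items()}

def score(examinee):
    """Sum the examinee's per-item credit across all answered items."""
    return sum(
        item_weights(expert_answers[item]).get(answer, 0.0)
        for item, answer in examinee.items()
    )

# An examinee who matches the panel mode on every item gets full credit.
print(score({"item1": 1, "item2": 0, "item3": 2}))  # → 3.0
```

Non-modal answers earn partial credit (e.g., option 0 on item1 earns 2/7), which is exactly why the composition and average performance of the reference panel matter so much.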
Peer reviewed
Direct link
An, Lily Shiao; Ho, Andrew Dean; Davis, Laurie Laughlin – Educational Measurement: Issues and Practice, 2022
Technical documentation for educational tests focuses primarily on properties of individual scores at single points in time. Reliability, standard errors of measurement, item parameter estimates, fit statistics, and linking constants are standard technical features that external stakeholders use to evaluate items and individual scale scores.…
Descriptors: Documentation, Scores, Evaluation Methods, Longitudinal Studies
Peer reviewed
Direct link
Wallace N. Pinto Jr.; Jinnie Shin – Journal of Educational Measurement, 2025
In recent years, the application of explainability techniques to automated essay scoring and automated short-answer grading (ASAG) models, particularly those based on transformer architectures, has gained significant attention. However, the reliability and consistency of these techniques remain underexplored. This study systematically investigates…
Descriptors: Automation, Grading, Computer Assisted Testing, Scoring
Peer reviewed
PDF on ERIC Download full text
Emre Zengin; Yasemin Karal – International Journal of Assessment Tools in Education, 2024
This study was carried out to develop a test to assess algorithmic thinking skills. To this end, the twelve steps suggested by Downing (2006) were adopted. Throughout test development, 24 middle school sixth-grade students and eight experts in different areas took part in the project tasks as needed. The test was given to 252 students…
Descriptors: Grade 6, Algorithms, Thinking Skills, Evaluation Methods
Peer reviewed
PDF on ERIC Download full text
Kelsey Nason; Christine E. DeMars – Research & Practice in Assessment, 2023
Universities administer assessments for accountability and program improvement, but student effort during these assessments is often low because examinees perceive minimal consequences. The effects of low effort are compounded by the assessment context. This project investigates validity concerns caused by minimal effort and exacerbated by contextual factors. Systematic…
Descriptors: Test Validity, COVID-19, Pandemics, Environmental Influences
Steven Michael Carlo – Annenberg Institute for School Reform at Brown University, 2024
The National Assessment of Educational Progress (NAEP) has tested the civic, or citizenship, knowledge of students across the nation at irregular intervals since its inception. Despite advancements in reading and mathematics, evidenced by NAEP results, civics proficiency has remained…
Descriptors: Civics, Scores, National Competency Tests, Citizenship Education
Peer reviewed
Direct link
Carly Oddleifson; Stephen Kilgus; David A. Klingbeil; Alexander D. Latham; Jessica S. Kim; Ishan N. Vengurlekar – Grantee Submission, 2025
The purpose of this study was to conduct a conceptual replication of Pendergast et al.'s (2018) study that examined the diagnostic accuracy of a nomogram procedure, also known as a naive Bayesian approach. The specific naive Bayesian approach combined academic and social-emotional and behavioral (SEB) screening data to predict student performance…
Descriptors: Bayesian Statistics, Accuracy, Social Emotional Learning, Diagnostic Tests
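The nomogram (naive Bayesian) procedure referenced above multiplies the prior odds of risk by the likelihood ratio of each screening result, treating the screeners as conditionally independent. A short sketch with hypothetical base rate, sensitivity, and specificity values (none taken from the study):

```python
def likelihood_ratio(sensitivity, specificity, positive):
    """LR+ = sens / (1 - spec) for a positive screen;
    LR- = (1 - sens) / spec for a negative screen."""
    if positive:
        return sensitivity / (1 - specificity)
    return (1 - sensitivity) / specificity

def posterior_risk(base_rate, screens):
    """Naive Bayes (nomogram) update: multiply prior odds by each
    screen's likelihood ratio, assuming the screeners are conditionally
    independent given true status (the 'naive' assumption)."""
    odds = base_rate / (1 - base_rate)
    for sens, spec, positive in screens:
        odds *= likelihood_ratio(sens, spec, positive)
    return odds / (1 + odds)

# 20% base rate; the academic screener flags the student (positive),
# the SEB screener does not (negative). All values are hypothetical.
p = posterior_risk(0.20, [(0.85, 0.90, True), (0.75, 0.80, False)])
print(round(p, 3))  # → 0.399
```

Combining the two screens this way is what lets academic and SEB data jointly sharpen (or temper) a single risk prediction.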
Peer reviewed
PDF on ERIC Download full text
Lynch, Sarah – Practical Assessment, Research & Evaluation, 2022
In today's digital age, tests are increasingly being delivered on computers. Many of these computer-based tests (CBTs) have been adapted from paper-based tests (PBTs). However, this change in mode of test administration has the potential to introduce construct-irrelevant variance, affecting the validity of score interpretations. Because of this,…
Descriptors: Computer Assisted Testing, Tests, Scores, Scoring
Peer reviewed
Direct link
Lauren E. Bates; Sarah J. Myers; Edward L. DeLosh; Matthew G. Rhodes – Psychology Learning and Teaching, 2025
The present work assessed a quizzing method that combines the benefits of retrieval practice and feedback, whereby learners must continue taking quizzes until they achieve a perfect score with feedback provided (i.e., "mastery quizzing"). Across four experiments (n = 952; age 18-76, M = 37.10, SD = 11.61; 50% female, 48% male, 2% other…
Descriptors: Mastery Tests, Retention (Psychology), Evaluation Methods, Adults
Peer reviewed
Direct link
Stephen M. Leach; Jason C. Immekus; Jeffrey C. Valentine; Prathiba Batley; Dena Dossett; Tamara Lewis; Thomas Reece – Assessment for Effective Intervention, 2025
Educators commonly use school climate survey scores to inform and evaluate interventions for equitably improving learning and reducing educational disparities. Unfortunately, validity evidence to support these (and other) score uses often falls short. In response, Whitehouse et al. proposed a collaborative, two-part validity testing framework for…
Descriptors: School Surveys, Measurement, Hierarchical Linear Modeling, Educational Environment
Peer reviewed
Direct link
Chen, Michelle Y.; Liu, Yan; Zumbo, Bruno D. – Educational and Psychological Measurement, 2020
This study introduces a novel differential item functioning (DIF) method based on propensity score matching that tackles two challenges in analyzing performance assessment data, that is, continuous task scores and lack of a reliable internal variable as a proxy for ability or aptitude. The proposed DIF method consists of two main stages. First,…
Descriptors: Probability, Scores, Evaluation Methods, Test Items
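The two-stage logic of a propensity-score-matching DIF analysis can be sketched on simulated data: stage 1 estimates each examinee's probability of focal-group membership from matching variables; stage 2 matches focal to reference examinees on that score and compares mean continuous task scores. Everything below (sample size, covariates, the gradient-descent fit) is an illustrative assumption, not the authors' procedure.

```python
import numpy as np

rng = np.random.default_rng(1)

# Simulated data: a continuous task score, a group indicator, and two
# covariates serving as the ability proxy. The groups differ on the
# covariates but there is no true DIF, so matching should shrink the
# observed score gap.
n = 400
group = rng.integers(0, 2, size=n)                     # focal = 1, reference = 0
covs = rng.normal(size=(n, 2)) + 0.5 * group[:, None]
task = covs.sum(axis=1) + rng.normal(scale=0.5, size=n)
raw_gap = task[group == 1].mean() - task[group == 0].mean()

# Stage 1: propensity scores (probability of focal membership given the
# covariates) via a logistic regression fit by plain gradient descent.
X = np.column_stack([np.ones(n), covs])
w = np.zeros(X.shape[1])
for _ in range(3000):
    p = 1.0 / (1.0 + np.exp(-X @ w))
    w -= 0.1 * X.T @ (p - group) / n
prop = 1.0 / (1.0 + np.exp(-X @ w))

# Stage 2: nearest-neighbor matching on the propensity score (with
# replacement), then compare mean task scores across matched groups.
focal = np.where(group == 1)[0]
ref = np.where(group == 0)[0]
matches = ref[np.abs(prop[focal][:, None] - prop[ref][None, :]).argmin(axis=1)]
matched_gap = task[focal].mean() - task[matches].mean()

print(f"raw gap {raw_gap:.2f}, matched gap {matched_gap:.2f}")
```

Because the covariates, not group membership per se, drive the scores here, the matched gap is much smaller than the raw gap; a residual matched gap would be the DIF signal.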