Showing 1 to 15 of 88 results
Peer reviewed
Tiffany Wu; Christina Weiland; Meghan McCormick; JoAnn Hsueh; Catherine Snow; Jason Sachs – Grantee Submission, 2024
The Hearts and Flowers (H&F) task is a computerized executive functioning (EF) assessment that has been used to measure EF from early childhood to adulthood. It provides data on accuracy and reaction time (RT) across three different task blocks (hearts, flowers, and mixed). However, there is a lack of consensus in the field on how to score the…
Descriptors: Scoring, Executive Function, Kindergarten, Young Children
Peer reviewed
Plucker, Jonathan A. – Creativity Research Journal, 2023
In 1998, Plucker and Runco provided an overview of creativity assessment, noting current issues (fluency confounds, generality vs. specificity), recent advances (predictive validity, implicit theories), and promising future directions (moving beyond divergent thinking measures, reliance on batteries of assessments, translation into practice). In…
Descriptors: Creativity, Creativity Tests, Creative Thinking, Semantics
Selcuk Acar; Kelly Berthiaume; Katalin Grajzel; Denis Dumas; Charles Flemister; Peter Organisciak – Gifted Child Quarterly, 2023
In this study, we applied different text-mining methods to the originality scoring of the Unusual Uses Test (UUT) and Just Suppose Test (JST) from the Torrance Tests of Creative Thinking (TTCT)--Verbal. Responses from 102 and 123 participants who completed Form A and Form B, respectively, were scored using three different text-mining methods. The…
Descriptors: Creative Thinking, Creativity Tests, Scoring, Automation
Peer reviewed
Matta, Michael; Mercer, Sterett H.; Keller-Margulis, Milena A. – School Psychology, 2023
Recent advances in automated writing evaluation have enabled educators to use automated writing quality scores to improve assessment feasibility. However, there has been limited investigation of bias for automated writing quality scores with students from diverse racial or ethnic backgrounds. The use of biased scores could contribute to…
Descriptors: Bias, Automation, Writing Evaluation, Scoring
Michael Matta; Sterett H. Mercer; Milena A. Keller-Margulis – Grantee Submission, 2023
Recent advances in automated writing evaluation have enabled educators to use automated writing quality scores to improve assessment feasibility. However, there has been limited investigation of bias for automated writing quality scores with students from diverse racial or ethnic backgrounds. The use of biased scores could contribute to…
Descriptors: Bias, Automation, Writing Evaluation, Scoring
Peer reviewed
Pamela R. Buckley; Katie Massey Combs; Karen M. Drewelow; Brittany L. Hubler; Marion Amanda Lain – Evaluation Review, 2025
As evidence-based interventions are scaled, fidelity of implementation, and thus effectiveness, often wanes. Validated fidelity measures can improve researchers' ability to attribute outcomes to the intervention and help practitioners feel more confident in implementing the intervention as intended. We aim to provide a model for the validation of…
Descriptors: Middle School Students, Middle School Teachers, Evidence Based Practice, Program Development
Michael Matta; Milena A. Keller-Margulis; Sterett H. Mercer – Grantee Submission, 2022
Although researchers have investigated technical adequacy and usability of written-expression curriculum-based measures (WE-CBM), the economic implications of different scoring approaches have largely been ignored. The absence of such knowledge can undermine the effective allocation of resources and lead to the adoption of suboptimal measures for…
Descriptors: Cost Effectiveness, Scoring, Automation, Writing Tests
Benton, Tom – Research Matters, 2019
For many practical purposes, it is often assumed that the quality of a marker is directly related to their seniority. At its extreme, the assumption is that the most senior marker (the Principal Examiner) is always right even in cases where large numbers of junior markers have a collectively different opinion regarding the mark that should be…
Descriptors: Examiners, Scoring, Predictive Validity, Scores
Peer reviewed
Yik, Brandon J.; Dood, Amber J.; Cruz-Ramirez de Arellano, Daniel; Fields, Kimberly B.; Raker, Jeffrey R. – Chemistry Education Research and Practice, 2021
Acid-base chemistry is a key reaction motif taught in postsecondary organic chemistry courses. More specifically, concepts from the Lewis acid-base model are broadly applicable to understanding mechanistic ideas such as electron density, nucleophilicity, and electrophilicity; thus, the Lewis model is fundamental to explaining an array of reaction…
Descriptors: Artificial Intelligence, Models, Formative Evaluation, Organic Chemistry
Carla Wood; Miguel Garcia-Salas; Christopher Schatschneider – Grantee Submission, 2023
Purpose: The aim of this study was to advance the analysis of written language transcripts by validating an automated scoring procedure using an automated open-access tool for calculating morphological complexity (MC) from written transcripts. Method: The MC of words in 146 written responses of students in fifth grade was assessed using two…
Descriptors: Automation, Computer Assisted Testing, Scoring, Computation
Benton, Tom – Cambridge Assessment, 2018
One of the questions with the longest history in educational assessment is whether it is possible to increase the reliability of a test simply by altering the way in which scores on individual test items are combined to make the overall test score. Most usually, the score available on each item is communicated to the candidate within a question…
Descriptors: Test Items, Scoring, Predictive Validity, Test Reliability
Neitzel, Jennifer; Early, Diane; Sideris, John; LaForrett, Doré; Abel, Michael B.; Soli, Margaret; Davidson, Dawn L.; Haboush-Deloye, Amanda; Hestenes, Linda L.; Jenson, Denise; Johnson, Cindy; Kalas, Jennifer; Mamrak, Angela; Masterson, Marie L.; Mims, Sharon U.; Oya, Patti; Philson, Bobbi; Showalter, Megan; Warner-Richter, Mallory; Kortright Wood, Jill – Journal of Early Childhood Research, 2019
The Early Childhood Environment Rating Scales, including the "Early Childhood Environment Rating Scale--Revised" (Harms et al., 2005) and the "Early Childhood Environment Rating Scale, Third Edition" (Harms et al., 2015) are the most widely used observational assessments in early childhood learning environments. The most recent…
Descriptors: Rating Scales, Early Childhood Education, Educational Quality, Scoring
Zeng, Songtian – ProQuest LLC, 2017
Over 30 states have adopted the Early Childhood Environmental Rating Scale-Revised (ECERS-R) as a component of their program quality assessment systems, but the use of ECERS-R on such a large scale has raised important questions about implementation. One of the most pressing questions centers upon decisions users must make between two scoring…
Descriptors: Rating Scales, Scoring, Validity, Comparative Analysis
Peer reviewed
Full text PDF available on ERIC
Finn, Bridgid; Wendler, Cathy; Ricker-Pedley, Kathryn L.; Arslan, Burcu – ETS Research Report Series, 2018
This report investigates whether the time between scoring sessions has an influence on operational and nonoperational scoring accuracy. The study evaluates raters' scoring accuracy on constructed-response essay responses for the GRE® General Test. Binomial linear mixed-effect models are presented that evaluate how the effect of various…
Descriptors: Intervals, Scoring, Accuracy, Essay Tests
Peer reviewed
Gil, Verónica; de León, Sara C.; Jiménez, Juan E. – Reading & Writing Quarterly, 2021
The main objective of this study was to analyze what writing measures, or combination of measures, would be most appropriate for early detection of students at risk of presenting future difficulties in learning to write in Spanish, in terms of tasks, scoring indices (dependent, independent and accurate-precision) and duration. A sample of 231…
Descriptors: Screening Tests, Disability Identification, Curriculum Based Assessment, Scoring