Publication Date: In 2025 (2); Since 2024 (10); Since 2021, last 5 years (39); Since 2016, last 10 years (107); Since 2006, last 20 years (132)
Source: Grantee Submission (132)
Audience: Researchers (1)
What Works Clearinghouse Rating: Meets WWC Standards with or without Reservations (1)
Showing 1 to 15 of 132 results
Peer reviewed
Scott A. Crossley; Minkyung Kim; Quian Wan; Laura K. Allen; Rurik Tywoniw; Danielle S. McNamara – Grantee Submission, 2025
This study examines the potential to use non-expert, crowd-sourced raters to score essays by comparing expert raters' and crowd-sourced raters' assessments of writing quality. Expert raters and crowd-sourced raters scored 400 essays using a standardised holistic rubric and comparative judgement (pairwise ratings) scoring techniques, respectively.…
Descriptors: Writing Evaluation, Essays, Novices, Knowledge Level
Peer reviewed
Xin Qiao; Akihito Kamata; Cornelis Potgieter – Grantee Submission, 2024
Oral reading fluency (ORF) assessments are commonly used to screen at-risk readers and evaluate interventions' effectiveness as curriculum-based measurements. Similar to the standard practice in item response theory (IRT), calibrated passage parameter estimates are currently used as if they were population values in model-based ORF scoring.…
Descriptors: Oral Reading, Reading Fluency, Error Patterns, Scoring
Xin Qiao; Akihito Kamata; Cornelis Potgieter – Grantee Submission, 2023
Oral reading fluency (ORF) assessments are commonly used to screen at-risk readers and to evaluate the effectiveness of interventions as curriculum-based measurements. As with other assessments, equating ORF scores becomes necessary when we want to compare ORF scores from different test forms. Recently, Kara et al. (2023) proposed a model-based…
Descriptors: Error of Measurement, Oral Reading, Reading Fluency, Equated Scores
Peer reviewed
Zhang, Haoran; Litman, Diane – Grantee Submission, 2020
While automated essay scoring (AES) can reliably grade essays at scale, automated writing evaluation (AWE) additionally provides formative feedback to guide essay revision. However, a neural AES typically does not provide useful feature representations for supporting AWE. This paper presents a method for linking AWE and neural AES, by extracting…
Descriptors: Computer Assisted Testing, Scoring, Essay Tests, Writing Evaluation
Peer reviewed
Regan Mozer; Luke Miratrix – Grantee Submission, 2024
For randomized trials that use text as an outcome, traditional approaches for assessing treatment impact require that each document first be manually coded for constructs of interest by trained human raters. This process, the current standard, is both time-consuming and limiting: even the largest human coding efforts are typically constrained to…
Descriptors: Artificial Intelligence, Coding, Efficiency, Statistical Inference
Peer reviewed
Stefan Ruseti; Ionut Paraschiv; Mihai Dascalu; Danielle S. McNamara – Grantee Submission, 2024
Automated Essay Scoring (AES) is a well-studied problem in Natural Language Processing applied in education. Solutions vary from handcrafted linguistic features to large Transformer-based models, implying a significant effort in feature extraction and model implementation. We introduce a novel Automated Machine Learning (AutoML) pipeline…
Descriptors: Computer Assisted Testing, Scoring, Automation, Essays
Peer reviewed
Tiffany Wu; Christina Weiland; Meghan McCormick; JoAnn Hsueh; Catherine Snow; Jason Sachs – Grantee Submission, 2024
The Hearts and Flowers (H&F) task is a computerized executive functioning (EF) assessment that has been used to measure EF from early childhood to adulthood. It provides data on accuracy and reaction time (RT) across three different task blocks (hearts, flowers, and mixed). However, there is a lack of consensus in the field on how to score the…
Descriptors: Scoring, Executive Function, Kindergarten, Young Children
Laura K. Allen; Arthur C. Graesser; Danielle S. McNamara – Grantee Submission, 2023
Assessments of natural language can provide vast information about individuals' thoughts and cognitive processes, but they often rely on time-intensive human scoring, deterring researchers from collecting these sources of data. Natural language processing (NLP) gives researchers the opportunity to implement automated textual analyses across a…
Descriptors: Psychological Studies, Natural Language Processing, Automation, Research Methodology
Peer reviewed
Sami Baral; Eamon Worden; Wen-Chiang Lim; Zhuang Luo; Christopher Santorelli; Ashish Gurung; Neil Heffernan – Grantee Submission, 2024
The effectiveness of feedback in enhancing learning outcomes is well documented within Educational Data Mining (EDM). Prior research has explored various methodologies for enhancing the effectiveness of feedback to students. Recent developments in Large Language Models (LLMs) have extended their utility in enhancing automated…
Descriptors: Automation, Scoring, Computer Assisted Testing, Natural Language Processing
Peer reviewed
Chenchen Ma; Jing Ouyang; Chun Wang; Gongjun Xu – Grantee Submission, 2024
Survey instruments and assessments are frequently used in many domains of social science. When the constructs that these assessments try to measure become multifaceted, multidimensional item response theory (MIRT) provides a unified framework and convenient statistical tool for item analysis, calibration, and scoring. However, the computational…
Descriptors: Algorithms, Item Response Theory, Scoring, Accuracy
Oddis, Kyle; Burstein, Jill; McCaffrey, Daniel F.; Holtzman, Steven L. – Grantee Submission, 2022
Background: Researchers interested in quantitative measures of student "success" in writing cannot completely control for contextual factors that are local and site-based (i.e., in the context of a specific instructor's writing classroom at a specific institution). (In)ability to control for curriculum in studies of student writing…
Descriptors: Writing Instruction, Writing Achievement, Curriculum Evaluation, College Instruction
Michael Matta; Sterett H. Mercer; Milena A. Keller-Margulis – Grantee Submission, 2023
Recent advances in automated writing evaluation have enabled educators to use automated writing quality scores to improve assessment feasibility. However, there has been limited investigation of bias for automated writing quality scores with students from diverse racial or ethnic backgrounds. The use of biased scores could contribute to…
Descriptors: Bias, Automation, Writing Evaluation, Scoring
Peer reviewed
McCarthy, Kathryn S.; Magliano, Joseph P.; Snyder, Jacob O.; Kenney, Elizabeth A.; Newton, Natalie N.; Perret, Cecile A.; Knezevic, Melanie; Allen, Laura K.; McNamara, Danielle S. – Grantee Submission, 2021
The objective in the current paper is to examine the processes of how our research team negotiated meaning using an iterative design approach as we established, developed, and refined a rubric to capture comprehension processes and strategies evident in students' verbal protocols. The overarching project comprises multiple data sets, multiple…
Descriptors: Scoring Rubrics, Interrater Reliability, Design, Learning Processes
Peer reviewed
Zewei Tian; Lief Esbenshade; Alex Liu; Shawon Sarkar; Zachary Zhang; Kevin He; Min Sun – Grantee Submission, 2025
The Colleague AI platform introduces a groundbreaking Rubric Generation function designed to streamline how educators create and use rubrics for instructional and assessment purposes. This feature uses artificial intelligence (AI) to produce standards-based rubrics tailored to course content for formative and summative evaluations. By automating…
Descriptors: Scoring Rubrics, Artificial Intelligence, Futures (of Society), Teaching Methods
Crawford, Angela R.; Johnson, Evelyn S.; Zheng, Yuzhu; Moylan, Laura A. – Grantee Submission, 2020
This study describes the initial psychometric evaluation of the Understanding Procedures observation rubric for use as an instrument for feedback to teachers working in mathematics intervention settings. The rubric translates the research base from mathematics education and special education into practice in the form of specific items and…
Descriptors: Psychometrics, Scoring Rubrics, Feedback (Response), Mathematics Instruction