Showing 1 to 15 of 21 results
Peer reviewed | PDF on ERIC
Mayo Beltrán, Alba Mª; Fernández Sánchez, María Jesús; Montanero Fernández, Manuel; Martín Parejo, David – Practical Assessment, Research & Evaluation, 2022
This study compares the effects of two resources, a paper rubric (CR) and the comment bubbles of a word processor (CCB), used to support peer co-evaluation of expository texts in primary education. A total of 57 students wrote a text which, after a peer co-evaluation process, was rewritten. To analyze the improvements in the texts, we used a rubric…
Descriptors: Scoring Rubrics, Evaluation Methods, Word Processing, Computer Software
Peer reviewed | PDF on ERIC
Schoepp, Kevin; Danaher, Maurice; Kranov, Ashley Ater – Practical Assessment, Research & Evaluation, 2018
Within higher education, rubric use is expanding. Whereas some years ago the topic of rubrics may have been of interest only to faculty in colleges of education, in recent years the focus on teaching and learning and the emphasis from accrediting bodies have elevated the importance of rubrics across disciplines and different types of assessment.…
Descriptors: Scoring Rubrics, Norms, Higher Education, Methods
Peer reviewed | PDF on ERIC
Ackermans, Kevin; Rusman, Ellen; Nadolski, Rob; Brand-Gruwel, Saskia; Specht, Marcus – Practical Assessment, Research & Evaluation, 2021
High-quality elaborative peer feedback is a blessing for both learners and teachers. However, learners can experience difficulties in giving high-quality feedback on complex skills using textual analytic rubrics. High-quality elaborative feedback can be strengthened by adding video-modeling examples with embedded self-explanation prompts, turning…
Descriptors: Feedback (Response), Video Technology, Scoring Rubrics, Peer Relationship
Peer reviewed | PDF on ERIC
Jescovitch, Lauren N.; Scott, Emily E.; Cerchiara, Jack A.; Doherty, Jennifer H.; Wenderoth, Mary Pat; Merrill, John E.; Urban-Lurain, Mark; Haudek, Kevin C. – Practical Assessment, Research & Evaluation, 2019
Constructed responses can be used to assess the complexity of student thinking and can be evaluated using rubrics. The two most common rubric types are holistic and analytic. Holistic rubrics may be difficult to use with expert-level reasoning that has additive or overlapping language. In an attempt to unpack complexity in holistic rubrics…
Descriptors: Scoring Rubrics, Measurement, Logical Thinking, Scientific Concepts
Peer reviewed | PDF on ERIC
Rusman, Ellen; Dirkx, Kim – Practical Assessment, Research & Evaluation, 2017
Many schools use analytic rubrics to (formatively) assess complex, generic or transversal (21st century) skills, such as collaborating and presenting. In rubrics, performance indicators on different levels of mastering a skill (e.g., novice, practiced, advanced, talented) are described. However, the dimensions used to describe the different…
Descriptors: Mastery Learning, Scoring Rubrics, Formative Evaluation, Skill Analysis
Peer reviewed | PDF on ERIC
Borowiec, Katrina; Castle, Courtney – Practical Assessment, Research & Evaluation, 2019
Rater cognition or "think-aloud" studies have historically been used to enhance rater accuracy and consistency in writing and language assessments. As assessments are developed for new, complex constructs from the "Next Generation Science Standards (NGSS)," the present study illustrates the utility of extending…
Descriptors: Evaluators, Scoring, Scoring Rubrics, Protocol Analysis
Peer reviewed | PDF on ERIC
Szafran, Robert F. – Practical Assessment, Research & Evaluation, 2017
Institutional assessment of student learning objectives has become a fact of life in American higher education, and the Association of American Colleges and Universities' (AAC&U) VALUE Rubrics have become a widely adopted evaluation and scoring tool for student work. As faculty from a variety of disciplines, some less familiar with the…
Descriptors: Interrater Reliability, Case Studies, Scoring Rubrics, Behavioral Objectives
Peer reviewed | PDF on ERIC
Goldberg, Gail Lynn – Practical Assessment, Research & Evaluation, 2014
This article provides a detailed account of a rubric revision process to address seven common problems to which rubrics are prone: lack of consistency and parallelism; the presence of "orphan" and "widow" words and phrases; redundancy in descriptors; inconsistency in the focus of qualifiers; limited routes to partial credit;…
Descriptors: Scoring Rubrics, Engineering, Case Studies, Design
Peer reviewed | PDF on ERIC
Jeong, Heejeong – Practical Assessment, Research & Evaluation, 2015
Rubrics not only document the scales and criteria of what is assessed, but can also represent the assessment construct of the developer. Rubrics display the key assessment criteria, and the simplicity or complexity of the rubric can illustrate the meaning associated with the score. For this study, five experienced teachers developed a rubric for…
Descriptors: Scoring Rubrics, English (Second Language), Second Language Learning, Evaluation Criteria
Peer reviewed | PDF on ERIC
Amrein-Beardsley, Audrey; Holloway-Libell, Jessica; Cirell, Anna Montana; Hays, Alice; Chapman, Kathryn – Practical Assessment, Research & Evaluation, 2015
There is something incalculable about teacher expertise and whether it can be observed, detected, quantified, and, as per current educational policies, used to hold America's public school teachers accountable for that which they do (or do not do) well. In this commentary, authors (all of whom are former public school…
Descriptors: Accountability, Educational Change, Educational Policy, Expertise
Peer reviewed | PDF on ERIC
Baryla, Ed; Shelley, Gary; Trainor, William – Practical Assessment, Research & Evaluation, 2012
Student learning and program effectiveness are often assessed using rubrics. While much time and effort may go into their creation, it is equally important to assess how effective and efficient the rubrics actually are in terms of measuring competencies over a number of criteria. This study demonstrates the use of common factor analysis to identify…
Descriptors: Program Effectiveness, Factor Analysis, Competence, Scoring Rubrics
Peer reviewed | Direct link
Lovorn, Michael G.; Rezaei, Ali Reza – Practical Assessment, Research & Evaluation, 2011
Recent studies report that the use of rubrics may not improve the reliability of assessment if raters are not well trained on how to design and employ them effectively. The intent of this two-phase study was to test if training pre-service and new in-service teachers in the construction, use, and evaluation of rubrics would improve the reliability…
Descriptors: Scoring Rubrics, Training, Preservice Teacher Education, Inservice Teacher Education
Peer reviewed | Direct link
Reynolds-Keefer, Laura – Practical Assessment, Research & Evaluation, 2010
In Andrade and Du (2005), the authors discuss the ways in which students perceive and use rubrics to support learning in the classroom. In an effort to further examine the impact of rubrics on student learning, this study explored how rubrics affected students' learning, as well as whether using rubrics influenced the likelihood that they would use…
Descriptors: Scoring Rubrics, Preservice Teacher Education, Preservice Teachers, Undergraduate Students
Peer reviewed | Direct link
Bresciani, Marilee J.; Oakleaf, Megan; Kolkhorst, Fred; Nebeker, Camille; Barlow, Jessica; Duncan, Kristin; Hickmott, Jessica – Practical Assessment, Research & Evaluation, 2009
The paper presents a rubric to help evaluate the quality of research projects. The rubric was applied in a competition across a variety of disciplines during a two-day research symposium at one institution in the southwest region of the United States of America. It was collaboratively designed by a faculty committee at the institution and was…
Descriptors: Interrater Reliability, Scoring Rubrics, Research Methodology, Research Projects
Peer reviewed | Direct link
Childs, Ruth A.; Ram, Anita; Xu, Yunmei – Practical Assessment, Research & Evaluation, 2009
Dual scaling, a variation of multidimensional scaling, can reveal the dimensions underlying scores, such as raters' judgments. This study illustrates the use of a dual scaling analysis with semi-structured interviews of raters to investigate the differences among the raters as captured by the dimensions. Thirty applications to a one-year…
Descriptors: Teacher Education Programs, Interviews, Multidimensional Scaling, Teacher Educators