Showing 1 to 15 of 32 results
Peer reviewed
Deschênes, Marie-France; Dionne, Éric; Dorion, Michelle; Grondin, Julie – Practical Assessment, Research & Evaluation, 2023
Scoring concordance tests with the aggregate scoring method requires that the weights of test items be derived from the performance of a group of experts who take the test under the same conditions as the examinees. However, the average score of the experts constituting the reference panel remains a critical issue in the use of these tests.…
Descriptors: Scoring, Tests, Evaluation Methods, Test Items
Peer reviewed
Borowiec, Katrina; Castle, Courtney – Practical Assessment, Research & Evaluation, 2019
Rater cognition or "think-aloud" studies have historically been used to enhance rater accuracy and consistency in writing and language assessments. As assessments are developed for new, complex constructs from the "Next Generation Science Standards (NGSS)," the present study illustrates the utility of extending…
Descriptors: Evaluators, Scoring, Scoring Rubrics, Protocol Analysis
Peer reviewed
Shahid A. Choudhry; Timothy J. Muckle; Christopher J. Gill; Rajat Chadha; Magnus Urosev; Matt Ferris; John C. Preston – Practical Assessment, Research & Evaluation, 2024
The National Board of Certification and Recertification for Nurse Anesthetists (NBCRNA) conducted a one-year research study comparing performance on the traditional continued professional certification assessment, administered at a test center or online with remote proctoring, to a longitudinal assessment that required answering quarterly…
Descriptors: Nurses, Certification, Licensing Examinations (Professions), Computer Assisted Testing
Peer reviewed
Wyse, Adam E. – Practical Assessment, Research & Evaluation, 2018
One common modification to the Angoff standard-setting method is to have panelists round their ratings to the nearest 0.05 or 0.10 instead of 0.01. Several reasons have been offered as to why it may make sense to have panelists round their ratings to the nearest 0.05 or 0.10. In this article, we examine one reason that has been suggested, which is…
Descriptors: Interrater Reliability, Evaluation Criteria, Scoring Formulas, Achievement Rating
Peer reviewed
Russell, Michael; Moncaleano, Sebastian – Practical Assessment, Research & Evaluation, 2020
Although both content alignment and standard-setting procedures rely on content-expert panel judgements, only the latter employs discussion among panel members. This study employed a modified form of the Webb methodology to examine content alignment for twelve tests administered as part of the Massachusetts Comprehensive Assessment System (MCAS).…
Descriptors: Test Content, Test Items, Discussion, Test Validity
Peer reviewed
Mayo Beltrán, Alba Mª; Fernández Sánchez, María Jesús; Montanero Fernández, Manuel; Martín Parejo, David – Practical Assessment, Research & Evaluation, 2022
This study compares the effects of two resources for supporting peer co-evaluation of expository texts in primary education: a paper rubric (CR) and the comment bubbles of a word processor (CCB). A total of 57 students wrote a text which, after a peer co-evaluation process, was rewritten. To analyze the improvements in the texts, we used a rubric…
Descriptors: Scoring Rubrics, Evaluation Methods, Word Processing, Computer Software
Peer reviewed
Schoepp, Kevin; Danaher, Maurice; Kranov, Ashley Ater – Practical Assessment, Research & Evaluation, 2018
Within higher education, rubric use is expanding. Whereas some years ago the topic of rubrics may have been of interest only to faculty in colleges of education, in recent years the focus on teaching and learning and the emphasis from accrediting bodies have elevated the importance of rubrics across disciplines and different types of assessment.…
Descriptors: Scoring Rubrics, Norms, Higher Education, Methods
Peer reviewed
Ackermans, Kevin; Rusman, Ellen; Nadolski, Rob; Brand-Gruwel, Saskia; Specht, Marcus – Practical Assessment, Research & Evaluation, 2021
High-quality elaborative peer feedback is a blessing for both learners and teachers. However, learners can experience difficulties in giving high-quality feedback on complex skills using textual analytic rubrics. High-quality elaborative feedback can be strengthened by adding video-modeling examples with embedded self-explanation prompts, turning…
Descriptors: Feedback (Response), Video Technology, Scoring Rubrics, Peer Relationship
Peer reviewed
Jescovitch, Lauren N.; Scott, Emily E.; Cerchiara, Jack A.; Doherty, Jennifer H.; Wenderoth, Mary Pat; Merrill, John E.; Urban-Lurain, Mark; Haudek, Kevin C. – Practical Assessment, Research & Evaluation, 2019
Constructed responses can be used to assess the complexity of student thinking and can be evaluated using rubrics. The two most typical rubric types used are holistic and analytic. Holistic rubrics may be difficult to use with expert-level reasoning that has additive or overlapping language. In an attempt to unpack complexity in holistic rubrics…
Descriptors: Scoring Rubrics, Measurement, Logical Thinking, Scientific Concepts
Peer reviewed
Foley, Brett P. – Practical Assessment, Research & Evaluation, 2016
There is always a chance that examinees will answer multiple choice (MC) items correctly by guessing. Design choices in some modern exams have created situations where guessing at random through the full exam--rather than only for a subset of items where the examinee does not know the answer--can be an effective strategy to pass the exam. This…
Descriptors: Guessing (Tests), Multiple Choice Tests, Case Studies, Test Construction
Peer reviewed
Lynch, Sarah – Practical Assessment, Research & Evaluation, 2022
In today's digital age, tests are increasingly being delivered on computers. Many of these computer-based tests (CBTs) have been adapted from paper-based tests (PBTs). However, this change in mode of test administration has the potential to introduce construct-irrelevant variance, affecting the validity of score interpretations. Because of this,…
Descriptors: Computer Assisted Testing, Tests, Scores, Scoring
Peer reviewed
Wilhelm, Anne Garrison; Gillespie Rouse, Amy; Jones, Francesca – Practical Assessment, Research & Evaluation, 2018
Although inter-rater reliability is an important aspect of using observational instruments, it has received little theoretical attention. In this article, we offer some guidance for practitioners and consumers of classroom observations so that they can make decisions about inter-rater reliability, both for study design and in the reporting of data…
Descriptors: Interrater Reliability, Measurement, Observation, Educational Research
Peer reviewed
Rusman, Ellen; Dirkx, Kim – Practical Assessment, Research & Evaluation, 2017
Many schools use analytic rubrics to (formatively) assess complex, generic or transversal (21st century) skills, such as collaborating and presenting. In rubrics, performance indicators on different levels of mastering a skill (e.g., novice, practiced, advanced, talented) are described. However, the dimensions used to describe the different…
Descriptors: Mastery Learning, Scoring Rubrics, Formative Evaluation, Skill Analysis
Peer reviewed
Eckerly, Carol; Smith, Russell; Sowles, John – Practical Assessment, Research & Evaluation, 2018
The Discrete Option Multiple Choice (DOMC) item format was introduced by Foster and Miller (2009) with the intent of improving the security of test content. However, because the format changes the amount and order of the content presented, the test-taking experience varies by test taker, thereby introducing potential fairness issues. In this paper we…
Descriptors: Culture Fair Tests, Multiple Choice Tests, Testing, Test Items
Peer reviewed
Szafran, Robert F. – Practical Assessment, Research & Evaluation, 2017
Institutional assessment of student learning objectives has become a fact of life in American higher education, and the Association of American Colleges and Universities' (AAC&U) VALUE Rubrics have become a widely adopted evaluation and scoring tool for student work. As faculty from a variety of disciplines, some less familiar with the…
Descriptors: Interrater Reliability, Case Studies, Scoring Rubrics, Behavioral Objectives