Showing all 14 results
Peer reviewed
Direct link
Cristina Menescardi; Aida Carballo-Fazanes; Núria Ortega-Benavent; Isaac Estevan – Journal of Motor Learning and Development, 2024
The Canadian Agility and Movement Skill Assessment (CAMSA) is a valid and reliable circuit-based test of motor competence that can be used to assess children's skills from a live or recorded performance, which is then coded. We aimed to analyze the intrarater reliability of the CAMSA scores (total, time, and skill score) and the time measured, by comparing…
Descriptors: Interrater Reliability, Evaluators, Scoring, Psychomotor Skills
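Intrarater reliability of this kind is usually quantified with an intraclass correlation between the two coding occasions. A minimal sketch, assuming two hypothetical sets of CAMSA total scores coded twice by the same rater (illustrative values, not data from the study):

```python
import numpy as np

# Hypothetical CAMSA total scores coded twice by the same rater
# (illustrative values only, not data from the study).
occasion_1 = np.array([21, 18, 25, 14, 19, 23, 17, 20])
occasion_2 = np.array([22, 18, 24, 15, 19, 23, 16, 21])

ratings = np.column_stack([occasion_1, occasion_2])  # n children x k occasions
n, k = ratings.shape

grand_mean = ratings.mean()
row_means = ratings.mean(axis=1)   # per-child means
col_means = ratings.mean(axis=0)   # per-occasion means

# Two-way ANOVA mean squares for the consistency ICC, often labelled ICC(3,1)
ss_rows = k * ((row_means - grand_mean) ** 2).sum()
ss_cols = n * ((col_means - grand_mean) ** 2).sum()
ss_total = ((ratings - grand_mean) ** 2).sum()
ss_error = ss_total - ss_rows - ss_cols

ms_rows = ss_rows / (n - 1)
ms_error = ss_error / ((n - 1) * (k - 1))

icc_3_1 = (ms_rows - ms_error) / (ms_rows + (k - 1) * ms_error)
print(f"ICC(3,1) consistency: {icc_3_1:.3f}")
```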
Peer reviewed
Direct link
Minshew, Lana M.; Anderson, Janice L.; Bartlett, Kerry A. – Instructional Science: An International Journal of the Learning Sciences, 2022
Models and modeling are central to both scientific literacy and practices as demonstrated by the Next Generation Science Standards. Through a design-based research framework, we developed a model-based assessment (MBA) and associated rubric as tools for teachers to understand and support students in their conceptualization of the flow of energy…
Descriptors: Models, Scoring Rubrics, Ecology, Middle School Students
Peer reviewed
PDF on ERIC
Moloi, Qetelo; Kanjee, Anil – South African Journal of Education, 2021
The study reported on here contributes to the growing body of knowledge on the use of standard-setting methods for improving the reporting and utility value of assessment results in South Africa, as well as for addressing the conceptual shortcomings of the Curriculum and Assessment Policy Statement (CAPS) reporting framework. Using data from the…
Descriptors: Foreign Countries, Standard Setting (Scoring), Student Evaluation, Elementary School Students
Kim, Dong-In; Julian, Marc; Hermann, Pam – Online Submission, 2022
In test equating, one critical property is group invariance, which indicates that the equating function used to convert performance on each alternate form to the reporting scale should be the same across subgroups. To mitigate the impact of disrupted learning on the item parameters during the COVID-19 pandemic, a…
Descriptors: COVID-19, Pandemics, Test Format, Equated Scores
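Group invariance is often checked by computing the equating function separately for each subgroup and comparing the resulting conversions. A minimal sketch, assuming simple mean-sigma linear equating on simulated subgroup data (not the procedure or data used in the paper):

```python
import numpy as np

def linear_equating(new_form, ref_form):
    """Mean-sigma linear equating: map new-form scores onto the reference scale."""
    slope = ref_form.std(ddof=1) / new_form.std(ddof=1)
    intercept = ref_form.mean() - slope * new_form.mean()
    return lambda x: slope * x + intercept

rng = np.random.default_rng(0)

# Hypothetical raw scores for two subgroups on the new and reference forms.
groups = {
    "group_A": (rng.normal(30, 6, 500), rng.normal(32, 6, 500)),
    "group_B": (rng.normal(28, 7, 500), rng.normal(31, 7, 500)),
}

score_points = np.arange(0, 51)
conversions = {}
for name, (new_scores, ref_scores) in groups.items():
    convert = linear_equating(new_scores, ref_scores)
    conversions[name] = convert(score_points)

# If group invariance holds, subgroup conversions should be close at every score point.
max_gap = np.max(np.abs(conversions["group_A"] - conversions["group_B"]))
print(f"Largest subgroup difference in converted scores: {max_gap:.2f}")
```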
Peer reviewed
PDF on ERIC
Çifci, Musa; Kaplan, Kadir – Journal of Language and Linguistic Studies, 2020
This study aimed to develop a "Caricature Creation Rubric" that can be used to evaluate the products created by 6th-grade students at the end of their caricature creation process, and to carry out its validity and reliability studies. The criteria in the rubric were determined by using the "Caricature Literacy Module" prepared by…
Descriptors: Cartoons, Scoring Rubrics, Evaluation Methods, Student Evaluation
Burns, Elise; Frangiosa, David – Corwin, 2021
Great things happen when students are able to focus on their learning instead of their scores. However, assessment reform, including standards-based grading, remains a hotly debated issue in education. "Going Gradeless" shows that it is possible to teach and assess without the stress of traditional grading practices. Sharing their…
Descriptors: Grading, Student Evaluation, Evaluation Methods, Scoring Rubrics
Peer reviewed
Direct link
L. Hannah; E. E. Jang; M. Shah; V. Gupta – Language Assessment Quarterly, 2023
Machines have a long-demonstrated ability to find statistical relationships between qualities of texts and surface-level linguistic indicators of writing. More recently, advances in artificial intelligence have uncovered the potential of using machines to identify content-related writing trait criteria. This development is significant,…
Descriptors: Validity, Automation, Scoring, Writing Assignments
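The "statistical relationships between qualities of texts and surface-level linguistic indicators" that the authors mention can be illustrated with a toy regression from surface features to human scores. A minimal sketch on made-up essays and scores (not the authors' system, features, or data):

```python
import numpy as np

# Toy corpus of (essay text, human-assigned score); illustrative only.
essays = [
    ("The experiment shows that plants need light to grow well.", 3),
    ("Plants grow. They need light. Light is good.", 2),
    ("Because the seedlings kept in darkness wilted, we concluded that "
     "photosynthesis requires light energy to produce glucose.", 5),
    ("I think plants are nice and green and I like them a lot.", 1),
]

def surface_features(text):
    words = text.split()
    sentences = [s for s in text.split(".") if s.strip()]
    return [
        len(words),                                        # essay length
        len(set(w.lower() for w in words)) / len(words),   # type-token ratio
        len(words) / max(len(sentences), 1),               # mean sentence length
    ]

X = np.array([surface_features(text) for text, _ in essays])
y = np.array([score for _, score in essays], dtype=float)

# Ordinary least squares on the surface features (with an intercept column).
X1 = np.column_stack([np.ones(len(X)), X])
coef, *_ = np.linalg.lstsq(X1, y, rcond=None)

predicted = X1 @ coef
print(np.round(predicted, 2))
```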
O'Meara, Jodi – Corwin, 2011
Collecting, processing, and using assessment data to inform instruction for each student can be overwhelming--especially with so much diversity in the classroom. Help is here, with this hands-on guide that brings together the two leading approaches to teaching students of varying abilities: Response to Instruction and Intervention (RTI) and…
Descriptors: Teaching Guides, Data Analysis, Middle School Teachers, Individualized Instruction
Peer reviewed
Direct link
Coulter, Gail; Shavin, Karen; Gichuru, Margaret – Preventing School Failure, 2009
Children in general education are classified by measures of oral reading fluency (ORF) to determine the level of support needed for reading. In addition, teachers use ORF measures with children who receive special education services to determine whether they are making progress toward their reading goals. In this descriptive study, the authors…
Descriptors: Preservice Teachers, Reading Fluency, Scoring, Classification
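Oral reading fluency screening of this kind typically converts a timed reading into words correct per minute (WCPM) and compares it with a benchmark. A minimal sketch with hypothetical cut points (not the norms or classification rules used in the study):

```python
# Hypothetical benchmarks (illustrative only, not the study's norms).
BENCHMARK_WCPM = 55   # at or above: core instruction
STRATEGIC_WCPM = 35   # between: targeted support; below: intensive support

def words_correct_per_minute(words_read, errors, seconds):
    """Convert a timed oral reading into words correct per minute."""
    return (words_read - errors) / (seconds / 60)

def classify(wcpm):
    if wcpm >= BENCHMARK_WCPM:
        return "benchmark (core instruction)"
    if wcpm >= STRATEGIC_WCPM:
        return "strategic (targeted support)"
    return "intensive (additional support)"

wcpm = words_correct_per_minute(words_read=62, errors=4, seconds=60)
print(f"{wcpm:.0f} WCPM -> {classify(wcpm)}")
```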
Delacruz, Girlie Castro – ProQuest LLC, 2010
To investigate whether games may serve as useful formative assessment environments, this study examined, experimentally, the effects of two aspects of formative assessment on math achievement, game play, and help-seeking behaviors: (a) making assessment criteria explicit through the explanation of scoring rules and (b) incentivizing the use of…
Descriptors: Feedback (Response), Report Cards, Control Groups, Play
Peer reviewed
Direct link
Hung, Pi-Hsia; Lin, Yu-Fen; Hwang, Gwo-Jen – Educational Technology & Society, 2010
Ubiquitous computing and mobile technologies provide a new perspective for designing innovative outdoor learning experiences. The purpose of this study is to propose a formative assessment design for integrating PDAs into ecology observations. Three learning activities were conducted in this study. An action research approach was applied to…
Descriptors: Foreign Countries, Feedback (Response), Action Research, Observation
Yen, Shu Jing; Ochieng, Charles; Michaels, Hillary; Friedman, Greg – Online Submission, 2005
The main purpose of this study was to illustrate a polytomous IRT-based linking procedure that adjusts for rater variations. Test scores from two administrations of a statewide reading assessment were used. An anchor set of Year 1 students' constructed responses was rescored by Year 2 raters. To adjust for year-to-year rater variation in IRT…
Descriptors: Test Items, Measures (Individuals), Grade 8, Item Response Theory
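Linking of this kind places Year 2 parameters on the Year 1 scale using the rescored anchor responses. A common simplification is a mean-sigma transformation of the anchor item difficulties; the sketch below uses hypothetical parameter estimates and is not the study's procedure:

```python
import numpy as np

# Hypothetical anchor-item difficulty estimates from two separate calibrations.
b_year1 = np.array([-0.8, -0.2, 0.1, 0.6, 1.1])   # Year 1 scale
b_year2 = np.array([-0.5, 0.1, 0.4, 0.9, 1.5])    # Year 2 scale (rescored anchors)

# Mean-sigma linking constants: theta_on_year1 = A * theta_year2 + B
A = b_year1.std(ddof=1) / b_year2.std(ddof=1)
B = b_year1.mean() - A * b_year2.mean()

# Transform Year 2 person and item parameters onto the Year 1 scale.
theta_year2 = np.array([-1.2, 0.0, 0.7])
theta_on_year1_scale = A * theta_year2 + B
b_year2_on_year1_scale = A * b_year2 + B

print(f"A = {A:.3f}, B = {B:.3f}")
print(np.round(theta_on_year1_scale, 3))
```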
Yen, Shu Jing; Ochieng, Charles; Michaels, Hillary; Friedman, Greg – Online Submission, 2005
Year-to-year rater variation may result in constructed response (CR) parameter changes, making CR items inappropriate to use in anchor sets for linking or equating. This study demonstrates how rater severity affected the writing and reading scores. Rater adjustments were made to statewide results using an item response theory (IRT) methodology…
Descriptors: Test Items, Writing Tests, Reading Tests, Measures (Individuals)
Wu, Margaret; Donovan, Jenny; Hutton, Penny; Lennon, Melissa – Ministerial Council on Education, Employment, Training and Youth Affairs (NJ1), 2008
In July 2001, the Ministerial Council on Education, Employment, Training and Youth Affairs (MCEETYA) agreed to the development of assessment instruments and key performance measures for reporting on student skills, knowledge and understandings in primary science. It directed the newly established Performance Measurement and Reporting Taskforce…
Descriptors: Foreign Countries, Scientific Literacy, Science Achievement, Comparative Analysis