Showing 1 to 15 of 33 results
Peer reviewed
Huey T. Chen; Liliana Morosanu; Victor H. Chen – Asia Pacific Journal of Education, 2024
This article aims to review and comment on the articles in this Special Issue and to suggest directions for Singaporean evaluators to consider in the future. Based on the review, we found that education evaluators in Singapore are familiar with up-to-date evaluation theories and approaches and apply them successfully. We also found that a substantial…
Descriptors: Foreign Countries, Evaluators, Educational Research, Futures (of Society)
Peer reviewed
Eirik Bjorheim Abrahamsen; Vegard Moen; Jon Tømmerås Selvik – Discover Education, 2024
All students at Norwegian universities and colleges have the right to appeal ordinary grading decisions. When an appeal is made, two new examiners are appointed, at least one of whom should be external. Under the current system, appeals are handled blind, meaning that the examiners handling the complaint should not be aware of the…
Descriptors: Grading, Decision Making, College Students, Evaluators
Peer reviewed
Catherine Davies; Holly Ingram – Research Evaluation, 2025
As part of the shift towards a more equitable research culture, funders are reconsidering traditional approaches to peer review. In doing so, they seek to minimize bias towards certain research ideas and researcher profiles, to ensure greater inclusion of disadvantaged groups, to improve review quality, to reduce burden, and to enable more…
Descriptors: Resource Allocation, Research, Culture, Probability
Peer reviewed
Reeta Neittaanmäki; Iasonas Lamprianou – Language Testing, 2024
This article focuses on rater severity and consistency and their relation to major changes in the rating system in a high-stakes testing context. The study is based on longitudinal data collected from 2009 to 2019 from the second language (L2) Finnish speaking subtest in the National Certificates of Language Proficiency in Finland. We investigated…
Descriptors: Foreign Countries, Interrater Reliability, Evaluators, Item Response Theory
Peer reviewed
Nils Myszkowski; Martin Storme – Journal of Creative Behavior, 2025
In the PISA 2022 creative thinking test, students provide a response to a prompt, which is then coded by human raters as no credit, partial credit, or full credit. Like many large-scale educational testing frameworks, PISA uses the generalized partial credit model (GPCM) as a response model for these ordinal ratings. In this paper, we show that…
Descriptors: Creative Thinking, Creativity Tests, Scores, Prompting
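The GPCM mentioned in the abstract above maps a latent ability and an item's step difficulties to probabilities for ordinal score categories such as no/partial/full credit. A minimal sketch in Python, with hypothetical parameter values (not estimates from the paper):

```python
import math

def gpcm_probs(theta, a, b):
    """Category probabilities under the generalized partial credit model.

    theta: examinee ability
    a: item discrimination
    b: step difficulties b_1..b_m (giving m+1 score categories)
    """
    # Category k's logit is the cumulative sum of a*(theta - b_j) up to step k;
    # category 0 has an empty sum, i.e. logit 0.
    logits = [0.0]
    for bj in b:
        logits.append(logits[-1] + a * (theta - bj))
    denom = sum(math.exp(z) for z in logits)
    return [math.exp(z) / denom for z in logits]

# Three categories (no / partial / full credit), as in the PISA coding scheme.
probs = gpcm_probs(theta=0.5, a=1.2, b=[-0.3, 0.8])
```

The probabilities sum to one across categories; with these illustrative values the partial-credit category is the most likely.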
Peer reviewed
Purin Thepsathit; Kamonwan Tangdhanakanond – International Journal of Music Education, 2024
This research aimed to develop formative assessment rubrics for enhancing students' performance on Thai percussion instruments using the Many-Facet Rasch Measurement Partial Credit model (MFRM-PCM). The sample comprised high-school students playing four types of Thai instruments and raters who were qualified and properly trained. The research…
Descriptors: Formative Evaluation, Scoring Rubrics, Music Education, Musical Instruments
Peer reviewed
Audrey Doyle; Marie Conroy Johnson; Dylan Scanlon; Anna Logan; Aishling Silke; Alan Gorman; Aoife Brennan; Catherine Furlong; Sarah O'Grady – European Journal of Teacher Education, 2024
In the context of COVID-19 restrictions and the pivot to online teaching and learning, teacher educators were forced to consider new spaces for School Placement and the assessment of these new sites of practice. This paper explores the redesign of the assessment of School Placement components from the perspective of ten…
Descriptors: Student Placement, Student Evaluation, Preservice Teacher Education, Preservice Teachers
Peer reviewed
Katerina Guba; Angelika Tsivinskaya – Studies in Higher Education, 2025
This paper explores the regulators' perspective and demonstrates how legitimacy deficits of private universities outweigh performance results in decisions regarding university inspections. We examined the period when the regulator had an urgent claim on Russian universities, particularly during the campaign to 'clean the system of higher…
Descriptors: Private Sector, Private Education, Universities, Private Colleges
Peer reviewed
Laura Schildt; Bart Deygers; Albert Weideman – Language Testing, 2024
In the context of policy-driven language testing for citizenship, a growing body of research examines the political justifications and ethical implications of language requirements and test use. However, virtually no studies have looked at the role that language testers play in the evolution of language requirements. Critical gaps remain in our…
Descriptors: Language Tests, Citizenship, Educational Policy, Assessment Literacy
Peer reviewed
Ilona Rinne – Assessment & Evaluation in Higher Education, 2024
It is widely acknowledged in research that common criteria and aligned standards do not result in consistent assessment of such a complex performance as the final undergraduate thesis. Assessment is determined by examiners' understanding of rubrics and their views on thesis quality. There is still a gap in the research literature about how…
Descriptors: Foreign Countries, Undergraduate Students, Teacher Education Programs, Evaluation Criteria
Peer reviewed
PDF on ERIC
Nazira Tursynbayeva; Umur Öç; Ismail Karakaya – International Journal of Assessment Tools in Education, 2024
This study aimed to measure the effect of rater training, given to improve the peer assessment skills of secondary school students, on rater behaviors using the Many-Facet Rasch Measurement model. The research employed a single-group pretest-posttest design. Since all raters scored all students, the analyses were carried out in a fully crossed (s x…
Descriptors: Evaluators, Training, Behavior, Peer Evaluation
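In the fully crossed design described above, every rater scores every student, so a rater's overall harshness or leniency can be read off directly. A toy sketch with invented scores, using mean deviation from the grand mean as a descriptive stand-in for the Rasch severity parameter (not the MFRM estimation itself):

```python
# Fully crossed design: every rater scores all four students.
scores = {
    "rater_A": [4, 3, 5, 4],
    "rater_B": [3, 2, 4, 3],
    "rater_C": [4, 4, 5, 4],
}

# Grand mean over all ratings in the crossed matrix.
grand_mean = (sum(sum(v) for v in scores.values())
              / sum(len(v) for v in scores.values()))

# Descriptive severity: positive means the rater scores below the grand
# mean on average, i.e. is harsher than the pool as a whole.
severity = {
    rater: grand_mean - sum(v) / len(v)
    for rater, v in scores.items()
}
```

With these invented numbers, rater_B comes out harshest and rater_C most lenient; an MFRM analysis would additionally adjust for student ability and rating-scale structure.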
Peer reviewed
Pablo Bezem; Anne Piezunka; Rebecca Jacobsen – Leadership and Policy in Schools, 2024
In an era of test-based accountability, school inspections can offer a more nuanced understanding of why schools fail. Yet, we have limited knowledge of how inspectors arrive at their decisions on school quality. Analyzing inspectors' decision-making can reveal the underlying views regarding school accountability and open opportunities for school…
Descriptors: Inspection, Decision Making, Accountability, Institutional Evaluation
Peer reviewed
Chao Han; Binghan Zheng; Mingqing Xie; Shirong Chen – Interpreter and Translator Trainer, 2024
Human raters' assessment of interpreting is a complex process. Previous researchers have mainly relied on verbal reports to examine this process. To advance our understanding, we conducted an empirical study, collecting raters' eye-movement and retrospection data in a computerised interpreting assessment in which three groups of raters (n = 35)…
Descriptors: Foreign Countries, College Students, College Graduates, Interrater Reliability
Peer reviewed
Audrey Doyle – Irish Educational Studies, 2025
For the first time in the history of the high stakes Leaving Certificate Established examination in Ireland, teachers graded and ranked their own students due to COVID-19 restrictions. In the wake of the process, a questionnaire and focus group interviews explored how teachers engaged with the Leaving Certificate Calculated Grades 2020 (CG2020)…
Descriptors: Foreign Countries, Exit Examinations, Teacher Role, Evaluators
Peer reviewed
Cristina Menescardi; Aida Carballo-Fazanes; Núria Ortega-Benavent; Isaac Estevan – Journal of Motor Learning and Development, 2024
The Canadian Agility and Movement Skill Assessment (CAMSA) is a valid and reliable circuit-based test of motor competence in which children's skills are assessed in a live or recorded performance and then coded. We aimed to analyze the intrarater reliability of the CAMSA scores (total, time, and skill score) and of the time measured, by comparing…
Descriptors: Interrater Reliability, Evaluators, Scoring, Psychomotor Skills