Nakayama, Minoru; Sciarrone, Filippo; Temperini, Marco; Uto, Masaki – International Journal of Distance Education Technologies, 2022
Massive open online courses (MOOCs) are effective and flexible resources for educating, training, and empowering populations. Peer assessment (PA) is a powerful pedagogical strategy for supporting educational activities and fostering learners' success, even when a huge number of learners is involved. Item response theory (IRT) can model students'…
Descriptors: Item Response Theory, Peer Evaluation, MOOCs, Models
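The abstract above refers to IRT modeling of student responses. As a minimal sketch of what such a model computes, the two-parameter logistic (2PL) form below gives the probability of a correct response from a learner's ability and an item's discrimination and difficulty; the function and parameter names are illustrative, and the paper may use a different IRT variant.

```python
import math

def irt_2pl(theta, a, b):
    """Two-parameter logistic IRT model: probability that a learner
    with ability theta answers correctly an item with discrimination a
    and difficulty b (all on the same logit scale)."""
    return 1.0 / (1.0 + math.exp(-a * (theta - b)))

# When ability equals item difficulty, the probability is exactly 0.5.
p = irt_2pl(theta=0.5, a=1.0, b=0.5)
```

Setting `a = 1` recovers the simpler Rasch model, in which only ability and difficulty matter.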
Uto, Masaki; Nguyen, Duc-Thien; Ueno, Maomi – IEEE Transactions on Learning Technologies, 2020
With the widespread adoption of large-scale e-learning environments such as MOOCs, peer assessment has become a popular way to measure learner ability. When the number of learners increases, peer assessment is often conducted by dividing learners into multiple groups to reduce each learner's assessment workload. However, in such cases, the peer assessment…
Descriptors: Item Response Theory, Electronic Learning, Peer Evaluation, Accuracy
O'Keeffe, Muireann; Gormley, Clare; Ferguson, Pip Bruce – Practitioner Research in Higher Education, 2018
Assessment is a crucial aspect of academic work. Indeed, there is substantial literature on assessment design and on how to ensure the integrity of students' learning. Much work goes into enhancing assessment practices to ensure the validity of assessment and to safeguard the reliability of judgments about students' knowledge. Yet relatively little research has…
Descriptors: Case Studies, Grading, Feedback (Response), Student Evaluation
Grainger, Peter R.; Christie, Michael; Carey, Michael – Journal of University Teaching and Learning Practice, 2019
Written communication skills are among the most frequently assessed criteria in higher education contexts, especially in humanities disciplines, including teacher education. There is a need to research and develop an assessment grading tool (i.e., a criteria sheet or rubric) that would assist students in pre-service teacher education programs to better…
Descriptors: Writing Skills, Communication Skills, Models, Preservice Teachers
Wind, Stefanie A.; Engelhard, George, Jr.; Wesolowski, Brian – Educational Assessment, 2016
When good model-data fit is observed, the Many-Facet Rasch (MFR) model acts as a linking and equating model that can be used to estimate student achievement, item difficulties, and rater severity on the same linear continuum. Given sufficient connectivity among the facets, the MFR model provides estimates of student achievement that are equated to…
Descriptors: Evaluators, Interrater Reliability, Academic Achievement, Music Education
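The abstract above describes the Many-Facet Rasch (MFR) model placing student achievement, item difficulty, and rater severity on one linear continuum. A minimal sketch of the dichotomous form is below; the names are illustrative, and the study may use a polytomous (rating-scale) variant.

```python
import math

def mfr_probability(theta, delta, lam):
    """Many-Facet Rasch model (dichotomous form): probability of a
    positive rating given student ability theta, item difficulty delta,
    and rater severity lam, all expressed on the same logit scale."""
    return 1.0 / (1.0 + math.exp(-(theta - delta - lam)))

# A more severe rater (larger lam) lowers the probability of a
# positive rating for the same student and item.
lenient = mfr_probability(theta=1.0, delta=0.0, lam=-0.5)
severe = mfr_probability(theta=1.0, delta=0.0, lam=0.5)
```

Because the three facets share one scale, estimates of student achievement can be compared across raters once rater severity is accounted for, which is the linking-and-equating use described in the abstract.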