Publication Date
In 2025: 0
Since 2024: 0
Since 2021 (last 5 years): 1
Since 2016 (last 10 years): 2
Since 2006 (last 20 years): 4
Publication Type
Reports - Descriptive: 8
Journal Articles: 6
Education Level
Adult Education: 1
Higher Education: 1
Secondary Education: 1
Location
Finland: 1
Florida: 1
Tennessee: 1
United Kingdom (England): 1
Sumner, Josh – Research-publishing.net, 2021
Comparative Judgement (CJ) has emerged as a technique that typically makes use of holistic judgement to assess difficult-to-specify constructs such as production (speaking and writing) in Modern Foreign Languages (MFL). In traditional approaches, markers assess candidates' work one-by-one in an absolute manner, assigning scores to different…
Descriptors: Holistic Approach, Student Evaluation, Comparative Analysis, Decision Making
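The snippet describes CJ's reliance on repeated pairwise, holistic judgements rather than one-by-one absolute marking. As a purely illustrative sketch of how such paired judgements are commonly scaled (a standard Bradley-Terry fit; not the procedure reported in this paper, and all data below are invented):

```python
import numpy as np

# Illustrative data: wins[i, j] = number of times script i was judged better than script j.
wins = np.array([
    [0, 3, 4],
    [1, 0, 3],
    [0, 1, 0],
], dtype=float)

def bradley_terry(wins, iters=200):
    """Estimate Bradley-Terry strengths with the classic MM update."""
    n = wins.shape[0]
    p = np.ones(n)
    total = wins + wins.T          # comparisons made between each pair
    w = wins.sum(axis=1)           # total wins per script
    for _ in range(iters):
        denom = (total / (p[:, None] + p[None, :])).sum(axis=1)
        p = w / denom
        p /= p.sum()               # fix the scale
    return p

strengths = bradley_terry(wins)
print(np.round(strengths, 3))      # relative quality estimates for the three scripts
```

The fitted strengths place the judged scripts on a single relative scale, which is the kind of output a CJ session feeds into subsequent reliability checks.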
Uto, Masaki; Ueno, Maomi – IEEE Transactions on Learning Technologies, 2016
As an assessment method based on a constructivist approach, peer assessment has become popular in recent years. However, a problem remains in peer assessment: reliability depends on rater characteristics. For this reason, some item response models that incorporate rater parameters have been proposed. Those models are expected to improve…
Descriptors: Item Response Theory, Peer Evaluation, Bayesian Statistics, Simulation
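For orientation, rater-effect IRT models typically add a rater severity term to the usual ability-minus-difficulty logit. A minimal sketch under that assumption (an illustrative many-facet Rasch-style probability, not the specific models proposed or compared in this article):

```python
import numpy as np

def rating_probability(theta, item_difficulty, rater_severity):
    """P(positive rating) under a simple many-facet Rasch-style model:
    logit P = examinee ability - item difficulty - rater severity."""
    logit = theta - item_difficulty - rater_severity
    return 1.0 / (1.0 + np.exp(-logit))

# A lenient rater (severity -0.5) vs. a harsh rater (severity +1.0) scoring the same work.
print(rating_probability(theta=0.8, item_difficulty=0.2, rater_severity=-0.5))  # ~0.75
print(rating_probability(theta=0.8, item_difficulty=0.2, rater_severity=1.0))   # ~0.40
```

Separating the severity term from ability is what lets such models adjust peer-assessment scores for who happened to do the rating.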
Hill, Heather C.; Charalambous, Charalambos Y.; Kraft, Matthew A. – Educational Researcher, 2012
In recent years, interest has grown in using classroom observation as a means to several ends, including teacher development, teacher evaluation, and impact evaluation of classroom-based interventions. Although education practitioners and researchers have developed numerous observational instruments for these purposes, many developers fail to…
Descriptors: Generalizability Theory, Observation, Classroom Observation Techniques, Evaluation
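As background on the generalizability-theory framing, a one-facet persons-by-raters G-study can be computed from a two-way ANOVA decomposition. The sketch below is illustrative only, with made-up observation scores rather than the authors' data or analysis:

```python
import numpy as np

# Illustrative scores: rows = lessons (persons), columns = observers (raters).
scores = np.array([
    [3.0, 2.5, 3.5],
    [4.0, 3.5, 4.5],
    [2.0, 2.5, 2.0],
    [4.5, 4.0, 5.0],
])
n_p, n_r = scores.shape
grand = scores.mean()

# Two-way ANOVA sums of squares for a persons x raters design (one observation per cell).
ss_p = n_r * ((scores.mean(axis=1) - grand) ** 2).sum()
ss_r = n_p * ((scores.mean(axis=0) - grand) ** 2).sum()
ss_total = ((scores - grand) ** 2).sum()
ss_pr = ss_total - ss_p - ss_r

ms_p = ss_p / (n_p - 1)
ms_pr = ss_pr / ((n_p - 1) * (n_r - 1))

# Variance components and the relative generalizability coefficient for k raters per lesson.
var_p = max((ms_p - ms_pr) / n_r, 0.0)
var_pr = ms_pr
k = 2
g_coef = var_p / (var_p + var_pr / k)
print(round(g_coef, 3))
```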
Sawchuk, Stephen – Education Digest: Essential Readings Condensed for Quick Review, 2010
Most experts in the testing community have presumed that the $350 million promised by the U.S. Department of Education to support common assessments would promote those that made greater use of open-ended items capable of measuring higher-order critical-thinking skills. But as measurement experts consider the multitude of possibilities for an…
Descriptors: Educational Quality, Test Items, Comparative Analysis, Multiple Choice Tests
Michaelides, Michalis P.; Haertel, Edward H. – Center for Research on Evaluation Standards and Student Testing CRESST, 2004
There is variability in the estimation of an equating transformation because common-item parameters are obtained from responses of samples of examinees. The most commonly used standard error of equating quantifies this source of sampling error, which decreases as the sample size of examinees used to derive the transformation increases. In a…
Descriptors: Test Items, Testing, Error Patterns, Interrater Reliability
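The standard error of equating discussed here reflects the sampling of examinees; one hedged way to picture it is a bootstrap over examinees around a simple mean-equating shift through the common items (an illustration of the sampling-error idea, not the estimator studied in this report):

```python
import numpy as np

rng = np.random.default_rng(0)

# Illustrative data: common-item (anchor) scores for the samples taking Form X and Form Y.
anchor_x = rng.normal(10.0, 2.0, size=300)
anchor_y = rng.normal(10.8, 2.0, size=300)

def mean_equating_shift(ax, ay):
    """Mean equating through common items: shift Form X scores by the anchor-mean gap."""
    return ay.mean() - ax.mean()

# Bootstrap examinees to approximate the standard error of the equating shift.
shifts = []
for _ in range(2000):
    bx = rng.choice(anchor_x, size=anchor_x.size, replace=True)
    by = rng.choice(anchor_y, size=anchor_y.size, replace=True)
    shifts.append(mean_equating_shift(bx, by))

print("equating shift:", round(mean_equating_shift(anchor_x, anchor_y), 3))
print("bootstrap SE:  ", round(np.std(shifts, ddof=1), 3))
```

Larger examinee samples shrink this standard error, which is the dependence the abstract highlights.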
Vartiainen, Pirkko – Higher Education in Europe, 2005
This article analyses institutional evaluations of higher education in England and Finland through the concept of legitimacy. The focus of the article is on the institutional tendencies of legitimacy. The author's hypothesis is that evaluation is legitimate when the evaluation process is of good quality and accepted both morally and in practice…
Descriptors: Institutional Evaluation, Higher Education, Foreign Countries, Comparative Analysis

Magin, D. J. – Assessment & Evaluation in Higher Education, 2001
Presents a novel application of analysis of variance (ANOVA) techniques to compare the reliability of multiple peer ratings with single teacher ratings. Uses rating data from two different courses, both involving multiple peer and individual teacher ratings that were used to assess student contributions to group process work. Discusses…
Descriptors: Analysis of Variance, Comparative Analysis, Cooperative Learning, Evaluation Methods
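As a concrete picture of the ANOVA route to rating reliability, intraclass correlations for a single rating and for the mean of k peer ratings can be derived from between- and within-student mean squares. The data and design below are invented for illustration and are not those of this study:

```python
import numpy as np

# Illustrative data: rows = students, columns = peer raters in the same group.
peer = np.array([
    [7.0, 6.5, 7.5, 7.0],
    [5.0, 5.5, 4.5, 5.0],
    [8.0, 7.5, 8.5, 8.0],
    [6.0, 6.5, 5.5, 6.5],
    [4.0, 4.5, 3.5, 4.0],
])
n, k = peer.shape
grand = peer.mean()

# One-way ANOVA: between-student and within-student mean squares.
ms_between = k * ((peer.mean(axis=1) - grand) ** 2).sum() / (n - 1)
ms_within = ((peer - peer.mean(axis=1, keepdims=True)) ** 2).sum() / (n * (k - 1))

# ICC(1,1): reliability of one rating; ICC(1,k): reliability of the mean of k peer ratings.
icc_single = (ms_between - ms_within) / (ms_between + (k - 1) * ms_within)
icc_mean = (ms_between - ms_within) / ms_between
print(round(icc_single, 3), round(icc_mean, 3))
```

Comparing the averaged-peer coefficient with the reliability of a single teacher rating is the kind of contrast the abstract describes.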
Christie, Christina A.; Azzam, Tarek – New Directions for Evaluation, 2005
The purpose of this issue of "New Directions for Evaluation" is to examine, comparatively, the practical application of theorists' approaches to evaluation by examining four evaluations of the same case. The thought is that when asked to evaluate the same program (holding the case constant), the practical distinctions between theorists' approaches…
Descriptors: Theory Practice Relationship, Interrater Reliability, Meta Analysis, Case Studies