Showing 1 to 15 of 25 results
Peer reviewed
Patel, Priya; Lee, Seungmin; Myers, Nicholas D.; Lee, Mei-Hua – Journal of Motor Learning and Development, 2021
Missing data are common in experimental studies of motor learning and development, and inadequate handling of missing data can lead to serious problems such as the introduction of bias and reduced statistical power. This study therefore aimed to conduct a systematic review of past (2007) and present (2017) practices used for reporting and…
Descriptors: Motor Development, Research Reports, Periodicals, Research Methodology
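As a hedged illustration of the handling practices such a review would catalogue, the sketch below contrasts complete-case analysis with simple mean imputation on an invented repeated-measures dataset; the participant IDs, scores, and session structure are hypothetical and are not drawn from the study.

```python
# Hedged sketch: two common ways of handling missing scores in a small
# repeated-measures dataset (values and variable names are hypothetical).

# Reaction-time scores (ms) for five participants across two sessions;
# None marks a missing observation.
scores = {
    "p1": [412, 398],
    "p2": [455, None],   # dropped out of session 2
    "p3": [390, 377],
    "p4": [None, 430],   # missed session 1
    "p5": [421, 405],
}

def complete_case(data):
    """Listwise deletion: keep only participants with no missing values."""
    return {pid: vals for pid, vals in data.items() if None not in vals}

def mean_impute(data):
    """Replace each missing value with the mean of the observed values in
    the same session (a simple, and often biased, single imputation)."""
    n_sessions = len(next(iter(data.values())))
    session_means = []
    for s in range(n_sessions):
        observed = [vals[s] for vals in data.values() if vals[s] is not None]
        session_means.append(sum(observed) / len(observed))
    return {
        pid: [v if v is not None else session_means[s]
              for s, v in enumerate(vals)]
        for pid, vals in data.items()
    }

print(complete_case(scores))  # only p1, p3, p5 remain -> reduced power
print(mean_impute(scores))    # all five kept, but variability is understated
```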
Jimenez, Albert M.; Zepeda, Sally J. – Sage Research Methods Cases, 2017
The work presented in this case study results from a study conducted in 2012-2014 examining a newly created teacher evaluation system to determine the inter-rater reliability of the classroom observation instrument. The teacher evaluation system was the result of a partnership between the school district and the university in the same city…
Descriptors: Case Studies, Interrater Reliability, Teacher Evaluation, Observation
Peer reviewed
Lohmann, Sam; Diller, Karen R.; Phelps, Sue F. – portal: Libraries and the Academy, 2019
This case study discusses an assessment project in which a rubric was used to evaluate information literacy (IL) skills as reflected in undergraduate students' research papers. Subsequent analysis sought relationships between the students' IL skills and their contact with the library through various channels. The project proved far longer and more…
Descriptors: Performance Based Assessment, Information Literacy, Undergraduate Students, Research Papers (Students)
Peer reviewed
Deygers, Bart; Van Gorp, Koen – Language Testing, 2015
Considering scoring validity as encompassing both reliable rating scale use and valid descriptor interpretation, this study reports on the validation of a CEFR-based scale that was co-constructed and used by novice raters. The research questions this paper wishes to answer are (a) whether it is possible to construct a CEFR-based rating scale with…
Descriptors: Rating Scales, Scoring, Validity, Interrater Reliability
Peer reviewed
Towstopiat, Olga – Contemporary Educational Psychology, 1984
The present article reviews the procedures that have been developed for measuring the reliability of human observers' judgments when making direct observations of behavior. These include the percentage of agreement, Cohen's Kappa, phi, and univariate and multivariate agreement measures that are based on quasi-equiprobability and quasi-independence…
Descriptors: Interrater Reliability, Mathematical Models, Multivariate Analysis, Observation
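To make the first two measures named above concrete, here is a minimal sketch computing percentage agreement and Cohen's kappa for two observers coding the same sequence of intervals; the observers and codes are invented for illustration.

```python
from collections import Counter

# Hedged sketch: percentage agreement and Cohen's kappa for two observers
# coding the same sequence of intervals (codes are hypothetical).
rater_a = ["on", "on", "off", "on", "off", "off", "on", "on", "off", "on"]
rater_b = ["on", "off", "off", "on", "off", "on", "on", "on", "off", "on"]

n = len(rater_a)
p_observed = sum(a == b for a, b in zip(rater_a, rater_b)) / n

# Chance agreement from each rater's marginal code frequencies.
freq_a, freq_b = Counter(rater_a), Counter(rater_b)
categories = set(rater_a) | set(rater_b)
p_chance = sum((freq_a[c] / n) * (freq_b[c] / n) for c in categories)

# Kappa corrects the raw agreement for agreement expected by chance.
kappa = (p_observed - p_chance) / (1 - p_chance)
print(f"percent agreement = {p_observed:.2f}, kappa = {kappa:.2f}")
```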
Guess, Doug; Roberts, Sally; Behrens, Gene Ann; Rues, Jane – American Journal on Mental Retardation, 1998
Responds to a critique by Mudford, Hogg, and Roberts (1997) that raised concerns about the observation code used in a longitudinal research project to assess emerging behavior state patterns in young children with disabilities. Concerns about the thoroughness of the reliability data collected by Mudford are discussed. (Author/CR)
Descriptors: Behavior Patterns, Data Collection, Data Interpretation, Disabilities
Santmire, Toni E. – 1984
The purpose of this paper is to discuss ways in which developmental psychology suffers from the lack of an appropriate technology of measurement and statistical analysis. The paper begins by noting that developmental psychology is the study of change; that individuals develop through a succession of "stages" which are separated by…
Descriptors: Data Analysis, Data Collection, Developmental Psychology, Developmental Stages
Peer reviewed
Quigg, Mark; Lado, Fred A. – Journal of Continuing Education in the Health Professions, 2009
Introduction: The Accreditation Council for Continuing Medical Education (ACCME) provides guidelines for continuing medical education (CME) materials to mitigate problems in the independence or validity of content in certified activities; however, the process of peer review of materials appears largely unstudied and the reproducibility of…
Descriptors: Medical Education, Physicians, Conflict of Interest, Interrater Reliability
Webber, Larry; And Others – 1986
Generalizability theory, which subsumes classical measurement theory as a special case, provides a general model for estimating the reliability of observational rating data by estimating the variance components of the measurement design. Research data from the "Heart Smart" health intervention program were analyzed as a heuristic tool.…
Descriptors: Behavior Rating Scales, Cardiovascular System, Error of Measurement, Generalizability Theory
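The sketch below shows, under simplifying assumptions, how variance components can be estimated for a basic one-facet persons-by-raters design from expected mean squares; the ratings are hypothetical and the design is far simpler than the "Heart Smart" measurement design analyzed in the report.

```python
import numpy as np

# Hedged sketch: variance components for a one-facet persons x raters
# G-study, estimated from expected mean squares (data are hypothetical).
ratings = np.array([   # rows = persons, columns = raters
    [4, 5, 4],
    [2, 3, 2],
    [5, 5, 4],
    [3, 3, 3],
    [4, 4, 5],
], dtype=float)

n_p, n_r = ratings.shape
grand = ratings.mean()
person_means = ratings.mean(axis=1)
rater_means = ratings.mean(axis=0)

# Sums of squares for persons, raters, and the residual (p x r interaction).
ss_p = n_r * ((person_means - grand) ** 2).sum()
ss_r = n_p * ((rater_means - grand) ** 2).sum()
ss_res = ((ratings - grand) ** 2).sum() - ss_p - ss_r

ms_p = ss_p / (n_p - 1)
ms_r = ss_r / (n_r - 1)
ms_res = ss_res / ((n_p - 1) * (n_r - 1))

# Solve the expected-mean-square equations for the variance components.
var_res = ms_res
var_p = (ms_p - ms_res) / n_r          # person (true-score) variance
var_r = (ms_r - ms_res) / n_p          # rater (leniency) variance

# Generalizability coefficient for the mean of n_r raters (relative decisions).
g_coef = var_p / (var_p + var_res / n_r)
print(f"var_p={var_p:.3f}, var_r={var_r:.3f}, "
      f"var_res={var_res:.3f}, g={g_coef:.3f}")
```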
Peer reviewed
Cordes, Anne K.; Ingham, Roger J. – Journal of Speech and Hearing Research, 1994
This paper reviews the prominent concepts of the stuttering event and concerns about the reliability of stuttering event measurements, specifically interjudge agreement. Recent attempts to resolve the stuttering measurement problem are reviewed, and the implications of developing an improved measurement system are discussed. (Author/JDD)
Descriptors: Data Collection, Interrater Reliability, Measurement Techniques, Observation
Peer reviewed
Orwin, Robert G.; Cordray, David S. – Psychological Bulletin, 1985
Identifies three sources of reporting deficiency for meta-analytic results: quality (adequacy) of publicizing, quality of macrolevel reporting, and quality of microlevel reporting. Reanalysis of 25 reports from the Smith, Glass and Miller (1980) psychotherapy meta-analysis established two sources of misinformation, interrater reliabilities and…
Descriptors: Confidence Testing, Interrater Reliability, Meta Analysis, Psychotherapy
Peer reviewed
Kreiman, Jody; And Others – Journal of Speech and Hearing Research, 1992
Sixteen listeners (10 expert, 6 naive) judged the dissimilarity of pairs of voices drawn from pathological and normal populations. Only parameters that showed substantial variability were perceptually salient across listeners. Results suggest that traditional means of assessing listener reliability in voice perception tasks may not be appropriate.…
Descriptors: Evaluation Methods, Individual Differences, Interrater Reliability, Perception
Peer reviewed
Webb, Norman L. – Applied Measurement in Education, 2007
A process for judging the alignment between curriculum standards and assessments developed by the author is presented. This process produces information on the relationship of standards and assessments on four alignment criteria: Categorical Concurrence, Depth of Knowledge Consistency, Range of Knowledge Correspondence, and Balance of…
Descriptors: Educational Assessment, Academic Standards, Item Analysis, Interrater Reliability
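As a rough sketch only, the code below checks two of the named criteria (Categorical Concurrence and Depth of Knowledge Consistency) against a toy item-to-standard coding; the items, standards, and DOK codes are invented, and the 6-item and 50% cutoffs are the commonly cited defaults rather than values taken from this article.

```python
# Hedged sketch: checking two Webb-style alignment criteria for a toy
# item-to-standard coding (items, standards, and DOK codes are hypothetical;
# the 6-item and 50% thresholds are commonly reported defaults).
items = [
    # (item id, standard it was coded to, item DOK level)
    (1, "S1", 2), (2, "S1", 1), (3, "S1", 2), (4, "S1", 3),
    (5, "S1", 2), (6, "S1", 2), (7, "S2", 1), (8, "S2", 1),
    (9, "S2", 2), (10, "S2", 1),
]
objective_dok = {"S1": 2, "S2": 2}   # DOK level of each standard's objectives

for std, target_dok in objective_dok.items():
    coded = [dok for _, s, dok in items if s == std]
    concurrence_met = len(coded) >= 6          # enough items hit the standard?
    share_at_or_above = sum(d >= target_dok for d in coded) / len(coded)
    dok_consistency_met = share_at_or_above >= 0.5
    print(std, "categorical concurrence:", concurrence_met,
          "| DOK consistency:", dok_consistency_met,
          f"({share_at_or_above:.0%} at/above level {target_dok})")
```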
Peer reviewed
Flack, Virginia F.; And Others – Psychometrika, 1988
A method is presented for determining sample size that will achieve a pre-specified bound on confidence interval width for the interrater agreement measure "kappa." The same results can be used when a pre-specified power is desired for testing hypotheses about the value of kappa. (Author/SLD)
Descriptors: Evaluation Methods, Interrater Reliability, Research Methodology, Research Problems
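A minimal sketch of the general idea, assuming a simplified large-sample standard error for kappa rather than the exact variance expression used in the article; the planning values below are hypothetical.

```python
from math import ceil

# Hedged sketch: sample size so that an approximate 95% confidence interval
# for kappa has a pre-specified half-width, using the simplified
# SE(kappa) ~ sqrt(p_o * (1 - p_o) / (n * (1 - p_e)**2)) approximation.
def n_for_kappa_halfwidth(p_observed, p_chance, half_width, z=1.96):
    """Smallest n with z * SE(kappa) <= half_width."""
    n = (z ** 2) * p_observed * (1 - p_observed) / (
        (1 - p_chance) ** 2 * half_width ** 2)
    return ceil(n)

# Planning values: anticipated observed agreement 0.85, chance agreement 0.50,
# and a target interval of kappa_hat +/- 0.10.
print(n_for_kappa_halfwidth(p_observed=0.85, p_chance=0.50, half_width=0.10))
```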
Peer reviewed
Kolevzon, Michael S.; And Others – Journal of Marital and Family Therapy, 1988
Employed triangulation strategy for assessing family interaction, involving family members, therapist, and coders independently viewing videotapes. Found weak agreement between paired assessments within family triad, and within therapist-coder dyad. Findings suggest that methodological and/or scaling strategies designed to maximize agreement may…
Descriptors: Counselor Attitudes, Evaluation Criteria, Evaluation Methods, Evaluation Problems