Showing 1 to 15 of 25 results
Peer reviewed
Stefan K. Schauber; Anne O. Olsen; Erik L. Werner; Morten Magelssen – Advances in Health Sciences Education, 2024
Introduction: Research in various areas indicates that expert judgment can be highly inconsistent. However, expert judgment is indispensable in many contexts. In medical education, experts often function as examiners in rater-based assessments. Here, disagreement between examiners can have far-reaching consequences. The literature suggests that…
Descriptors: Medical Students, Performance Based Assessment, Expertise, Interrater Reliability
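Several records in this listing, including the one above, flag interrater reliability as a descriptor. As a minimal, hedged illustration of one common chance-corrected agreement index (not an analysis from the study above), the sketch below computes Cohen's kappa for two examiners rating the same candidates; the examiner labels and ratings are hypothetical.

```python
# Illustrative sketch: Cohen's kappa as a chance-corrected index of
# agreement between two examiners rating the same candidates.
from collections import Counter

def cohens_kappa(rater_a, rater_b):
    """Chance-corrected agreement between two raters' categorical ratings."""
    assert len(rater_a) == len(rater_b)
    n = len(rater_a)
    # Observed proportion of exact agreements
    observed = sum(a == b for a, b in zip(rater_a, rater_b)) / n
    # Agreement expected by chance from each rater's marginal frequencies
    freq_a, freq_b = Counter(rater_a), Counter(rater_b)
    expected = sum(freq_a[c] * freq_b[c] for c in freq_a) / n**2
    return (observed - expected) / (1 - expected)

# Hypothetical pass/fail judgments of ten candidates by two examiners
examiner_1 = ["pass", "pass", "fail", "pass", "fail", "pass", "pass", "fail", "pass", "pass"]
examiner_2 = ["pass", "fail", "fail", "pass", "fail", "pass", "pass", "pass", "pass", "pass"]
print(round(cohens_kappa(examiner_1, examiner_2), 2))  # 0.47 for these data
```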
Peer reviewed
Tavares, Walter; Kinnear, Benjamin; Schumacher, Daniel J.; Forte, Milena – Advances in Health Sciences Education, 2023
In this perspective, the authors critically examine "rater training" as it has been conceptualized and used in medical education. By "rater training," they mean the educational events intended to "improve" rater performance and contributions during assessment events. Historically, rater training programs have focused…
Descriptors: Medical Education, Interrater Reliability, Evaluation Methods, Training
Peer reviewed
Tavares, Walter; Brydges, Ryan; Myre, Paul; Prpic, Jason; Turner, Linda; Yelle, Richard; Huiskamp, Maud – Advances in Health Sciences Education, 2018
Assessment of clinical competence is complex and inference-based. Trustworthy and defensible assessment processes must have favourable evidence of validity, particularly where decisions are considered high stakes. We aimed to organize, collect and interpret validity evidence for a high-stakes, simulation-based assessment strategy for certifying…
Descriptors: Competence, Simulation, Allied Health Personnel, Certification
Peer reviewed
Gingerich, Andrea; Ramlo, Susan E.; van der Vleuten, Cees P. M.; Eva, Kevin W.; Regehr, Glenn – Advances in Health Sciences Education, 2017
Whenever multiple observers provide ratings, even of the same performance, inter-rater variation is prevalent. The resulting "idiosyncratic rater variance" is considered to be unusable error of measurement in psychometric models and is a threat to the defensibility of our assessments. Prior studies of inter-rater variation in clinical…
Descriptors: Interrater Reliability, Error of Measurement, Psychometrics, Q Methodology
Peer reviewed
Oudkerk Pool, Andrea; Govaerts, Marjan J. B.; Jaarsma, Debbie A. D. C.; Driessen, Erik W. – Advances in Health Sciences Education, 2018
While portfolios are increasingly used to assess competence, the validity of such portfolio-based assessments has hitherto remained unconfirmed. The purpose of the present research is therefore to further our understanding of how assessors form judgments when interpreting the complex data included in a competency-based portfolio. Eighteen…
Descriptors: Undergraduate Students, Medical Students, Medical Education, Competency Based Education
Peer reviewed
Bajwa, Nadia M.; Yudkowsky, Rachel; Belli, Dominique; Vu, Nu Viet; Park, Yoon Soo – Advances in Health Sciences Education, 2017
The purpose of this study was to provide validity and feasibility evidence in measuring professionalism using the Professionalism Mini-Evaluation Exercise (P-MEX) scores as part of a residency admissions process. In 2012 and 2013, three standardized-patient-based P-MEX encounters were administered to applicants invited for an interview at the…
Descriptors: Graduate Medical Education, College Admission, College Entrance Examinations, Validity
Peer reviewed
Roberts, William L.; Boulet, John; Sandella, Jeanne – Advances in Health Sciences Education, 2017
When the safety of the public is at stake, it is particularly relevant for licensing and credentialing exam agencies to use defensible standard setting methods to categorize candidates into competence categories (e.g., pass/fail). The aim of this study was to gather evidence to support change to the Comprehensive Osteopathic Medical Licensing-USA…
Descriptors: Standard Setting, Comparative Analysis, Clinical Experience, Skill Analysis
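The entry above concerns defensible standard setting for a licensing examination. As a generic, hedged illustration of one standard-setting approach (the borderline-group method, not necessarily the method used for the examination above), the sketch below sets the cut score at the median test score of candidates judged "borderline"; all names and scores are hypothetical.

```python
# Illustrative sketch of a borderline-group standard-setting method:
# judges classify candidates as clear-fail / borderline / clear-pass,
# and the cut score is the median exam score of the borderline group.
from statistics import median

# Hypothetical (candidate_score, judge_classification) pairs
judgments = [
    (61, "clear-fail"), (68, "borderline"), (70, "borderline"),
    (73, "borderline"), (75, "clear-pass"), (82, "clear-pass"),
    (66, "borderline"), (88, "clear-pass"), (58, "clear-fail"),
]

borderline_scores = [score for score, label in judgments if label == "borderline"]
cut_score = median(borderline_scores)  # sorted: 66, 68, 70, 73 -> median 69.0
print(f"Cut score: {cut_score}")
```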
Peer reviewed
Park, Yoon Soo; Hyderi, Abbas; Bordage, Georges; Xing, Kuan; Yudkowsky, Rachel – Advances in Health Sciences Education, 2016
Recent changes to the patient note (PN) format of the United States Medical Licensing Examination have challenged medical schools to improve the instruction and assessment of students taking the Step-2 clinical skills examination. The purpose of this study was to gather validity evidence regarding response process and internal structure, focusing…
Descriptors: Interrater Reliability, Generalizability Theory, Licensing Examinations (Professions), Physicians
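The entry above lists generalizability theory among its descriptors. As a minimal sketch of that framework, assuming a fully crossed persons-by-raters design with one holistic score per cell (the function name and example ratings are hypothetical, not data or analyses from the study above), the code below estimates variance components from mean squares and forms a relative G coefficient.

```python
# Illustrative sketch: variance components and a relative G coefficient
# for a fully crossed persons x raters design with one score per cell.
import numpy as np

def relative_g_coefficient(scores, n_raters_decision=None):
    """scores: (n_persons, n_raters) matrix of ratings."""
    n_p, n_r = scores.shape
    if n_raters_decision is None:
        n_raters_decision = n_r
    grand = scores.mean()
    person_means = scores.mean(axis=1)
    rater_means = scores.mean(axis=0)

    # Sums of squares for persons, raters, and the residual (p x r + error)
    ss_p = n_r * ((person_means - grand) ** 2).sum()
    ss_r = n_p * ((rater_means - grand) ** 2).sum()
    ss_total = ((scores - grand) ** 2).sum()
    ss_pr = ss_total - ss_p - ss_r

    ms_p = ss_p / (n_p - 1)
    ms_pr = ss_pr / ((n_p - 1) * (n_r - 1))

    var_p = max((ms_p - ms_pr) / n_r, 0.0)  # universe-score (person) variance
    var_pr = ms_pr                          # relative error variance per rater
    return var_p / (var_p + var_pr / n_raters_decision)

# Hypothetical ratings: 5 examinees scored by 3 raters
ratings = np.array([[7, 6, 7],
                    [5, 5, 4],
                    [8, 7, 8],
                    [6, 6, 5],
                    [9, 8, 9]])
print(round(relative_g_coefficient(ratings), 2))
```

Passing a different `n_raters_decision` projects the coefficient for a decision study with more (or fewer) raters than were sampled.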
Peer reviewed
Vink, Sylvia; van Tartwijk, Jan; Verloop, Nico; Gosselink, Manon; Driessen, Erik; Bolk, Jan – Advances in Health Sciences Education, 2016
To determine the content of integrated curricula, clinical concepts and the underlying basic science concepts need to be made explicit. Preconstructed concept maps are recommended for this purpose. They are mainly constructed by experts. However, concept maps constructed by residents are hypothesized to be less complex, to reveal more tacit basic…
Descriptors: Concept Mapping, Scientific Concepts, Integrated Curriculum, Articulation (Education)
Peer reviewed
Wood, Timothy J. – Advances in Health Sciences Education, 2014
Medical education relies heavily on assessment formats that require raters to assess the competence and skills of learners. Unfortunately, there are often inconsistencies and variability in the scores raters assign. To ensure the scores from these assessment tools have validity, it is important to understand the underlying cognitive processes that…
Descriptors: Medical Education, Interrater Reliability, Cognitive Processes, Validity
Peer reviewed
Hatala, Rose; Cook, David A.; Brydges, Ryan; Hawkins, Richard – Advances in Health Sciences Education, 2015
In order to construct and evaluate the validity argument for the Objective Structured Assessment of Technical Skills (OSATS), based on Kane's framework, we conducted a systematic review. We searched MEDLINE, EMBASE, CINAHL, PsycINFO, ERIC, Web of Science, Scopus, and selected reference lists through February 2013. Working in duplicate, we selected…
Descriptors: Measures (Individuals), Test Validity, Surgery, Skills
Peer reviewed
de Lima, Alberto Alves; Conde, Diego; Costabel, Juan; Corso, Juan; Van der Vleuten, Cees – Advances in Health Sciences Education, 2013
Reliability estimations of workplace-based assessments with the mini-CEX are typically based on real-life data. Estimations are based on the assumption of local independence: the object of the measurement should not be influenced by the measurement itself and samples should be completely independent. This is difficult to achieve. Furthermore, the…
Descriptors: Test Reliability, Graduate Students, Medical Students, Vocational Evaluation
Peer reviewed
Sherbino, Jonathan; Kulasegaram, Kulamakan; Worster, Andrew; Norman, Geoffrey R. – Advances in Health Sciences Education, 2013
The purpose of this study was to determine the reliability of a computer-based encounter card (EC) to assess medical students during an emergency medicine rotation. From April 2011 to March 2012, multiple physicians assessed an entire medical school class during their emergency medicine rotation using the CanMEDS framework. At the end of an…
Descriptors: Foreign Countries, Medical Education, Medical Students, Medical Services
Peer reviewed
Dankbaar, Mary E. W.; Alsma, Jelmer; Jansen, Els E. H.; van Merrienboer, Jeroen J. G.; van Saase, Jan L. C. M.; Schuit, Stephanie C. E. – Advances in Health Sciences Education, 2016
Simulation games are becoming increasingly popular in education, but more insight into their critical design features is needed. This study investigated the effects of the fidelity of open patient cases, used as an adjunct to an instructional e-module, on students' cognitive skills and motivation. We set up a three-group randomized post-test-only design: a…
Descriptors: Experimental Groups, Thinking Skills, Computer Games, Motivation
Peer reviewed
Todhunter, Sarah; Cruess, Sylvia R.; Cruess, Richard L.; Young, Meredith; Steinert, Yvonne – Advances in Health Sciences Education, 2011
One of the impediments to teaching professionalism is unprofessional behavior amongst clinical teachers. No method of reliably assessing the professional behavior of clinical teachers has yet been reported. The aim of this project was to develop and pilot such a tool. Thirty-four desirable professional behaviors in clinical teachers were…
Descriptors: Medical Students, Interrater Reliability, Factor Analysis, Faculty