Showing all 11 results
Peer reviewed
Homer, Matt – Advances in Health Sciences Education, 2024
Quantitative measures of systematic differences in OSCE scoring across examiners (often termed examiner stringency) can threaten the validity of examination outcomes. Such effects are usually conceptualised and operationalised based solely on checklist/domain scores in a station, and global grades are not often used in this type of analysis. In…
Descriptors: Examiners, Scoring, Validity, Cutting Scores
Peer reviewed
Homer, Matt – Advances in Health Sciences Education, 2021
Variation in examiner stringency is an ongoing problem in many performance settings such as in OSCEs, and usually is conceptualised and measured based on scores/grades examiners award. Under borderline regression, the standard within a station is set using checklist/domain scores and global grades acting in combination. This complexity requires a…
Descriptors: Examiners, Experimenter Characteristics, Cutting Scores, Performance Based Assessment
Peer reviewed
Wood, Timothy J.; Pugh, Debra; Touchie, Claire; Chan, James; Humphrey-Murto, Susan – Advances in Health Sciences Education, 2018
There is an increasing focus on factors that influence the variability of rater-based judgments. First impressions are one such factor. First impressions are judgments about people that are made quickly and are based on little information. Under some circumstances, these judgments can be predictive of subsequent decisions. A concern for both…
Descriptors: Rating Scales, Attitudes, Physicians, Examiners
Peer reviewed
Sebok, Stefanie S.; Roy, Marguerite; Klinger, Don A.; De Champlain, André F. – Advances in Health Sciences Education, 2015
Examiner effects and content specificity are two well-known sources of construct-irrelevant variance that present great challenges in performance-based assessments. National medical organizations that are responsible for large-scale performance-based assessments experience an additional challenge as they are responsible for administering…
Descriptors: Performance Based Assessment, Site Selection, Medical Students, Physicians
Peer reviewed
Ruesseler, Miriam; Weinlich, Michael; Byhahn, Christian; Muller, Michael P.; Junger, Jana; Marzi, Ingo; Walcher, Felix – Advances in Health Sciences Education, 2010
In an emergency, fast and structured patient management is crucial to the patient's outcome. The necessary competencies should be acquired and assessed during medical education. The objective structured clinical examination (OSCE) is a valid and reliable assessment format for evaluating practical skills. However, traditional OSCE stations examine…
Descriptors: Medical Services, Medical Education, Validity, Patients
Peer reviewed
Jefferies, Ann; Simmons, Brian; Ng, Eugene; Skidmore, Martin – Advances in Health Sciences Education, 2011
Competency based medical education involves assessing physicians-in-training in multiple roles. Training programs are challenged by the need to introduce appropriate yet feasible assessment methods. We therefore examined the utility of a structured oral examination (SOE) in the assessment of the 7 CanMEDS roles (Medical Expert, Communicator,…
Descriptors: Medical Education, Competence, Medical Students, Student Evaluation
Peer reviewed
Harasym, Peter H.; Woloschuk, Wayne; Cunning, Leslie – Advances in Health Sciences Education, 2008
Physician-patient communication is a clinical skill that can be learned and has a positive impact on patient satisfaction and health outcomes. A concerted effort at all medical schools is now directed at teaching and evaluating this core skill. Student communication skills are often assessed by an Objective Structured Clinical Examination (OSCE).…
Descriptors: Medical Schools, Family Practice (Medicine), Examiners, Error of Measurement
Peer reviewed
Wallach, P. M.; Crespo, L. M.; Holtzman, K. Z.; Galbraith, R. M.; Swanson, D. B. – Advances in Health Sciences Education, 2006
Purpose: In conjunction with curricular changes, a process to develop integrated examinations was implemented. Pre-established guidelines were provided favoring vignettes, clinically relevant material, and application of knowledge rather than simple recall. Questions were read aloud in a committee including all course directors, and a reviewer…
Descriptors: Test Items, Rating Scales, Examiners, Guidelines
Peer reviewed
Wood, Timothy J.; Humphrey-Murto, Susan M.; Norman, Geoffrey R. – Advances in Health Sciences Education, 2006
When setting standards, administrators of small-scale OSCEs often face several challenges, including a lack of resources, a lack of available expertise in statistics, and difficulty in recruiting judges. The Modified Borderline-Group Method is a standard setting procedure that compensates for these challenges by using physician examiners and is…
Descriptors: Intervals, Standard Setting (Scoring), Measures (Individuals), Examiners
Peer reviewed
Basco, William T., Jr.; Lancaster, Carol J.; Gilbert, Gregory E.; Carey, Maura E.; Blue, Amy V. – Advances in Health Sciences Education, 2008
Background and purpose: Data supporting the predictive validity of the medical school admission interview are mixed. This study tested the hypothesis that the admission interview is predictive of interpersonal interactions between medical students and standardized patients. Method: We determined correlations between admission interview scores and…
Descriptors: Check Lists, Medical Education, Medical Students, Medical Schools
Peer reviewed
Reiter, Harold I.; Rosenfeld, Jack; Nandagopal, Kiruthiga; Eva, Kevin W. – Advances in Health Sciences Education, 2004
Context: Various research studies have examined the question of whether expert or non-expert raters, faculty or students, evaluators or standardized patients, give more reliable and valid summative assessments of performance on Objective Structured Clinical Examinations (OSCEs). Less studied has been the question of whether or not non-faculty…
Descriptors: Evidence, Video Technology, Feedback (Response), Evaluators