Interobserver variability among faculty in evaluations of residents' clinical skills

Joseph LaMantia, William Rennie, Donald A. Risucci, Rita Cydulka, Linda Spillane, Louis Graff, John Becher, Kurt Kleinschmidt

Research output: Contribution to journal › Article

13 Citations (Scopus)

Abstract

Objective: To describe interobserver variability among emergency medicine (EM) faculty when using global assessment (GA) rating scales and performance-based criterion (PBC) checklists to evaluate EM residents' clinical skills during standardized patient (SP) encounters. Methods: Six EM residents were videotaped during encounters with SPs and subsequently evaluated by 38 EM faculty at four EM residency sites. There were two encounters in which a single SP presented with headache, two in which a second SP presented with chest pain, and two in which a third SP presented with abdominal pain, resulting in two parallel sets of three. Faculty used GA rating scales to evaluate history taking, physical examination, and interpersonal skills for the initial set of three cases. Each encounter in the second set was evaluated with complaint-specific PBC checklists developed by SAEM's National Consensus Group on Clinical Skills Task Force. Results: Standard deviations, computed for each score distribution, were generally similar across evaluation methods. None of the distributions deviated significantly from that of a Gaussian distribution, as indicated by the Kolmogorov-Smirnov goodness-of-fit test. On PBC checklists, 80% agreement among faculty observers was found for 74% of chest pain, 45% of headache, and 30% of abdominal pain items. Conclusions: When EM faculty evaluate clinical performance of EM residents during videotaped SP encounters, interobserver variabilities are similar, whether a PBC checklist or a GA rating scale is used.
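The study's key agreement statistic (the share of checklist items on which at least 80% of faculty observers gave the same rating) can be illustrated with a short sketch. This is not the authors' code; the data below are invented, and the function names are hypothetical, but the computation matches the per-item percent-agreement measure described in the abstract for binary PBC checklist ratings.

```python
# Illustrative sketch (not from the paper): per-item interobserver agreement
# on binary checklist ratings. Rows = faculty observers, columns = checklist
# items (1 = behavior observed, 0 = not observed).

def item_agreement(ratings):
    """For each item, return the fraction of observers giving the
    majority (modal) rating."""
    n_obs = len(ratings)
    n_items = len(ratings[0])
    agreement = []
    for j in range(n_items):
        ones = sum(row[j] for row in ratings)
        # Agreement = share of observers on the majority side.
        agreement.append(max(ones, n_obs - ones) / n_obs)
    return agreement

def fraction_items_meeting_threshold(ratings, threshold=0.80):
    """Fraction of items on which >= `threshold` of observers agree
    (the 80% criterion reported in the Results)."""
    agr = item_agreement(ratings)
    return sum(a >= threshold for a in agr) / len(agr)

# Toy example: 5 observers rating 4 checklist items.
ratings = [
    [1, 1, 0, 1],
    [1, 0, 0, 1],
    [1, 1, 0, 0],
    [1, 1, 1, 1],
    [1, 0, 0, 1],
]
print(item_agreement(ratings))                  # [1.0, 0.6, 0.8, 0.8]
print(fraction_items_meeting_threshold(ratings))  # 0.75
```

Under this measure, the paper's finding that 74% of chest pain items but only 30% of abdominal pain items reached 80% agreement corresponds to the value returned by the second function, computed separately per complaint-specific checklist.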

Original language: English (US)
Pages (from-to): 38-44
Number of pages: 7
Journal: Academic Emergency Medicine
Volume: 6
Issue number: 1
State: Published - Jan 1999


Keywords

  • Clinical skills
  • Emergency medicine
  • Faculty
  • Interobserver variability
  • Performance assessment
  • Rating scales
  • Residents
  • Standardized patients

ASJC Scopus subject areas

  • Emergency Medicine

Cite this

LaMantia, J., Rennie, W., Risucci, D. A., Cydulka, R., Spillane, L., Graff, L., ... Kleinschmidt, K. (1999). Interobserver variability among faculty in evaluations of residents' clinical skills. Academic Emergency Medicine, 6(1), 38-44.

@article{c9efa89c40d2450fab9c7d2d8b9bc4f5,
title = "Interobserver variability among faculty in evaluations of residents' clinical skills",
abstract = "Objective: To describe interobserver variability among emergency medicine (EM) faculty when using global assessment (GA) rating scales and performance-based criterion (PBC) checklists to evaluate EM residents' clinical skills during standardized patient (SP) encounters. Methods: Six EM residents were videotaped during encounters with SPs and subsequently evaluated by 38 EM faculty at four EM residency sites. There were two encounters in which a single SP presented with headache, two in which a second SP presented with chest pain, and two in which a third SP presented with abdominal pain, resulting in two parallel sets of three. Faculty used GA rating scales to evaluate history taking, physical examination, and interpersonal skills for the initial set of three cases. Each encounter in the second set was evaluated with complaint-specific PBC checklists developed by SAEM's National Consensus Group on Clinical Skills Task Force. Results: Standard deviations, computed for each score distribution, were generally similar across evaluation methods. None of the distributions deviated significantly from that of a Gaussian distribution, as indicated by the Kolmogorov-Smirnov goodness-of-fit test. On PBC checklists, 80\% agreement among faculty observers was found for 74\% of chest pain, 45\% of headache, and 30\% of abdominal pain items. Conclusions: When EM faculty evaluate clinical performance of EM residents during videotaped SP encounters, interobserver variabilities are similar, whether a PBC checklist or a GA rating scale is used.",
keywords = "Clinical skills, Emergency medicine, Faculty, Interobserver variability, Performance assessment, Rating scales, Residents, Standardized patients",
author = "Joseph LaMantia and William Rennie and Risucci, {Donald A.} and Rita Cydulka and Linda Spillane and Louis Graff and John Becher and Kurt Kleinschmidt",
year = "1999",
month = jan,
language = "English (US)",
volume = "6",
pages = "38--44",
journal = "Academic Emergency Medicine",
issn = "1069-6563",
publisher = "Wiley-Blackwell",
number = "1",
}

TY - JOUR

T1 - Interobserver variability among faculty in evaluations of residents' clinical skills

AU - LaMantia, Joseph

AU - Rennie, William

AU - Risucci, Donald A.

AU - Cydulka, Rita

AU - Spillane, Linda

AU - Graff, Louis

AU - Becher, John

AU - Kleinschmidt, Kurt

PY - 1999/1

Y1 - 1999/1

N2 - Objective: To describe interobserver variability among emergency medicine (EM) faculty when using global assessment (GA) rating scales and performance-based criterion (PBC) checklists to evaluate EM residents' clinical skills during standardized patient (SP) encounters. Methods: Six EM residents were videotaped during encounters with SPs and subsequently evaluated by 38 EM faculty at four EM residency sites. There were two encounters in which a single SP presented with headache, two in which a second SP presented with chest pain, and two in which a third SP presented with abdominal pain, resulting in two parallel sets of three. Faculty used GA rating scales to evaluate history taking, physical examination, and interpersonal skills for the initial set of three cases. Each encounter in the second set was evaluated with complaint-specific PBC checklists developed by SAEM's National Consensus Group on Clinical Skills Task Force. Results: Standard deviations, computed for each score distribution, were generally similar across evaluation methods. None of the distributions deviated significantly from that of a Gaussian distribution, as indicated by the Kolmogorov-Smirnov goodness-of-fit test. On PBC checklists, 80% agreement among faculty observers was found for 74% of chest pain, 45% of headache, and 30% of abdominal pain items. Conclusions: When EM faculty evaluate clinical performance of EM residents during videotaped SP encounters, interobserver variabilities are similar, whether a PBC checklist or a GA rating scale is used.

AB - Objective: To describe interobserver variability among emergency medicine (EM) faculty when using global assessment (GA) rating scales and performance-based criterion (PBC) checklists to evaluate EM residents' clinical skills during standardized patient (SP) encounters. Methods: Six EM residents were videotaped during encounters with SPs and subsequently evaluated by 38 EM faculty at four EM residency sites. There were two encounters in which a single SP presented with headache, two in which a second SP presented with chest pain, and two in which a third SP presented with abdominal pain, resulting in two parallel sets of three. Faculty used GA rating scales to evaluate history taking, physical examination, and interpersonal skills for the initial set of three cases. Each encounter in the second set was evaluated with complaint-specific PBC checklists developed by SAEM's National Consensus Group on Clinical Skills Task Force. Results: Standard deviations, computed for each score distribution, were generally similar across evaluation methods. None of the distributions deviated significantly from that of a Gaussian distribution, as indicated by the Kolmogorov-Smirnov goodness-of-fit test. On PBC checklists, 80% agreement among faculty observers was found for 74% of chest pain, 45% of headache, and 30% of abdominal pain items. Conclusions: When EM faculty evaluate clinical performance of EM residents during videotaped SP encounters, interobserver variabilities are similar, whether a PBC checklist or a GA rating scale is used.

KW - Clinical skills

KW - Emergency medicine

KW - Faculty

KW - Interobserver variability

KW - Performance assessment

KW - Rating scales

KW - Residents

KW - Standardized patients

UR - http://www.scopus.com/inward/record.url?scp=0032893282&partnerID=8YFLogxK

UR - http://www.scopus.com/inward/citedby.url?scp=0032893282&partnerID=8YFLogxK

M3 - Article

C2 - 9928975

AN - SCOPUS:0032893282

VL - 6

SP - 38

EP - 44

JO - Academic Emergency Medicine

JF - Academic Emergency Medicine

SN - 1069-6563

IS - 1

ER -