Measuring agreement in medical informatics reliability studies

George Hripcsak, Daniel F. Heitjan

Research output: Contribution to journal › Article

137 Citations (Scopus)

Abstract

Agreement measures are used frequently in reliability studies that involve categorical data. Simple measures like observed agreement and specific agreement can reveal a good deal about the sample. Chance-corrected agreement in the form of the kappa statistic is used frequently based on its correspondence to an intraclass correlation coefficient and the ease of calculating it, but its magnitude depends on the tasks and categories in the experiment. It is helpful to separate the components of disagreement when the goal is to improve the reliability of an instrument or of the raters. Approaches based on modeling the decision making process can be helpful here, including tetrachoric correlation, polychoric correlation, latent trait models, and latent class models. Decision making models can also be used to better understand the behavior of different agreement metrics. For example, if the observed prevalence of responses in one of two available categories is low, then there is insufficient information in the sample to judge raters' ability to discriminate cases, and kappa may underestimate the true agreement and observed agreement may overestimate it.
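
As a concrete illustration of the prevalence effect described in the abstract, the sketch below (not from the article; the 2x2 counts are hypothetical) computes observed agreement p_o, chance-expected agreement p_e, and Cohen's kappa = (p_o - p_e) / (1 - p_e) for two rating tables that have the same observed agreement but very different category prevalences.

# Minimal Python sketch; illustrative only, with hypothetical counts.

def cohens_kappa(table):
    """Return observed agreement, chance-expected agreement, and Cohen's kappa
    for a square contingency table, where table[i][j] counts the cases that
    rater A placed in category i and rater B placed in category j."""
    k = len(table)
    n = sum(sum(row) for row in table)
    p_obs = sum(table[i][i] for i in range(k)) / n
    row_marg = [sum(row) / n for row in table]
    col_marg = [sum(table[i][j] for i in range(k)) / n for j in range(k)]
    p_exp = sum(r * c for r, c in zip(row_marg, col_marg))
    return p_obs, p_exp, (p_obs - p_exp) / (1 - p_exp)

# Both raters use the two categories about equally often.
balanced = [[40, 10],
            [10, 40]]

# One category is rare: the same number of disagreements and the same
# observed agreement, but chance-expected agreement is much higher.
rare = [[2, 10],
        [10, 78]]

for name, tbl in (("balanced", balanced), ("rare category", rare)):
    p_obs, p_exp, kappa = cohens_kappa(tbl)
    print(f"{name}: observed={p_obs:.2f}, expected={p_exp:.2f}, kappa={kappa:.2f}")

Running this gives observed agreement 0.80 for both tables, but kappa is 0.60 for the balanced table and about 0.05 when one category is rare, which matches the low-prevalence behavior the abstract describes.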

Original language: English (US)
Pages (from-to): 99-110
Number of pages: 12
Journal: Journal of Biomedical Informatics
Volume: 35
Issue number: 2
DOI: 10.1016/S1532-0464(02)00500-2
State: Published - Nov 28 2002

Keywords

  • Agreement
  • Kappa
  • Latent structure analysis
  • Prevalence
  • Reliability
  • Tetrachoric correlation

ASJC Scopus subject areas

  • Computer Science Applications
  • Health Informatics
  • Computer Science (miscellaneous)

Cite this

Measuring agreement in medical informatics reliability studies. / Hripcsak, George; Heitjan, Daniel F.

In: Journal of Biomedical Informatics, Vol. 35, No. 2, 28.11.2002, p. 99-110.

Research output: Contribution to journal › Article

@article{aca4501e1892402d97efa3299cca5654,
title = "Measuring agreement in medical informatics reliability studies",
abstract = "Agreement measures are used frequently in reliability studies that involve categorical data. Simple measures like observed agreement and specific agreement can reveal a good deal about the sample. Chance-corrected agreement in the form of the kappa statistic is used frequently based on its correspondence to an intraclass correlation coefficient and the ease of calculating it, but its magnitude depends on the tasks and categories in the experiment. It is helpful to separate the components of disagreement when the goal is to improve the reliability of an instrument or of the raters. Approaches based on modeling the decision making process can be helpful here, including tetrachoric correlation, polychoric correlation, latent trait models, and latent class models. Decision making models can also be used to better understand the behavior of different agreement metrics. For example, if the observed prevalence of responses in one of two available categories is low, then there is insufficient information in the sample to judge raters' ability to discriminate cases, and kappa may underestimate the true agreement and observed agreement may overestimate it.",
keywords = "Agreement, Kappa, Latent structure analysis, Prevalence, Reliability, Tetrachoric correlation",
author = "George Hripcsak and Heitjan, {Daniel F.}",
year = "2002",
month = "11",
day = "28",
doi = "10.1016/S1532-0464(02)00500-2",
language = "English (US)",
volume = "35",
pages = "99--110",
journal = "Journal of Biomedical Informatics",
issn = "1532-0464",
publisher = "Academic Press Inc.",
number = "2",
}

TY - JOUR

T1 - Measuring agreement in medical informatics reliability studies

AU - Hripcsak, George

AU - Heitjan, Daniel F.

PY - 2002/11/28

Y1 - 2002/11/28

AB - Agreement measures are used frequently in reliability studies that involve categorical data. Simple measures like observed agreement and specific agreement can reveal a good deal about the sample. Chance-corrected agreement in the form of the kappa statistic is used frequently based on its correspondence to an intraclass correlation coefficient and the ease of calculating it, but its magnitude depends on the tasks and categories in the experiment. It is helpful to separate the components of disagreement when the goal is to improve the reliability of an instrument or of the raters. Approaches based on modeling the decision making process can be helpful here, including tetrachoric correlation, polychoric correlation, latent trait models, and latent class models. Decision making models can also be used to better understand the behavior of different agreement metrics. For example, if the observed prevalence of responses in one of two available categories is low, then there is insufficient information in the sample to judge raters' ability to discriminate cases, and kappa may underestimate the true agreement and observed agreement may overestimate it.

KW - Agreement

KW - Kappa

KW - Latent structure analysis

KW - Prevalence

KW - Reliability

KW - Tetrachoric correlation

UR - http://www.scopus.com/inward/record.url?scp=0036433101&partnerID=8YFLogxK

UR - http://www.scopus.com/inward/citedby.url?scp=0036433101&partnerID=8YFLogxK

U2 - 10.1016/S1532-0464(02)00500-2

DO - 10.1016/S1532-0464(02)00500-2

M3 - Article

C2 - 12474424

AN - SCOPUS:0036433101

VL - 35

SP - 99

EP - 110

JO - Journal of Biomedical Informatics

JF - Journal of Biomedical Informatics

SN - 1532-0464

IS - 2

ER -