Interrater reliability of the bedside shivering assessment scale

DaiWai M. Olson, Jana L. Grissom, Rachel A. Williamson, Stacey N. Bennett, Steven T. Bellows, Michael L. James

Research output: Contribution to journal › Article

12 Citations (Scopus)

Abstract

Background: Since its early development, the Bedside Shivering Assessment Scale (BSAS) has had only initial psychometric testing. Before this instrument is incorporated into routine practice, its interrater reliability should be explored in a diverse group of practitioners. Methods: This prospective nonrandomized study used a panel of 5 observers who completed 100 paired assessments. Observers independently scored patients for shivering by using the BSAS. Kappa statistics were determined by using SAS version 9.4 with BSAS scores treated as ordinal data. Results: A weighted kappa value of 0.48 from 100 paired observations of 22 patients indicates moderate agreement of the BSAS scores. Most of the BSAS scores were 0 or 1; dichotomizing shivering as little or no shivering versus significant shivering resulted in a kappa of 0.66 (substantial agreement). No relationship was found between timing of assessment or the role of the practitioner and the likelihood of both observers assigning the same BSAS score. Conclusion: The BSAS has adequate interrater reliability to be considered for use among a diverse group of practitioners.
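The analysis described in the abstract rests on treating the 0-3 BSAS scores as ordinal data, computing a weighted kappa across paired observations, and then repeating the calculation after dichotomizing the scale. A minimal sketch of that kind of computation is shown below in Python with scikit-learn (the study itself used SAS); the observer scores are hypothetical, and the linear weighting and the 0-1 versus 2-3 cut point are assumptions for illustration, not details taken from the article.

```python
# Sketch only: hypothetical paired BSAS ratings, not study data.
from sklearn.metrics import cohen_kappa_score

# Hypothetical BSAS scores (0-3, ordinal) from two independent observers
# rating the same patients at the same time.
observer_a = [0, 0, 1, 1, 2, 0, 3, 1, 0, 2]
observer_b = [0, 1, 1, 1, 2, 0, 2, 1, 0, 3]

# Weighted kappa treats the scale as ordinal, so a 0-vs-3 disagreement
# counts more heavily than a 0-vs-1 disagreement. Linear weights are an
# assumption here; the abstract does not state the weighting scheme.
weighted_kappa = cohen_kappa_score(observer_a, observer_b, weights="linear")

# Dichotomized agreement: little or no shivering (assumed BSAS 0-1)
# versus significant shivering (assumed BSAS 2-3), scored with plain kappa.
dich_a = [int(score >= 2) for score in observer_a]
dich_b = [int(score >= 2) for score in observer_b]
dichotomized_kappa = cohen_kappa_score(dich_a, dich_b)

print(f"Weighted kappa (ordinal 0-3): {weighted_kappa:.2f}")
print(f"Unweighted kappa (dichotomized): {dichotomized_kappa:.2f}")
```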

Original language: English (US)
Pages (from-to): 70-74
Number of pages: 5
Journal: American Journal of Critical Care
ISSN: 1062-3264
Publisher: American Association of Critical Care Nurses
Volume: 22
Issue number: 1
DOI: 10.4037/ajcc2013907
PubMed ID: 23283091
State: Published - 2013

Fingerprint

  • Shivering
  • Psychometrics

ASJC Scopus subject areas

  • Critical Care

Cite this

Olson, D. M., Grissom, J. L., Williamson, R. A., Bennett, S. N., Bellows, S. T., & James, M. L. (2013). Interrater reliability of the bedside shivering assessment scale. American Journal of Critical Care, 22(1), 70-74. https://doi.org/10.4037/ajcc2013907
