Automatic Surgical Skill Rating Using Stylistic Behavior Components

Robert V Rege, Robert Rege, Ann Majewicz Fey

Research output: Contribution to journal › Article

Abstract

A gold standard in surgical skill rating and evaluation is direct observation, in which a group of experts rates trainees on a Likert scale while observing their performance during a surgical task. This method is time- and resource-intensive. To alleviate this burden, many studies have focused on automatic surgical skill assessment; however, the metrics suggested in the literature for automatic evaluation do not capture the stylistic behavior of the user. In addition, very few studies focus on automatic rating of surgical skills based on available Likert scales. In a previous study, we presented a stylistic behavior lexicon for surgical skill. In this study, we evaluate the lexicon's ability to automatically rate robotic surgical skill based on the six domains of the Global Evaluative Assessment of Robotic Skills (GEARS). Fourteen subjects of different skill levels performed two surgical tasks on a da Vinci surgical simulator. Several measurements were acquired as the subjects performed the tasks, including limb (hand and arm) kinematics and joint (shoulder, elbow, wrist) positions. Posture videos of the subjects performing the task, as well as videos of the task itself, were viewed and rated by faculty experts on the six GEARS domains. The paired videos were also rated via crowdsourcing based on our stylistic behavior lexicon. Two separate regression learner models, one using the sensor measurements and the other using the crowd ratings of our proposed lexicon, were trained for each GEARS domain. The results indicate that the scores predicted by both models are in agreement with the gold-standard faculty ratings.
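As an illustration of the modeling setup described above, the sketch below trains one regression learner per GEARS domain from a subject-by-feature matrix and produces leave-one-subject-out predictions that can be compared against the faculty ratings. This is a minimal sketch, not the authors' implementation: the scikit-learn random-forest regressor, the feature layout, and the function names are assumptions made for illustration only.

    from sklearn.ensemble import RandomForestRegressor
    from sklearn.model_selection import LeaveOneOut, cross_val_predict

    # The six standard GEARS domains (identifiers chosen for this sketch).
    GEARS_DOMAINS = ["depth_perception", "bimanual_dexterity", "efficiency",
                     "force_sensitivity", "autonomy", "robotic_control"]

    def train_domain_models(X, faculty_scores):
        """Fit one regressor per GEARS domain and return cross-validated predictions.

        X              : (n_subjects, n_features) feature matrix, e.g. limb
                         kinematics / joint positions, or crowd ratings of the
                         stylistic behavior lexicon.
        faculty_scores : (n_subjects, 6) faculty GEARS ratings (gold standard).
        """
        models, predictions = {}, {}
        loo = LeaveOneOut()
        for d, domain in enumerate(GEARS_DOMAINS):
            reg = RandomForestRegressor(n_estimators=200, random_state=0)
            # Leave-one-subject-out predictions for comparison with faculty scores.
            predictions[domain] = cross_val_predict(reg, X, faculty_scores[:, d], cv=loo)
            models[domain] = reg.fit(X, faculty_scores[:, d])
        return models, predictions

Agreement with the faculty ratings could then be quantified, for example, with a per-domain Spearman correlation between the cross-validated predictions and the corresponding faculty scores.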

Fingerprint

Robotics
Crowdsourcing
Elbow Joint
Shoulder Joint
Wrist
Posture
Biomechanical Phenomena
Kinematics
Arm
Extremities
Hand
Simulators
Observation
Sensors

ASJC Scopus subject areas

  • Signal Processing
  • Biomedical Engineering
  • Computer Vision and Pattern Recognition
  • Health Informatics

Cite this

@article{ce1897a93ef340d1991b0e27d0e7bbc8,
title = "Automatic Surgical Skill Rating Using Stylistic Behavior Components",
abstract = "A gold standard in surgical skill rating and evaluation is direct observation, which a group of experts rate trainees based on a likert scale, by observing their performance during a surgical task. This method is time and resource intensive. To alleviate this burden, many studies have focused on automatic surgical skill assessment; however, the metrics suggested by the literature for automatic evaluation do not capture the stylistic behavior of the user. In addition very few studies focus on automatic rating of surgical skills based on available likert scales. In a previous study we presented a stylistic behavior lexicon for surgical skill. In this study we evaluate the lexicon's ability to automatically rate robotic surgical skill, based on the 6 domains in the Global Evaluative Assessment of Robotic Skills (GEARS). 14 subjects of different skill levels performed two surgical tasks on da Vinci surgical simulator. Different measurements were acquired as subjects performed the tasks, including limb (hand and arm) kinematics and joint (shoulder, elbow, wrist) positions. Posture videos of the subjects performing the task, as well as videos of the task being performed were viewed and rated by faculty experts based on the 6 domains in GEARS. The paired videos were also rated via crowd-sourcing based on our stylistic behavior lexicon. Two separate regression learner models, one using the sensor measurements and the other using crowd ratings for our proposed lexicon, were trained for each domain in GEARS. The results indicate that the scores predicted from both prediction models are in agreement with the gold standard faculty ratings.",
author = "Rege, {Robert V} and Robert Rege and Fey, {Ann Majewicz}",
year = "2018",
month = "7",
day = "1",
doi = "10.1109/EMBC.2018.8512593",
language = "English (US)",
volume = "2018",
pages = "1829--1832",
journal = "Conference proceedings : ... Annual International Conference of the IEEE Engineering in Medicine and Biology Society. IEEE Engineering in Medicine and Biology Society. Conference",
issn = "1557-170X",
publisher = "Institute of Electrical and Electronics Engineers Inc.",

}
