Evaluating robotic-assisted surgery training videos with multi-task convolutional neural networks

Yihao Wang, Jessica Dai, Tara N. Morgan, Mohamed Elsaied, Alaina Garbens, Xingming Qu, Ryan Steinberg, Jeffrey Gahan, Eric C. Larson

Research output: Contribution to journal › Article › peer-review

Abstract

We seek to understand whether an automated algorithm can replace human scoring of surgical trainees performing the urethrovesical anastomosis in radical prostatectomy on synthetic tissue. Specifically, we investigate neural networks for predicting the surgical proficiency score (GEARS score) from video clips. We evaluate videos of surgeons performing the urethrovesical anastomosis on synthetic tissue. The algorithm tracks surgical instrument locations in the video, saving the positions of key points on the instruments over time. These positional features are used to train a multi-task convolutional network that infers each sub-category of the GEARS score to determine the proficiency level of trainees. Experimental results demonstrate that the proposed method achieves good performance, with scores matching manual inspection in 86.1% of all GEARS sub-categories. Furthermore, the model can detect differences in proficiency (novice versus expert) in 83.3% of videos. Evaluation of GEARS sub-categories with artificial neural networks is possible for novice and intermediate surgeons, but additional research is needed to understand whether expert surgeons can be evaluated with a similar automated system.

Original language: English (US)
Journal: Journal of Robotic Surgery
DOIs
State: Accepted/In press - 2021

Keywords

  • Deep learning
  • Keypoint detection
  • Robotic-assisted surgery
  • Skill evaluation
  • Surgical training

ASJC Scopus subject areas

  • Surgery
  • Health Informatics
