Technical note: The effect of image annotation with minimal manual interaction for semiautomatic prostate segmentation in CT images using fully convolutional neural networks

Maysam Shahedi, James D. Dormer, Martin Halicek, Baowei Fei

Research output: Contribution to journal › Article › peer-review

Abstract

Purpose: The goal is to study the performance improvement of a deep learning algorithm in three-dimensional (3D) image segmentation by incorporating minimal user interaction into a fully convolutional neural network (CNN).

Methods: A U-Net CNN was trained and tested for 3D prostate segmentation in computed tomography (CT) images. To improve the segmentation accuracy, the CNN's input images were annotated with a set of border landmarks to supervise the network in segmenting the prostate. The network was trained and tested again with images annotated using 5, 10, 15, 20, or 30 landmark points.

Results: Compared to fully automatic segmentation, the Dice similarity coefficient increased by up to 9% when 5–30 sparse landmark points were used, with the segmentation accuracy improving as more border landmarks were added.

Conclusions: When a limited number of sparse border landmarks are placed on the input image, the CNN performance approaches the interexpert observer difference observed in manual segmentation.
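The abstract reports results in terms of the Dice similarity coefficient, the standard overlap measure for comparing a predicted segmentation against a reference mask. A minimal sketch of how it is computed for 3D binary masks is below; this is the standard definition, not code from the paper, and the NumPy-array interface is an assumption for illustration.

```python
import numpy as np

def dice_coefficient(pred: np.ndarray, truth: np.ndarray) -> float:
    """Dice similarity coefficient between two binary 3D masks.

    DSC = 2 * |A ∩ B| / (|A| + |B|), ranging from 0 (no overlap)
    to 1 (perfect overlap). Inputs are arrays of the same shape
    where nonzero voxels belong to the segmented structure.
    """
    pred = pred.astype(bool)
    truth = truth.astype(bool)
    intersection = np.logical_and(pred, truth).sum()
    total = pred.sum() + truth.sum()
    # Two empty masks are defined here as a perfect match.
    return 2.0 * intersection / total if total > 0 else 1.0
```

A 9% absolute increase in this coefficient, as reported in the Results, is a substantial gain for prostate CT, where soft-tissue contrast at the gland border is poor.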

Original language: English (US)
Pages (from-to): 1153-1160
Number of pages: 8
Journal: Medical Physics
Volume: 49
Issue number: 2
State: Published - Feb 2022

Keywords

  • computed tomography
  • deep learning
  • prostate
  • segmentation
  • user interactions

ASJC Scopus subject areas

  • Biophysics
  • Radiology, Nuclear Medicine and Imaging

