Saliency-Guided Deep Learning Network for Automatic Target Delineation in Post-Operative Stereotactic Partial Breast Irradiation

M. Kazemimoghadam, W. Chi, Assal S Rahimi, N. Kim, P. G. Alluri, C. R. Nwachukwu, Weiguo Lu, Xuejun Gu

Research output: Contribution to journal › Article › peer-review

Abstract

PURPOSE/OBJECTIVE(S): To support an efficient clinical workflow in partial breast irradiation (PBI), fast, accurate, and automated target delineation is desired. In this study, we developed a saliency-based deep learning segmentation (SDL-Seg) algorithm that incorporates prior domain knowledge for automatic gross tumor volume (GTV) delineation in post-operative breast irradiation.

MATERIALS/METHODS: Our approach incorporates saliency information into a U-Net deep learning model for target delineation. The saliency level of an image region describes how likely that region is to attract physicians' visual attention. Visual saliency maps were generated from the locations of surgical clips identified on CT images: a distance transformation coupled with a Gaussian filter was adopted to convert the marker locations into probability maps. The CT images and the corresponding probability maps form a two-channel input to the first convolutional layer of the segmentation network. This design forces the model to encode location-related features, guiding the network to focus on regions of high saliency and to suppress regions of low saliency. The dataset used for model training, validation, and testing (19:5:5) comprised 145 prone CT images from 29 post-operative breast cancer patients who had implanted markers and received a 5-fraction PBI regimen on GammaPod. We used the Dice similarity coefficient (DSC), 95th-percentile Hausdorff distance (HD95), and average symmetric surface distance (ASD) to assess the SDL-Seg segmentation results and compared them with those generated by a basic U-Net.

RESULTS: Our model achieved mean (standard deviation) values of 76.4% (2.7%), 6.76 (1.83) mm, and 1.9 (0.66) mm for DSC, HD95, and ASD, respectively, outperforming the basic U-Net by 13.8%, 1.63 mm, and 0.9 mm. For all 5 test cases, the saliency-guided U-Net showed higher DSC than the basic U-Net. Table 1 shows the DSC results for the 5 test cases using SDL-Seg and the basic U-Net.
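The saliency-map construction described above (a distance transform from the clip locations, smoothed with a Gaussian filter and stacked with the CT image as a two-channel network input) can be sketched as follows. This is a minimal illustration only; the function names, the exponential decay used to invert the distance map, the normalization, and the `sigma` value are assumptions, not the authors' implementation.

```python
import numpy as np
from scipy import ndimage


def build_saliency_map(ct_slice, clip_coords, sigma=5.0):
    """Convert surgical-clip locations into a saliency probability map.

    Hypothetical sketch: distance transform from the marked clip
    positions, inverted and smoothed by a Gaussian filter, then
    normalized to [0, 1]. Parameter choices are assumptions.
    """
    # Zeros at clip locations, ones elsewhere
    marker = np.ones_like(ct_slice, dtype=np.uint8)
    for r, c in clip_coords:
        marker[r, c] = 0
    # Distance (in pixels) from every voxel to its nearest clip
    dist = ndimage.distance_transform_edt(marker)
    # Invert and smooth so high values concentrate near the clips
    saliency = ndimage.gaussian_filter(np.exp(-dist / dist.max()), sigma)
    # Rescale to a probability-like [0, 1] range
    saliency = (saliency - saliency.min()) / (np.ptp(saliency) + 1e-8)
    return saliency


def two_channel_input(ct_slice, saliency):
    """Stack CT and saliency map as a 2-channel input, shape (2, H, W)."""
    return np.stack([ct_slice, saliency], axis=0)
```

Feeding this two-channel tensor to the first convolution layer is what lets the network learn location-related features from the clip-derived saliency alongside the image intensities.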
CONCLUSION: We developed a deep learning model that integrates visual saliency information into the network for GTV delineation. The results demonstrate that SDL-Seg outperforms a basic U-Net and is a promising approach for efficient and accurate target delineation in PBI. Such a real-time delineation tool is highly desirable for online treatment-planning workflows such as GammaPod's.
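Of the three evaluation metrics reported above, the Dice similarity coefficient is the simplest to compute from a pair of binary masks. A minimal sketch (the function name and edge-case handling for two empty masks are assumptions):

```python
import numpy as np


def dice_coefficient(pred, gt):
    """Dice similarity coefficient between two binary masks.

    DSC = 2 * |pred ∩ gt| / (|pred| + |gt|); returns 1.0 when both
    masks are empty (a common convention, assumed here).
    """
    pred = np.asarray(pred, dtype=bool)
    gt = np.asarray(gt, dtype=bool)
    denom = pred.sum() + gt.sum()
    if denom == 0:
        return 1.0
    return 2.0 * np.logical_and(pred, gt).sum() / denom
```

HD95 and ASD additionally require surface-distance computations between the mask boundaries, which are typically taken from an existing evaluation library rather than re-implemented.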

Original language: English (US)
Pages (from-to): e112
Journal: International Journal of Radiation Oncology, Biology, Physics
Volume: 111
Issue number: 3
DOIs
State: Published - Nov 1 2021

ASJC Scopus subject areas

  • Radiation
  • Oncology
  • Radiology, Nuclear Medicine and Imaging
  • Cancer Research

