Segmentation of the prostate and organs at risk in male pelvic CT images using deep learning

Samaneh Kazemifar, Anjali Balagopal, Dan Nguyen, Sarah McGuire, Raquibul Hannan, Steve Jiang, Amir Owrangi

Research output: Contribution to journal › Article

9 Citations (Scopus)

Abstract

Inter- and intra-observer variation in delineating regions of interest (ROIs) occurs because of differences in expertise levels and preferences among radiation oncologists. We evaluated the accuracy of a segmentation model based on the U-Net architecture for delineating the prostate, bladder, and rectum in male pelvic CT images. The dataset used for training and testing the model consisted of raw CT scans of 85 prostate cancer patients. We designed a 2D U-Net model to directly learn a mapping function that converts a 2D grayscale CT image into its corresponding 2D organ-at-risk (OAR) segmentation. The network contains blocks of 2D convolutional layers with variable kernel sizes, channel numbers, and activation functions. On the contracting (left) side of the U-Net, each block uses three 3 × 3 convolutions, each followed by a rectified linear unit (ReLU) activation, and one max-pooling operation. On the expanding (right) side, each block uses a 2 × 2 transposed convolution and two 3 × 3 convolutions, each followed by a ReLU activation. The automatic segmentation with the U-Net achieved mean Dice similarity coefficients (DC) ± standard deviation (SD) of 0.88 ± 0.12, 0.95 ± 0.04, and 0.92 ± 0.06 for the prostate, bladder, and rectum, respectively. The mean ± SD of the average surface Hausdorff distance (ASHD) was 1.2 ± 0.9 mm, 1.08 ± 0.8 mm, and 0.8 ± 0.6 mm for the prostate, bladder, and rectum, respectively. Our proposed method, which employs the U-Net structure, is highly accurate and reproducible for automated ROI segmentation. This provides a foundation for improving automatic delineation of the boundaries between the target and surrounding normal soft tissues on standard radiation therapy planning CT scans.
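For context, the architecture described in the abstract corresponds to a standard 2D U-Net. The sketch below is a minimal PyTorch illustration, not the authors' code: an encoder block with three 3 × 3 convolutions each followed by ReLU plus a 2 × 2 max-pooling step, a decoder block with a 2 × 2 transposed convolution and two 3 × 3 convolutions with ReLU, a small two-level network, and a Dice coefficient helper. The depth, channel counts, input size, loss setup, and four-class output (background, prostate, bladder, rectum) are illustrative assumptions, not details reported in the paper.

```python
# Minimal 2D U-Net sketch in PyTorch, following the block description in the
# abstract. Channel counts, depth, and the 4-class output are illustrative
# assumptions, not the configuration used in the paper.
import torch
import torch.nn as nn


class EncoderBlock(nn.Module):
    """Contracting ('left') block: three 3x3 convolutions, each followed by
    ReLU, then a 2x2 max-pooling step for downsampling."""

    def __init__(self, in_ch: int, out_ch: int):
        super().__init__()
        self.convs = nn.Sequential(
            nn.Conv2d(in_ch, out_ch, kernel_size=3, padding=1), nn.ReLU(inplace=True),
            nn.Conv2d(out_ch, out_ch, kernel_size=3, padding=1), nn.ReLU(inplace=True),
            nn.Conv2d(out_ch, out_ch, kernel_size=3, padding=1), nn.ReLU(inplace=True),
        )
        self.pool = nn.MaxPool2d(kernel_size=2)

    def forward(self, x):
        skip = self.convs(x)          # feature map kept for the skip connection
        return skip, self.pool(skip)  # downsampled map goes to the next level


class DecoderBlock(nn.Module):
    """Expanding ('right') block: 2x2 transposed convolution for upsampling,
    concatenation with the skip features, then two 3x3 convolutions + ReLU."""

    def __init__(self, in_ch: int, out_ch: int):
        super().__init__()
        self.up = nn.ConvTranspose2d(in_ch, out_ch, kernel_size=2, stride=2)
        self.convs = nn.Sequential(
            nn.Conv2d(out_ch * 2, out_ch, kernel_size=3, padding=1), nn.ReLU(inplace=True),
            nn.Conv2d(out_ch, out_ch, kernel_size=3, padding=1), nn.ReLU(inplace=True),
        )

    def forward(self, x, skip):
        x = self.up(x)
        x = torch.cat([skip, x], dim=1)  # skip connection from the encoder side
        return self.convs(x)


class TinyUNet2D(nn.Module):
    """Two-level 2D U-Net mapping a grayscale CT slice to 4 output channels
    (background, prostate, bladder, rectum)."""

    def __init__(self, in_ch: int = 1, n_classes: int = 4):
        super().__init__()
        self.enc1 = EncoderBlock(in_ch, 32)
        self.enc2 = EncoderBlock(32, 64)
        self.bottom = nn.Sequential(
            nn.Conv2d(64, 128, kernel_size=3, padding=1), nn.ReLU(inplace=True))
        self.dec2 = DecoderBlock(128, 64)
        self.dec1 = DecoderBlock(64, 32)
        self.head = nn.Conv2d(32, n_classes, kernel_size=1)  # per-pixel class scores

    def forward(self, x):
        s1, x = self.enc1(x)
        s2, x = self.enc2(x)
        x = self.bottom(x)
        x = self.dec2(x, s2)
        x = self.dec1(x, s1)
        return self.head(x)


def dice_coefficient(pred_mask: torch.Tensor, true_mask: torch.Tensor,
                     eps: float = 1e-6) -> float:
    """Dice similarity coefficient between two binary masks (one organ at a time)."""
    pred = pred_mask.float().flatten()
    true = true_mask.float().flatten()
    intersection = (pred * true).sum()
    return float((2.0 * intersection + eps) / (pred.sum() + true.sum() + eps))


if __name__ == "__main__":
    model = TinyUNet2D()
    ct_slice = torch.randn(1, 1, 256, 256)  # one grayscale CT slice (batch of 1)
    logits = model(ct_slice)                # shape: (1, 4, 256, 256)
    labels = logits.argmax(dim=1)           # per-pixel organ label map
    print(labels.shape)
```

An argmax over the four output channels gives one organ label per pixel; the Dice helper can then be applied per organ against the clinician-drawn contour, which is how per-organ DC values like those reported above are typically computed.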

Original language: English (US)
Article number: 055003
Journal: Biomedical Physics and Engineering Express
Volume: 4
Issue number: 5
ISSN: 2057-1976
Publisher: IOP Publishing Ltd.
DOI: 10.1088/2057-1976/aad100
State: Published - Jul 23 2018

Keywords

  • artificial intelligence organ contouring
  • deep machine learning
  • male pelvic region
  • neural network
  • prostate
  • segmentation

ASJC Scopus subject areas

  • Nursing (all)

Cite this

Kazemifar, S., Balagopal, A., Nguyen, D., McGuire, S., Hannan, R., Jiang, S., & Owrangi, A. (2018). Segmentation of the prostate and organs at risk in male pelvic CT images using deep learning. Biomedical Physics and Engineering Express, 4(5), 055003. https://doi.org/10.1088/2057-1976/aad100

