Technical Note: Deriving ventilation imaging from 4DCT by deep convolutional neural network

Yuncheng Zhong, Yevgeniy Vinogradskiy, Liyuan Chen, Nick Myziuk, Richard Castillo, Edward Castillo, Thomas Guerrero, Steve B Jiang, Jing Wang

Research output: Contribution to journal › Comment/debate

1 Citation (Scopus)

Abstract

Purpose: Ventilation images can be derived from four-dimensional computed tomography (4DCT) by analyzing the change in Hounsfield unit (HU) values and the deformation vector fields between different respiratory phases of the CT. Because deformable image registration (DIR) is involved, the accuracy of 4DCT-derived ventilation images is sensitive to the choice of DIR algorithm. To overcome the uncertainty associated with DIR, we developed a method based on a deep convolutional neural network (CNN) that derives ventilation images directly from 4DCT without explicit image registration.

Methods: A total of 82 sets of 4DCT and ventilation images from patients with lung cancer were used in this study. In the proposed CNN architecture, the two-channel input consists of the CT images at the end-of-exhale and end-of-inhale phases. The first convolutional layer has 32 kernels of size 5 × 5 × 5 and is followed by another eight convolutional layers, each equipped with a rectified linear unit (ReLU) activation layer. The loss function is the mean-squared error (MSE) between the predicted and reference ventilation images.

Results: The predicted images were comparable to the label images of the test data. The similarity index, correlation coefficient, and Gamma index passing rate, averaged over tenfold cross-validation, were 0.880 ± 0.035, 0.874 ± 0.024, and 0.806 ± 0.014, respectively.

Conclusions: These results demonstrate that a deep CNN can generate ventilation images from 4DCT without explicit deformable image registration, reducing the associated uncertainty.
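For illustration, the following is a minimal PyTorch sketch of the architecture described in the abstract; it is not the authors' implementation. The two-channel end-exhale/end-inhale input, the first convolutional layer with 32 kernels of size 5 × 5 × 5, the eight subsequent convolutional layers each followed by a ReLU activation, and the MSE loss are taken from the abstract; the kernel sizes and channel widths of the later layers, the padding, and the single-channel output mapping are assumptions.

import torch
import torch.nn as nn


class VentilationCNN(nn.Module):
    """3D CNN mapping an exhale/inhale CT pair to a ventilation image (illustrative sketch)."""

    def __init__(self, channels: int = 32):
        super().__init__()
        layers = [
            # First convolutional layer: 2 input channels (end-of-exhale CT and
            # end-of-inhale CT), 32 kernels of size 5 x 5 x 5, as in the abstract.
            nn.Conv3d(2, channels, kernel_size=5, padding=2),
        ]
        # Seven of the eight subsequent convolutional layers, each followed by ReLU.
        # Keeping kernel size 5 and a constant channel width here is an assumption.
        for _ in range(7):
            layers += [
                nn.Conv3d(channels, channels, kernel_size=5, padding=2),
                nn.ReLU(inplace=True),
            ]
        # Eighth subsequent layer maps to a single-channel ventilation image; the
        # trailing ReLU keeps the prediction non-negative (also an assumption).
        layers += [
            nn.Conv3d(channels, 1, kernel_size=5, padding=2),
            nn.ReLU(inplace=True),
        ]
        self.net = nn.Sequential(*layers)

    def forward(self, ct_pair: torch.Tensor) -> torch.Tensor:
        # ct_pair shape: (batch, 2, depth, height, width), the stacked CT phases.
        return self.net(ct_pair)


if __name__ == "__main__":
    model = VentilationCNN()
    criterion = nn.MSELoss()  # intensity difference between predicted and reference images
    ct_pair = torch.randn(1, 2, 32, 64, 64)    # toy stand-in for an exhale/inhale CT pair
    reference = torch.randn(1, 1, 32, 64, 64)  # toy stand-in for a reference ventilation image
    loss = criterion(model(ct_pair), reference)
    loss.backward()
    print(f"MSE loss: {loss.item():.4f}")

In the toy example at the bottom, random tensors stand in for a CT phase pair and a reference ventilation image; only the layer layout and the loss follow the abstract.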

Original language: English (US)
Journal: Medical Physics
DOI: 10.1002/mp.13421
State: Published - Jan 1 2019

Fingerprint

Ventilation
Tomography
Uncertainty
Four-Dimensional Computed Tomography
Lung Neoplasms
Respiration

Keywords

  • 4DCT lung ventilation imaging
  • convolutional neural network
  • lung functional imaging

ASJC Scopus subject areas

  • Biophysics
  • Radiology, Nuclear Medicine and Imaging

Cite this

Technical Note: Deriving ventilation imaging from 4DCT by deep convolutional neural network. / Zhong, Yuncheng; Vinogradskiy, Yevgeniy; Chen, Liyuan; Myziuk, Nick; Castillo, Richard; Castillo, Edward; Guerrero, Thomas; Jiang, Steve B; Wang, Jing.

In: Medical physics, 01.01.2019.

Research output: Contribution to journal › Comment/debate

Zhong, Yuncheng; Vinogradskiy, Yevgeniy; Chen, Liyuan; Myziuk, Nick; Castillo, Richard; Castillo, Edward; Guerrero, Thomas; Jiang, Steve B; Wang, Jing. / Technical Note: Deriving ventilation imaging from 4DCT by deep convolutional neural network. In: Medical physics. 2019.
@article{528d71b896e6410890b0bc14b8dc1700,
title = "Technical Note: Deriving ventilation imaging from 4DCT by deep convolutional neural network",
abstract = "Purpose: Ventilation images can be derived from four-dimensional computed tomography (4DCT) by analyzing the change in HU values and deformable vector fields between different respiration phases of computed tomography (CT). As deformable image registration (DIR) is involved, accuracy of 4DCT-derived ventilation image is sensitive to the choice of DIR algorithms. To overcome the uncertainty associated with DIR, we develop a method based on deep convolutional neural network (CNN) to derive ventilation images directly from the 4DCT without explicit image registration. Methods: A total of 82 sets of 4DCT and ventilation images from patients with lung cancer were used in this study. In the proposed CNN architecture, the CT two-channel input data consist of CT at the end of exhale and the end of inhale phases. The first convolutional layer has 32 different kernels of size 5 × 5 × 5, followed by another eight convolutional layers each of which is equipped with an activation layer (ReLU). The loss function is the mean-squared-error (MSE) to measure the intensity difference between the predicted and reference ventilation images. Results: The predicted images were comparable to the label images of the test data. The similarity index, correlation coefficient, and Gamma index passing rate averaged over the tenfold cross validation were 0.880 ± 0.035, 0.874 ± 0.024, and 0.806 ± 0.014, respectively. Conclusions: The results demonstrate that deep CNN can generate ventilation imaging from 4DCT without explicit deformable image registration, reducing the associated uncertainty.",
keywords = "4DCT lung ventilation imaging, convolutional neural network, lung functional imaging",
author = "Yuncheng Zhong and Yevgeniy Vinogradskiy and Liyuan Chen and Nick Myziuk and Richard Castillo and Edward Castillo and Thomas Guerrero and Jiang, {Steve B} and Jing Wang",
year = "2019",
month = "1",
day = "1",
doi = "10.1002/mp.13421",
language = "English (US)",
journal = "Medical Physics",
issn = "0094-2405",
publisher = "AAPM - American Association of Physicists in Medicine",

}

TY - JOUR

T1 - Technical Note

T2 - Deriving ventilation imaging from 4DCT by deep convolutional neural network

AU - Zhong, Yuncheng

AU - Vinogradskiy, Yevgeniy

AU - Chen, Liyuan

AU - Myziuk, Nick

AU - Castillo, Richard

AU - Castillo, Edward

AU - Guerrero, Thomas

AU - Jiang, Steve B

AU - Wang, Jing

PY - 2019/1/1

Y1 - 2019/1/1



KW - 4DCT lung ventilation imaging

KW - convolutional neural network

KW - lung functional imaging

UR - http://www.scopus.com/inward/record.url?scp=85062941843&partnerID=8YFLogxK

UR - http://www.scopus.com/inward/citedby.url?scp=85062941843&partnerID=8YFLogxK

U2 - 10.1002/mp.13421

DO - 10.1002/mp.13421

M3 - Comment/debate

C2 - 30714159

AN - SCOPUS:85062941843

JO - Medical Physics

JF - Medical Physics

SN - 0094-2405

ER -