A method for volumetric imaging in radiotherapy using single x-ray projection

Yuan Xu, Hao Yan, Luo Ouyang, Jing Wang, Linghong Zhou, Laura Cervino, Steve B. Jiang, Xun Jia

Research output: Contribution to journal › Article

8 Citations (Scopus)

Abstract

Purpose: It is an intriguing problem to generate an instantaneous volumetric image based on the corresponding x-ray projection. The purpose of this study is to develop a new method to achieve this goal via a sparse learning approach. Methods: To extract motion information hidden in projection images, the authors partitioned a projection image into small rectangular patches. The authors utilized a sparse learning method to automatically select patches that have a high correlation with principal component analysis (PCA) coefficients of a lung motion model. A model that maps the patch intensity to the PCA coefficients was built along with the patch selection process. Based on this model, a measured projection can be used to predict the PCA coefficients, which are then further used to generate a motion vector field and hence a volumetric image. The authors have also proposed an intensity baseline correction method based on the partitioned projection, in which the first and the second moments of pixel intensities at a patch in a simulated projection image are matched with those in a measured one via a linear transformation. The proposed method has been validated in both simulated data and real phantom data. Results: The algorithm is able to identify patches that contain relevant motion information, such as the diaphragm region. It is found that an intensity baseline correction step is important to remove the systematic error in the motion prediction. For the simulation case, the sparse learning model reduced the prediction error for the first PCA coefficient to 5%, compared to the 10% error when sparse learning was not used, and the 95th percentile error for the predicted motion vector was reduced from 2.40 to 0.92 mm. In the phantom case with a regular tumor motion, the predicted tumor trajectory was successfully reconstructed with a 0.82 mm error for tumor center localization, compared to a 1.66 mm error without using the sparse learning method. When the tumor motion was driven by a real patient breathing signal with irregular periods and amplitudes, the average tumor center error was 0.6 mm. The algorithm robustness with respect to sparsity level, patch size, and presence or absence of the diaphragm, as well as computation time, has also been studied. Conclusions: The authors have developed a new method that automatically identifies motion information from an x-ray projection, based on which a volumetric image is generated.
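For orientation only, the following is a minimal NumPy sketch of the kind of pipeline the abstract describes: patch-wise feature extraction from a projection, moment-matching intensity baseline correction, a sparse regression from patch intensities to PCA coefficients, and reconstruction of a deformation vector field (DVF) from those coefficients. The function names, array shapes, patch-mean features, and the plain element-wise LASSO (ISTA) solver are assumptions made for illustration; the paper's sparse learning selects whole patches (group-level sparsity) and is not reproduced in detail here.

```python
# Hypothetical sketch of the single-projection volumetric imaging pipeline in the
# abstract. Names, shapes, and the element-wise L1 solver are illustrative
# assumptions, not the authors' exact formulation.
import numpy as np


def patch_features(projection, patch_size=(16, 16)):
    """Partition a 2D projection into rectangular patches and return the mean
    intensity of each patch as a feature vector (patch size is an assumption)."""
    ph, pw = patch_size
    H, W = projection.shape
    feats = []
    for r in range(0, H - ph + 1, ph):
        for c in range(0, W - pw + 1, pw):
            feats.append(projection[r:r + ph, c:c + pw].mean())
    return np.asarray(feats)


def baseline_correct(sim_patch, meas_patch):
    """Intensity baseline correction for one patch: linearly rescale the simulated
    patch so its first and second moments (mean and standard deviation) match those
    of the measured patch, as described in the abstract."""
    a = meas_patch.std() / (sim_patch.std() + 1e-12)
    b = meas_patch.mean() - a * sim_patch.mean()
    return a * sim_patch + b


def fit_sparse_model(X, Y, lam=0.1, n_iter=500):
    """Fit W in Y ~ X @ W with an L1 penalty via ISTA (proximal gradient descent).
    Rows of X hold patch features of training projections; columns of Y hold PCA
    coefficients of the lung motion model. Entries of W driven to zero correspond
    to patches the model ignores, i.e. automatic patch selection (the paper uses
    patch-level group sparsity instead of this element-wise penalty)."""
    W = np.zeros((X.shape[1], Y.shape[1]))
    step = 1.0 / (np.linalg.norm(X, 2) ** 2 + 1e-12)   # 1 / Lipschitz constant
    for _ in range(n_iter):
        W -= step * (X.T @ (X @ W - Y))                           # gradient step
        W = np.sign(W) * np.maximum(np.abs(W) - step * lam, 0.0)  # soft threshold
    return W


def predict_dvf(projection, W, dvf_mean, pca_modes):
    """Predict PCA coefficients from a single measured projection, then rebuild the
    deformation vector field as mean + sum_k coeff_k * mode_k. Warping a reference
    CT with this field would yield the instantaneous volumetric image."""
    coeffs = patch_features(projection) @ W      # (n_modes,)
    return dvf_mean + pca_modes @ coeffs         # flattened DVF
```

In this sketch, training projections would be simulated from the patient's PCA-based motion model, baseline_correct would be applied patch by patch to reconcile measured and simulated intensities before prediction, and the DVF from predict_dvf would deform a reference CT to produce the volumetric image; none of these specifics should be read as the authors' exact implementation.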

Original language: English (US)
Pages (from-to): 2498-2509
Number of pages: 12
Journal: Medical Physics
Volume: 42
Issue number: 5
DOIs: 10.1118/1.4918577
State: Published - May 1 2015

Fingerprint

Radiotherapy
X-Rays
Principal Component Analysis
Learning
Neoplasms
Diaphragm
Respiration
Lung

Keywords

  • intensity baseline correction
  • lung motion model
  • sparse learning
  • time-resolved volumetric image
  • tumor motion management

ASJC Scopus subject areas

  • Biophysics
  • Radiology, Nuclear Medicine and Imaging

Cite this

A method for volumetric imaging in radiotherapy using single x-ray projection. / Xu, Yuan; Yan, Hao; Ouyang, Luo; Wang, Jing; Zhou, Linghong; Cervino, Laura; Jiang, Steve B.; Jia, Xun.

In: Medical Physics, Vol. 42, No. 5, 01.05.2015, p. 2498-2509.

@article{190dcb9bea9c49d78d40eea30b08254f,
title = "A method for volumetric imaging in radiotherapy using single x-ray projection",
keywords = "intensity baseline correction, lung motion model, sparse learning, time-resolved volumetric image, tumor motion management",
author = "Yuan Xu and Hao Yan and Luo Ouyang and Jing Wang and Linghong Zhou and Laura Cervino and Jiang, {Steve B.} and Xun Jia",
year = "2015",
month = "5",
day = "1",
doi = "10.1118/1.4918577",
language = "English (US)",
volume = "42",
pages = "2498--2509",
journal = "Medical Physics",
issn = "0094-2405",
publisher = "AAPM - American Association of Physicists in Medicine",
number = "5",

}
