Synthesizing CT images from MR images with deep learning: Model generalization for different datasets through transfer learning

Wen Li, Samaneh Kazemifar, Ti Bai, Dan Nguyen, Yaochung Weng, Yafen Li, Jun Xia, Jing Xiong, Yaoqin Xie, Amir Owrangi, Steve Jiang

Research output: Contribution to journal › Article › peer-review

16 Scopus citations

Abstract

Background and purpose. Replacing CT imaging with MR imaging for MR-only radiotherapy has sparked the interest of many scientists and is being increasingly adopted in radiation oncology. Although many studies have focused on generating CT images from MR images, models have typically been tested only on data from the same dataset. How well a trained model performs on data from different hospitals and MR protocols therefore remains unknown. In this study, we addressed the model generalization problem for the MR-to-CT conversion task.

Materials and methods. Brain T2 MR and corresponding CT images were collected from SZSPH (source domain dataset); brain T1-FLAIR and T1-POST MR images and corresponding CT images were collected from The University of Texas Southwestern (UTSW) (target domain dataset). To investigate model generalizability, four potential solutions were proposed: a source model, a target model, a combined model, and an adapted model. All models were trained using the CycleGAN network. The source model was trained from scratch with the source domain dataset and tested with the target domain dataset. The target model was trained and tested with the target domain dataset. The combined model was trained with both the source and target domain datasets and tested with the target domain dataset. The adapted model used a transfer learning strategy: a CycleGAN model was trained with the source domain dataset, and the pre-trained model was then retrained with the target domain dataset. MAE, RMSE, PSNR, and SSIM were used to quantitatively evaluate model performance on the target domain dataset.

Results. The adapted model achieved the best quantitative results: MAE, RMSE, PSNR, and SSIM of 74.56 ± 8.61, 193.18 ± 17.98, 28.30 ± 0.83, and 0.84 ± 0.01 on the T1-FLAIR dataset, and 74.89 ± 15.64, 195.73 ± 31.29, 27.72 ± 1.43, and 0.83 ± 0.04 on the T1-POST dataset. The source model had the poorest performance.

Conclusions. This work indicates high generalization ability for generating synthetic CT images from small training datasets of MR images using a pre-trained CycleGAN. The quantitative results on test data spanning different scanning protocols and acquisition centers support this proof of concept.
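The paper does not publish code, but the image-comparison metrics it reports can be sketched in a few lines. The snippet below is a minimal, illustrative implementation of MAE, RMSE, and PSNR between a reference CT and a synthetic CT; the HU-like value ranges and the `data_range` convention for PSNR are assumptions for the example, and SSIM (the fourth reported metric) would typically come from a library such as scikit-image rather than being hand-rolled.

```python
import numpy as np

def mae(ct_true, ct_syn):
    """Mean absolute error (in HU) between real and synthetic CT."""
    return np.mean(np.abs(ct_true - ct_syn))

def rmse(ct_true, ct_syn):
    """Root mean squared error (in HU)."""
    return np.sqrt(np.mean((ct_true - ct_syn) ** 2))

def psnr(ct_true, ct_syn, data_range=None):
    """Peak signal-to-noise ratio in dB.

    data_range defaults to the dynamic range of the reference image;
    other conventions (e.g. a fixed HU window) are also common.
    """
    if data_range is None:
        data_range = ct_true.max() - ct_true.min()
    return 20 * np.log10(data_range / rmse(ct_true, ct_syn))

# Tiny synthetic example: an HU-like "real" CT slab and a noisy
# stand-in for a generated (synthetic) CT.
rng = np.random.default_rng(0)
ct = rng.uniform(-1000, 2000, size=(64, 64))
sct = ct + rng.normal(0, 50, size=ct.shape)
print(mae(ct, sct), rmse(ct, sct), psnr(ct, sct))
```

In practice these would be computed per patient over the body mask and then averaged, which is how the ± values in the results are usually obtained.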

Original language: English (US)
Article number: 025020
Journal: Biomedical Physics and Engineering Express
Volume: 7
Issue number: 2
DOIs
State: Published - Mar 2021

Keywords

  • CycleGAN
  • MR-only radiotherapy
  • MR-to-CT conversion
  • transfer learning

ASJC Scopus subject areas

  • General Nursing
