Purpose: To present a concept called artificial intelligence–assisted contour editing (AIACE) and to demonstrate its feasibility.

Materials and Methods: The conceptual workflow of AIACE is as follows: given an initial contour that requires clinician editing, the clinician indicates where substantial editing is needed, and a trained deep learning model uses this input to update the contour. The process repeats until a clinically acceptable contour is achieved. In this retrospective proof-of-concept study, the authors demonstrated the concept on two-dimensional (2D) axial CT images from three head-and-neck cancer datasets by simulating interaction with the AIACE model to mimic the clinical environment. The input at each iteration was a single mouse click at the desired location of the contour segment. Model performance was quantified with the Dice similarity coefficient (DSC) and the 95th percentile of the Hausdorff distance (HD95) on three datasets of 10, 28, and 20 patients.

Results: The mean DSC and HD95 values of the automatically generated initial contours were 0.82 and 4.3 mm, 0.73 and 5.6 mm, and 0.67 and 11.4 mm for the three datasets; with three mouse clicks, these improved to 0.91 and 2.1 mm, 0.86 and 2.5 mm, and 0.86 and 3.3 mm, respectively. Each deep learning–based contour update took approximately 20 msec.

Conclusion: The authors proposed the AIACE concept, which uses deep learning models to help clinicians edit contours efficiently and effectively, and demonstrated its feasibility on 2D axial CT images from three head-and-neck cancer datasets.
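The two evaluation metrics named above are standard in segmentation work: the DSC measures volumetric overlap between two binary masks, and HD95 is the 95th percentile of the symmetric boundary-to-boundary distances. A minimal sketch of both, using NumPy and SciPy (this is an illustrative implementation, not the authors' code; the function names and the erosion-based boundary extraction are assumptions):

```python
import numpy as np
from scipy import ndimage

def dice(a: np.ndarray, b: np.ndarray) -> float:
    """Dice similarity coefficient, 2|A∩B| / (|A| + |B|), for binary masks."""
    inter = np.logical_and(a, b).sum()
    return 2.0 * inter / (a.sum() + b.sum())

def hd95(a: np.ndarray, b: np.ndarray, spacing=(1.0, 1.0)) -> float:
    """95th-percentile symmetric Hausdorff distance between mask boundaries.

    `spacing` gives the pixel size (e.g., in mm) along each axis so that
    distances come out in physical units.
    """
    # Boundary pixels: the mask minus its binary erosion.
    edge_a = a & ~ndimage.binary_erosion(a)
    edge_b = b & ~ndimage.binary_erosion(b)
    # Euclidean distance from every pixel to the nearest boundary pixel
    # of the *other* mask.
    dist_to_b = ndimage.distance_transform_edt(~edge_b, sampling=spacing)
    dist_to_a = ndimage.distance_transform_edt(~edge_a, sampling=spacing)
    d_ab = dist_to_b[edge_a]  # A-boundary -> B-boundary distances
    d_ba = dist_to_a[edge_b]  # B-boundary -> A-boundary distances
    return float(np.percentile(np.hstack([d_ab, d_ba]), 95))
```

For identical masks this yields a DSC of 1.0 and an HD95 of 0 mm; shifting one mask degrades both, which is the behavior the reported before/after-editing numbers reflect.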
Keywords
- Convolutional Neural Network (CNN)
- Deep Learning Algorithms
ASJC Scopus subject areas
- Radiological and Ultrasound Technology
- Radiology, Nuclear Medicine and Imaging
- Artificial Intelligence