TY - GEN
T1 - Suggestive annotation: A deep active learning framework for biomedical image segmentation
T2 - 20th International Conference on Medical Image Computing and Computer-Assisted Intervention, MICCAI 2017
AU - Yang, Lin
AU - Zhang, Yizhe
AU - Chen, Jianxu
AU - Zhang, Siyuan
AU - Chen, Danny Z.
N1 - Publisher Copyright:
© Springer International Publishing AG 2017.
PY - 2017
Y1 - 2017
AB - Image segmentation is a fundamental problem in biomedical image analysis. Recent advances in deep learning have achieved promising results on many biomedical image segmentation benchmarks. However, due to large variations in biomedical images (different modalities, image settings, objects, noise, etc.), applying deep learning to a new application usually requires a new set of training data. This can incur a great deal of annotation effort and cost, because only biomedical experts can annotate effectively, and often there are too many instances in images (e.g., cells) to annotate. In this paper, we aim to address the following question: with limited effort (e.g., time) for annotation, which instances should be annotated in order to attain the best performance? We present a deep active learning framework that combines a fully convolutional network (FCN) with active learning to significantly reduce annotation effort by making judicious suggestions on the most effective annotation areas. We utilize uncertainty and similarity information provided by the FCN and formulate a generalized version of the maximum set cover problem to determine the most representative and uncertain areas for annotation. Extensive experiments using the 2015 MICCAI Gland Challenge dataset and a lymph node ultrasound image segmentation dataset show that, using annotation suggestions from our method, state-of-the-art segmentation performance can be achieved with only 50% of the training data.
UR - http://www.scopus.com/inward/record.url?scp=85029518208&partnerID=8YFLogxK
UR - http://www.scopus.com/inward/citedby.url?scp=85029518208&partnerID=8YFLogxK
U2 - 10.1007/978-3-319-66179-7_46
DO - 10.1007/978-3-319-66179-7_46
M3 - Conference contribution
AN - SCOPUS:85029518208
SN - 9783319661780
T3 - Lecture Notes in Computer Science (including subseries Lecture Notes in Artificial Intelligence and Lecture Notes in Bioinformatics)
SP - 399
EP - 407
BT - Medical Image Computing and Computer Assisted Intervention – MICCAI 2017 - 20th International Conference, Proceedings
A2 - Maier-Hein, Lena
A2 - Franz, Alfred
A2 - Jannin, Pierre
A2 - Duchesne, Simon
A2 - Descoteaux, Maxime
A2 - Collins, D. Louis
PB - Springer Verlag
Y2 - 11 September 2017 through 13 September 2017
ER -