TY - JOUR
T1 - Cross-Modality Deep Feature Learning for Brain Tumor Segmentation
AU - Zhang, Dingwen
AU - Huang, Guohai
AU - Zhang, Qiang
AU - Han, Jungong
AU - Han, Junwei
AU - Yu, Yizhou
N1 - Funding Information:
This work was supported in part by the National Natural Science Foundation of China under Grants 61876140 and 61773301, the Fundamental Research Funds for the Central Universities under Grant JBZ170401, and the China Postdoctoral Support Scheme for Innovative Talents under Grant BX20180236.
Publisher Copyright:
© 2020 Elsevier Ltd
PY - 2021/2/1
Y1 - 2021/2/1
N2 - Recent advances in machine learning and the prevalence of digital medical images have opened up an opportunity to address the challenging brain tumor segmentation (BTS) task by using deep convolutional neural networks. However, unlike the widely available RGB image data, the medical image data used in brain tumor segmentation are relatively scarce in terms of data scale but contain richer information in terms of modality. To this end, this paper proposes a novel cross-modality deep feature learning framework to segment brain tumors from multi-modality MRI data. The core idea is to mine rich patterns across the multi-modality data to make up for the insufficient data scale. The proposed framework consists of two learning processes: the cross-modality feature transition (CMFT) process and the cross-modality feature fusion (CMFF) process, which aim to learn rich feature representations by transferring knowledge across different modality data and fusing knowledge from different modality data, respectively. Comprehensive experiments on the BraTS benchmarks show that the proposed cross-modality deep feature learning framework effectively improves brain tumor segmentation performance compared with both baseline and state-of-the-art methods.
KW - Brain tumor segmentation
KW - Cross-modality feature fusion
KW - Cross-modality feature transition
KW - Feature learning
UR - http://www.scopus.com/inward/record.url?scp=85089004170&partnerID=8YFLogxK
U2 - 10.1016/j.patcog.2020.107562
DO - 10.1016/j.patcog.2020.107562
M3 - Article
SN - 0031-3203
VL - 110
JO - Pattern Recognition
JF - Pattern Recognition
M1 - 107562
ER -