Cross-Modality Deep Feature Learning for Brain Tumor Segmentation

  • Dingwen Zhang
  • Guohai Huang
  • Qiang Zhang
  • Jungong Han
  • Junwei Han
  • Yizhou Yu

Research output: Contribution to journal › Article › peer-review

265 Citations (Scopus)
240 Downloads (Pure)

Abstract

Recent advances in machine learning and the prevalence of digital medical images have opened up an opportunity to address the challenging brain tumor segmentation (BTS) task using deep convolutional neural networks. However, unlike the widely available RGB image data, the medical image data used in brain tumor segmentation are relatively scarce in scale but richer in modality. To this end, this paper proposes a novel cross-modality deep feature learning framework to segment brain tumors from multi-modality MRI data. The core idea is to mine rich patterns across the multi-modality data to make up for the insufficient data scale. The proposed cross-modality deep feature learning framework consists of two learning processes: the cross-modality feature transition (CMFT) process and the cross-modality feature fusion (CMFF) process, which aim to learn rich feature representations by transferring knowledge across different modality data and by fusing knowledge from different modality data, respectively. Comprehensive experiments on the BraTS benchmarks show that the proposed framework effectively improves brain tumor segmentation performance compared with baseline methods and state-of-the-art methods.
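The abstract does not specify how the CMFF process combines per-modality features, so the sketch below is only a minimal illustration of the general idea of fusing features from multiple MRI modalities: each modality's feature map is weighted by a learned scalar gate and the maps are summed. All names (`fuse_modalities`, `gate_weights`, the toy modality logits) are hypothetical and not taken from the paper.

```python
import numpy as np

def softmax(x, axis=-1):
    """Numerically stable softmax over the given axis."""
    e = np.exp(x - x.max(axis=axis, keepdims=True))
    return e / e.sum(axis=axis, keepdims=True)

def fuse_modalities(features, gate_weights):
    """Fuse per-modality feature maps with scalar gating weights.

    features:     dict of modality name -> (C, H, W) feature array
    gate_weights: dict of modality name -> scalar logit (learned in a
                  real network; fixed here for illustration)
    Returns a (C, H, W) fused map: a softmax-weighted sum over modalities.
    """
    names = sorted(features)
    logits = np.array([gate_weights[n] for n in names])
    w = softmax(logits)  # normalized fusion weights, sum to 1
    return sum(wi * features[n] for wi, n in zip(w, names))

# Toy example with the four BraTS MRI modalities (T1, T1ce, T2, FLAIR).
rng = np.random.default_rng(0)
feats = {m: rng.standard_normal((8, 4, 4)) for m in ["t1", "t1ce", "t2", "flair"]}
logits = {"t1": 0.1, "t1ce": 0.5, "t2": 0.2, "flair": 0.8}
out = fuse_modalities(feats, logits)  # fused (8, 4, 4) feature map
```

In a real segmentation network the gates would typically be spatial attention maps predicted from the features themselves rather than fixed scalars; the scalar form is used here only to keep the fusion step visible.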
Original language: English
Article number: 107562
Journal: Pattern Recognition
Volume: 110
Early online date: 01 Nov 2020
Digital Object Identifiers (DOIs)
Status: Published - 01 Feb 2021
