Revisiting Feature Fusion for RGB-T Salient Object Detection

Qiang Zhang, Tonglin Xiao, Nianchang Huang, Dingwen Zhang, Jungong Han

Research output: Contribution to journal › Article › peer-review

116 Citations (Scopus)
561 Downloads (Pure)

Abstract

While many RGB-based saliency detection algorithms have recently shown the capability of segmenting salient objects from an image, they still suffer from unsatisfactory performance when dealing with complex scenarios, insufficient illumination, or occluded appearances. To overcome this problem, this article studies RGB-T saliency detection, where we take advantage of the thermal modality's robustness to poor illumination and occlusion. To achieve this goal, we revisit feature fusion for mining intrinsic RGB-T saliency patterns and propose a novel deep feature fusion network, which consists of multi-scale, multi-modality, and multi-level feature fusion modules. Specifically, the multi-scale feature fusion module captures rich contextual features from each modality, while the multi-modality and multi-level feature fusion modules integrate complementary features across different modalities and across different levels of features, respectively. To validate the effectiveness of the proposed approach, we conduct comprehensive experiments on the RGB-T saliency detection benchmark. The experimental results demonstrate that our approach outperforms other state-of-the-art methods and conventional feature fusion modules by a large margin.
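The abstract names the three fusion modules but does not spell out their internals. The following is a minimal PyTorch sketch of what a multi-modality fusion step of this general kind could look like; the specific layer choices here (channel-wise concatenation, a 3×3 convolution, and channel attention) are illustrative assumptions, not the authors' actual design.

```python
import torch
import torch.nn as nn


class MultiModalityFusion(nn.Module):
    """Illustrative fusion of RGB and thermal feature maps.

    A sketch of the general idea only: complementary features from the
    two modalities are concatenated, reduced by a convolution, and
    reweighted with channel attention. All layer choices are assumptions.
    """

    def __init__(self, channels: int):
        super().__init__()
        # Reduce the concatenated two-modality features back to `channels`.
        self.fuse = nn.Conv2d(2 * channels, channels, kernel_size=3, padding=1)
        # Channel attention to reweight the fused features.
        self.attn = nn.Sequential(
            nn.AdaptiveAvgPool2d(1),
            nn.Conv2d(channels, channels, kernel_size=1),
            nn.Sigmoid(),
        )

    def forward(self, rgb_feat: torch.Tensor, thermal_feat: torch.Tensor) -> torch.Tensor:
        fused = self.fuse(torch.cat([rgb_feat, thermal_feat], dim=1))
        return fused * self.attn(fused)


# Usage: fuse two 64-channel feature maps of the same spatial size.
rgb = torch.randn(1, 64, 56, 56)
thermal = torch.randn(1, 64, 56, 56)
out = MultiModalityFusion(64)(rgb, thermal)
print(out.shape)  # torch.Size([1, 64, 56, 56])
```

In the paper, such a fusion step would be applied at multiple backbone levels, with the multi-level module then aggregating the per-level outputs; this sketch shows only a single level.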

Original language: English
Article number: 9161021
Pages (from-to): 1804-1818
Number of pages: 15
Journal: IEEE Transactions on Circuits and Systems for Video Technology
Volume: 31
Issue number: 5
Early online date: 06 Aug 2020
Digital Object Identifiers (DOIs)
Status: Published - 01 May 2021
