Robust Correlation Tracking for UAV Videos via Feature Fusion and Saliency Proposals

Xizhe Xue, Ying Li, Hao Dong, Qiang Shen

Research output: Contribution to journal › Article › peer-review

10 Citations (Scopus)
129 Downloads (Pure)

Abstract

With the growing availability of low-cost commercial unmanned aerial vehicles (UAVs), increasing research effort has focused on object tracking in videos recorded from UAVs. However, tracking in UAV videos poses many challenges arising from platform motion, including background clutter, occlusion, and illumination variation. This paper tackles these challenges by proposing a correlation filter-based tracker with feature fusion and saliency proposals. First, we integrate multiple feature types, such as dimensionality-reduced color name (CN) and histogram of oriented gradients (HOG) features, to improve the performance of correlation filters for UAV videos. However, a fused feature acting as a multivector descriptor cannot be used directly in prior correlation filters. Therefore, a fused-feature correlation filter is proposed that can directly convolve with a multivector descriptor to obtain a single-channel response indicating the location of an object. Furthermore, we introduce saliency proposals as a re-detector to reduce background interference caused by occlusion or distractors. Finally, an adaptive template-update strategy guided by saliency information is used to alleviate possible model drift. Systematic comparative evaluations performed on two popular UAV datasets show the effectiveness of the proposed approach.
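
The pipeline outlined in the abstract (fuse HOG and CN channels, correlate the fused descriptor against a learned filter to obtain a single response map, then update the model with a confidence-weighted learning rate) can be illustrated with a minimal sketch. This is not the paper's exact fused-feature filter or saliency re-detector; it assumes a generic multi-channel discriminative correlation filter formulation, and the function names (`fused_response`, `update_filter`) and the confidence-weighted learning rate are illustrative assumptions.

```python
import numpy as np

def fused_response(patch_features, filter_f):
    """Correlate a stack of fused feature channels (e.g., HOG + CN) with a
    learned multi-channel filter and sum over channels into a single
    response map.

    patch_features: (H, W, C) real-valued fused feature descriptor
    filter_f:       (H, W, C) complex filter, already in the Fourier domain
    Returns an (H, W) response map; its peak gives the estimated target shift.
    """
    # Per-channel correlation in the Fourier domain, summed over channels.
    zf = np.fft.fft2(patch_features, axes=(0, 1))
    response_f = np.sum(np.conj(filter_f) * zf, axis=2)
    return np.real(np.fft.ifft2(response_f))

def update_filter(filter_f, new_filter_f, confidence, base_lr=0.02):
    """Adaptive linear template update: scale the learning rate by a
    confidence score (e.g., one derived from saliency agreement) so the
    model adapts slowly when the current detection is unreliable."""
    lr = base_lr * confidence
    return (1.0 - lr) * filter_f + lr * new_filter_f
```

In such a scheme, the argmax of the response map gives the estimated target displacement; the paper's saliency proposals would additionally be consulted as a re-detector when this peak is weak, which is omitted from the sketch above.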
Original language: English
Article number: 1644
Journal: Remote Sensing
Volume: 10
Issue number: 10
DOIs
Publication status: Published - 16 Oct 2018

Keywords

  • Correlation filter
  • Feature fusion
  • Saliency detection
  • UAV video
  • Visual tracking
