Exploring a unified low rank representation for multi-focus image fusion

Qiang Zhang, Fan Wang, Yongjiang Luo*, Jungong Han

*Corresponding author for this work

Research output: Contribution to journal › Article › peer-review



Recent years have witnessed a trend of using image representation models, including sparse representation (SR), low-rank representation (LRR) and their variants, for multi-focus image fusion. Despite the thrilling preliminary results, existing methods conduct the fusion patch by patch, leading to insufficient consideration of the spatial consistency among the image patches within a local region or an object. As a result, not only are spatial artifacts easily introduced into the fused image, but “jagged” artifacts also frequently arise on the boundaries between the focused regions and the de-focused regions, an inherent problem in these patch-based fusion methods. Aiming to address the above problems, we propose, in this paper, a new multi-focus image fusion method integrating super-pixel clustering and a unified LRR (ULRR) model. The entire algorithm is carried out in three steps. In the first step, the source image is segmented into a few super-pixels with irregular sizes, rather than patches with regular sizes, to diminish the “jagged” artifacts and meanwhile preserve the boundaries of objects in the fused image. Secondly, a super-pixel clustering-based fusion strategy is employed to further reduce the spatial artifacts in the fused images. This is achieved by using a proposed ULRR model, which imposes low-rank constraints on each super-pixel cluster; this is apparently more reasonable for images with complicated scenes. Moreover, a Laplacian regularization term is incorporated in the proposed ULRR model to ensure the spatial consistency among the super-pixels within the same cluster. Finally, a measure of focus for each super-pixel is defined to identify the focused and de-focused regions in the source image by jointly using the representation coefficients and sparse errors derived from the proposed ULRR model.
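The abstract does not state the ULRR objective explicitly. As a hedged sketch only, a cluster-wise low-rank representation with a Laplacian regularization term of the kind described above is commonly written as follows (the symbols $Z$, $E$, $L$ and the weights $\lambda$, $\beta$ are illustrative assumptions, not the paper's exact notation):

$$
\min_{Z,\,E}\ \|Z\|_{*} \;+\; \beta\,\mathrm{tr}\!\left(Z L Z^{\top}\right) \;+\; \lambda\,\|E\|_{2,1}
\quad \text{s.t.} \quad X = XZ + E,
$$

where $X$ stacks the super-pixel feature vectors column-wise, the nuclear norm $\|Z\|_{*}$ encourages low rank within each super-pixel cluster, $L$ is a graph Laplacian built over the super-pixels so that spatially adjacent super-pixels receive similar representation coefficients, and the $\ell_{2,1}$ norm on $E$ models sample-specific sparse errors, which the abstract says are reused in the focus measure.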
Extensive experiments have been conducted, and the results demonstrate the superiority of the proposed fusion method in diminishing the spatial artifacts in the fused image and the “jagged” boundary artifacts between the focused and de-focused regions, compared to state-of-the-art fusion algorithms.
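Solvers for nuclear-norm models of the kind described in the abstract typically rely on singular value thresholding (SVT), the proximal operator of the nuclear norm. The following is a minimal, self-contained NumPy sketch of that one building block, not the paper's full ULRR solver; the toy matrix and threshold are illustrative assumptions:

```python
import numpy as np

def svt(M, tau):
    """Singular value thresholding: the proximal operator of tau * nuclear norm.

    Shrinks every singular value of M by tau (clipping at zero), which is the
    core Z-update step in ADMM/ALM solvers for low-rank representation models.
    """
    U, s, Vt = np.linalg.svd(M, full_matrices=False)
    s_shrunk = np.maximum(s - tau, 0.0)
    return (U * s_shrunk) @ Vt

# Toy example: thresholding a noisy low-rank matrix suppresses the small
# noise singular values and recovers a low-rank estimate.
rng = np.random.default_rng(0)
A = rng.standard_normal((20, 3)) @ rng.standard_normal((3, 20))  # rank-3 signal
noisy = A + 0.01 * rng.standard_normal((20, 20))                 # full-rank input
Z = svt(noisy, tau=1.0)                                          # low-rank output
```

In a full alternating-direction solver for the sketched LRR objective, this step would update `Z` at each iteration, while separate steps update the sparse error `E` (via row-wise shrinkage for the l2,1 norm) and the Lagrange multipliers.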

Original language: English
Article number: 107752
Journal: Pattern Recognition
Early online date: 19 Feb 2021
Publication status: Published - 01 May 2021


  • Multi-focus image fusion
  • Spatial consistency
  • Super-pixel clustering
  • Unified low-rank representation


