Dual Variational Knowledge Attention for Class Incremental Vision Transformer

Haoran Duan, Rui Sun, Varun Ojha, Tejal Shah, Zhuoxu Huang, Zizhou Ouyang, Yawen Huang, Yang Long, Rajiv Ranjan

Research output: Chapter in Book/Report/Conference proceeding › Conference Proceeding (Non-Journal item)

Abstract

Class incremental learning (CIL) strives to emulate the human cognitive process of continuously learning and adapting to new tasks while retaining knowledge from past experiences. Despite significant advancements in this field, Transformer-based models have not fully leveraged the potential of attention mechanisms to balance the transferable knowledge between tokens and the associated information. This paper addresses this gap by introducing a dual variational knowledge attention (DVKA) mechanism within a Transformer-based encoder-decoder framework tailored for CIL. The DVKA mechanism manages the information flow through the attention maps, ensuring a balanced representation of all classes and mitigating the risk of information dilution as new classes are incrementally introduced. Leveraging the information bottleneck and mutual information principles, the method selectively filters less relevant information, directing the model's focus towards the most significant details for each class. DVKA is designed with two distinct attentions: one focused on the feature level and the other on the token dimension. The feature-focused attention purifies the complex representations arising from diverse classification tasks, ensuring a comprehensive representation of both old and new tasks. The token-focused attention highlights specific tokens, facilitating local discrimination among disparate patches and fostering global coordination across a spectrum of task tokens. Our work is a major stride towards improving Transformer models for class incremental learning, presenting a theoretical rationale and strong experimental results on three widely used datasets.
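To make the abstract's description concrete, the following is a minimal, hypothetical PyTorch sketch of a dual variational attention block with an information-bottleneck regularizer applied along both the feature and token dimensions. The names (DVKABlock, VariationalBottleneck, beta) and the exact placement of the two bottlenecks are illustrative assumptions, not the authors' released implementation.

# Hypothetical sketch of a dual variational knowledge attention block.
# Assumptions: standard multi-head self-attention, a Gaussian
# information bottleneck per branch, and a KL penalty (weight `beta`)
# added to the task loss as the mutual-information regularizer.
import torch
import torch.nn as nn


class VariationalBottleneck(nn.Module):
    """Encodes x into a Gaussian posterior, samples via the
    reparameterization trick, and returns a KL penalty against a
    standard-normal prior (the information-bottleneck term)."""

    def __init__(self, dim):
        super().__init__()
        self.to_mu = nn.Linear(dim, dim)
        self.to_logvar = nn.Linear(dim, dim)

    def forward(self, x):
        mu, logvar = self.to_mu(x), self.to_logvar(x)
        z = mu + torch.randn_like(mu) * torch.exp(0.5 * logvar)
        kl = -0.5 * (1 + logvar - mu.pow(2) - logvar.exp()).mean()
        return z, kl


class DVKABlock(nn.Module):
    """Self-attention followed by two bottlenecks: one over the
    feature (channel) dimension, one over the token dimension."""

    def __init__(self, dim, num_heads=8, num_tokens=197, beta=1e-3):
        super().__init__()
        self.attn = nn.MultiheadAttention(dim, num_heads, batch_first=True)
        self.feature_ib = VariationalBottleneck(dim)        # feature level
        self.token_ib = VariationalBottleneck(num_tokens)   # token level
        self.beta = beta  # weight of the KL regularizer in the loss

    def forward(self, x):  # x: (batch, tokens, dim)
        h, _ = self.attn(x, x, x)
        h_feat, kl_f = self.feature_ib(h)                   # filter channels
        h_tok, kl_t = self.token_ib(h.transpose(1, 2))      # filter tokens
        h = h_feat + h_tok.transpose(1, 2)
        return x + h, self.beta * (kl_f + kl_t)             # residual + penalty


# Usage: the returned KL penalty is added to the classification loss.
if __name__ == "__main__":
    block = DVKABlock(dim=64, num_heads=4, num_tokens=16)
    out, kl_penalty = block(torch.randn(2, 16, 64))
    print(out.shape, kl_penalty.item())

In this sketch, the feature-level branch corresponds to the abstract's feature-focused attention (purifying channel representations shared across old and new tasks), while the token-level branch corresponds to the token-focused attention (selecting discriminative patches); the KL term throttles how much information each branch passes on.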

Original language: English
Title of host publication: 2024 International Joint Conference on Neural Networks (IJCNN)
Publisher: IEEE Press
Number of pages: 8
Volume: 30
ISBN (Electronic): 9798350359312
DOIs
Publication status: Published - 30 Jun 2024
Event: 2024 International Joint Conference on Neural Networks, IJCNN 2024 - Yokohama, Japan
Duration: 30 Jun 2024 – 05 Jul 2024

Publication series

Name: 2024 International Joint Conference on Neural Networks (IJCNN)
Publisher: IEEE Press

Conference

Conference: 2024 International Joint Conference on Neural Networks, IJCNN 2024
Country/Territory: Japan
City: Yokohama
Period: 30 Jun 2024 – 05 Jul 2024

Keywords

  • Continual Learning
  • Transformers
  • Vision Transformer
