Abstract
Dense captioning generates detailed captions for complex visual scenes. While considerable progress has been made in recent years, two broad limitations remain: 1) most existing methods adopt an encoder-decoder framework in which contextual information is sequentially encoded with long short-term memory (LSTM); the forget-gate mechanism of LSTM, however, makes it vulnerable to long sequences; and 2) the vast majority of prior work treats all regions of interest (RoIs) as equally important and thus fails to focus on the more informative regions. As a consequence, the generated captions cannot highlight the important contents of the image, which appears unnatural. To overcome these limitations, in this article, we propose a novel end-to-end transformer-based dense image captioning architecture, termed the transformer-based dense captioner (TDC). TDC learns the mapping between images and their dense captions via a transformer, prioritizing more informative regions. To this end, we present a novel unit, named the region-object correlation score unit (ROCSU), which measures the importance of each region by taking into account both the relationships between the detected objects and the region and the confidence scores of the detected objects within the region. Extensive experimental results and ablation studies on standard dense-captioning datasets demonstrate the superiority of the proposed method over state-of-the-art methods.
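The abstract describes ROCSU only at a high level: a region's importance depends on its relationships with the detected objects and on the detector's confidence in those objects. The snippet below is a minimal, hypothetical sketch of such a score, assuming the relationship is approximated by box overlap (IoU) and the aggregation is a confidence-weighted sum; the function and parameter names are illustrative, and the actual formulation in the paper may differ.

```python
import numpy as np


def box_iou(region, box):
    """Intersection-over-union between a caption region and a detected object box.
    Boxes are given as (x1, y1, x2, y2)."""
    x1, y1 = max(region[0], box[0]), max(region[1], box[1])
    x2, y2 = min(region[2], box[2]), min(region[3], box[3])
    inter = max(0.0, x2 - x1) * max(0.0, y2 - y1)
    area_region = (region[2] - region[0]) * (region[3] - region[1])
    area_box = (box[2] - box[0]) * (box[3] - box[1])
    union = area_region + area_box - inter
    return inter / union if union > 0 else 0.0


def region_object_correlation_score(region, object_boxes, object_confidences):
    """Hypothetical ROCSU-style score: weight each detected object's overlap with
    the region by the detector's confidence, then sum. Regions containing more,
    and more confidently detected, objects receive higher scores."""
    if len(object_boxes) == 0:
        return 0.0
    overlaps = np.array([box_iou(region, b) for b in object_boxes])
    confidences = np.asarray(object_confidences, dtype=float)
    return float(np.sum(overlaps * confidences))
```

Under this reading, regions with higher scores would be treated as more informative and prioritized when the transformer generates the dense captions.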
| Original language | English |
| --- | --- |
| Number of pages | 12 |
| Journal | IEEE Transactions on Neural Networks and Learning Systems |
| Early online date | 11 Mar 2022 |
| DOIs | |
| Publication status | E-pub ahead of print - 11 Mar 2022 |
Keywords
- Decoding
- Dense image captioning
- Feature extraction
- Object detection
- region-object correlation score unit (ROCSU)
- Task analysis
- Training
- transformer-based dense image captioner
- Transformers
- Visualization
- Artificial Intelligence
- Computer Networks and Communications
- Computer Science Applications
- Software