TY - JOUR
T1 - Region-Object Relation-Aware Dense Captioning via Transformer
AU - Shao, Zhuang
AU - Han, Jungong
AU - Marnerides, Demetris
AU - Debattista, Kurt
N1 - Publisher Copyright:
IEEE
PY - 2022/3/11
Y1 - 2022/3/11
N2 - Dense captioning provides detailed captions of complex visual scenes. While a number of successes have been achieved in recent years, there are still two broad limitations: 1) most existing methods adopt an encoder-decoder framework, where the contextual information is sequentially encoded using long short-term memory (LSTM). However, the forget gate mechanism of LSTM makes it vulnerable when dealing with a long sequence; and 2) the vast majority of prior art considers regions of interest (RoIs) equally important, thus failing to focus on more informative regions. The consequence is that the generated captions cannot highlight important contents of the image, which does not seem natural. To overcome these limitations, in this article, we propose a novel end-to-end transformer-based dense image captioning architecture, termed the transformer-based dense captioner (TDC). TDC learns the mapping between images and their dense captions via a transformer, prioritizing more informative regions. To this end, we present a novel unit, named the region-object correlation score unit (ROCSU), to measure the importance of each region, where the relationships between detected objects and the region, alongside the confidence scores of detected objects within the region, are taken into account. Extensive experimental results and ablation studies on the standard dense-captioning datasets demonstrate the superiority of the proposed method over state-of-the-art methods.
AB - Dense captioning provides detailed captions of complex visual scenes. While a number of successes have been achieved in recent years, there are still two broad limitations: 1) most existing methods adopt an encoder-decoder framework, where the contextual information is sequentially encoded using long short-term memory (LSTM). However, the forget gate mechanism of LSTM makes it vulnerable when dealing with a long sequence; and 2) the vast majority of prior art considers regions of interest (RoIs) equally important, thus failing to focus on more informative regions. The consequence is that the generated captions cannot highlight important contents of the image, which does not seem natural. To overcome these limitations, in this article, we propose a novel end-to-end transformer-based dense image captioning architecture, termed the transformer-based dense captioner (TDC). TDC learns the mapping between images and their dense captions via a transformer, prioritizing more informative regions. To this end, we present a novel unit, named the region-object correlation score unit (ROCSU), to measure the importance of each region, where the relationships between detected objects and the region, alongside the confidence scores of detected objects within the region, are taken into account. Extensive experimental results and ablation studies on the standard dense-captioning datasets demonstrate the superiority of the proposed method over state-of-the-art methods.
KW - Decoding
KW - Dense image captioning
KW - Feature extraction
KW - Object detection
KW - region-object correlation score unit (ROCSU)
KW - Task analysis
KW - Training
KW - transformer-based dense image captioner
KW - Transformers
KW - Visualization
KW - Artificial Intelligence
KW - Computer Networks and Communications
KW - Computer Science Applications
KW - Software
UR - http://www.scopus.com/inward/record.url?scp=85126297679&partnerID=8YFLogxK
U2 - 10.1109/TNNLS.2022.3152990
DO - 10.1109/TNNLS.2022.3152990
M3 - Article
C2 - 35275824
AN - SCOPUS:85126297679
SN - 2162-237X
JO - IEEE Transactions on Neural Networks and Learning Systems
JF - IEEE Transactions on Neural Networks and Learning Systems
ER -