Region-Object Relation-Aware Dense Captioning via Transformer

Zhuang Shao, Jungong Han, Demetris Marnerides, Kurt Debattista

Research output: Contribution to journal › Article › peer-review

47 Citations (SciVal)
259 Downloads (Pure)


Dense captioning provides detailed captions of complex visual scenes. While a number of successes have been achieved in recent years, there are still two broad limitations: 1) most existing methods adopt an encoder-decoder framework in which contextual information is sequentially encoded using long short-term memory (LSTM); however, the forget gate mechanism of LSTM makes it vulnerable when dealing with long sequences; and 2) the vast majority of prior art treats all regions of interest (RoIs) as equally important, thus failing to focus on the more informative regions. As a consequence, the generated captions cannot highlight the important contents of the image, which does not seem natural. To overcome these limitations, in this article, we propose a novel end-to-end transformer-based dense image captioning architecture, termed the transformer-based dense captioner (TDC). TDC learns the mapping between images and their dense captions via a transformer, prioritizing more informative regions. To this end, we present a novel unit, named the region-object correlation score unit (ROCSU), to measure the importance of each region, where the relationships between detected objects and the region, alongside the confidence scores of detected objects within the region, are taken into account. Extensive experimental results and ablation studies on the standard dense-captioning datasets demonstrate the superiority of the proposed method over state-of-the-art methods.
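The abstract describes ROCSU as scoring a region's importance from the relationships between detected objects and the region together with the detectors' confidence in those objects. The sketch below is a hypothetical, simplified illustration of that idea, not the authors' formulation: it scores a region by summing detector confidences weighted by each object's overlap with the region, where the function names and the IoU-based weighting are assumptions made for illustration.

```python
def iou(box_a, box_b):
    """Intersection-over-union of two (x1, y1, x2, y2) boxes."""
    ax1, ay1, ax2, ay2 = box_a
    bx1, by1, bx2, by2 = box_b
    ix1, iy1 = max(ax1, bx1), max(ay1, by1)
    ix2, iy2 = min(ax2, bx2), min(ay2, by2)
    inter = max(0.0, ix2 - ix1) * max(0.0, iy2 - iy1)
    area_a = (ax2 - ax1) * (ay2 - ay1)
    area_b = (bx2 - bx1) * (by2 - by1)
    union = area_a + area_b - inter
    return inter / union if union > 0 else 0.0

def region_score(region_box, detections):
    """Illustrative region-importance score (an assumption, not ROCSU):
    sum of detector confidences, each weighted by the object's overlap
    with the region, so regions covering many confident detections
    score higher.

    detections: list of ((x1, y1, x2, y2), confidence) pairs.
    """
    return sum(iou(region_box, box) * conf for box, conf in detections)
```

Under such a scheme, regions with higher scores would be prioritized when generating dense captions, so captions emphasize the more informative parts of the image.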

Original language: English
Number of pages: 12
Journal: IEEE Transactions on Neural Networks and Learning Systems
Early online date: 11 Mar 2022
Digital Object Identifiers (DOIs)
Status: E-pub ahead of print - 11 Mar 2022
