Pixelated Semantic Colorization

Jiaojiao Zhao, Jungong Han, Ling Shao, Cees Snoek

Research output: Contribution to journal › Article › peer-review

60 Citations (Scopus)
32 Downloads (Pure)

Abstract

While many image colorization algorithms have recently shown the capability of producing plausible color versions from gray-scale photographs, they still suffer from limited semantic understanding. To address this shortcoming, we propose to exploit pixelated object semantics to guide image colorization. The rationale is that human beings perceive and distinguish colors based on the semantic categories of objects. Starting from an autoregressive model, we generate image color distributions, from which diverse colored results are sampled. We propose two ways to incorporate object semantics into the colorization model: through a pixelated semantic embedding and a pixelated semantic generator. Specifically, the proposed network includes two branches. One branch learns what the object is, while the other branch learns the object colors. The network jointly optimizes a color embedding loss, a semantic segmentation loss and a color generation loss, in an end-to-end fashion. Experiments on Pascal VOC2012 and COCO-stuff reveal that our network, when trained with semantic segmentation labels, produces more realistic and finer results compared to the colorization state-of-the-art.
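The abstract describes a network trained end-to-end on a combination of three terms: a color embedding loss, a semantic segmentation loss, and a color generation loss. As a rough illustration of such a joint objective, here is a minimal pure-Python sketch; the per-pixel loss values, the cross-entropy form for the segmentation term, and the equal default weights are all assumptions for illustration, not details taken from the paper.

```python
import math

def cross_entropy(pred_probs, target_index):
    """Per-pixel cross-entropy over a predicted class distribution.
    Used here as an assumed form for the segmentation term."""
    return -math.log(pred_probs[target_index])

def joint_loss(color_embed_losses, seg_probs, seg_labels,
               color_gen_losses, w_embed=1.0, w_seg=1.0, w_gen=1.0):
    """Weighted sum of three per-pixel losses, averaged over pixels.

    color_embed_losses: assumed precomputed per-pixel embedding losses
    seg_probs, seg_labels: per-pixel class distributions and ground truth
    color_gen_losses: assumed precomputed per-pixel generation losses
    The weights w_* are hypothetical; the paper's balancing is not given
    in the abstract.
    """
    n = len(seg_labels)
    embed = sum(color_embed_losses) / n
    seg = sum(cross_entropy(p, y)
              for p, y in zip(seg_probs, seg_labels)) / n
    gen = sum(color_gen_losses) / n
    return w_embed * embed + w_seg * seg + w_gen * gen
```

For example, with two pixels whose embedding losses are 0.5, uniform two-class segmentation predictions, and generation losses of 1.0, the joint loss is 0.5 + ln(2) + 1.0. In practice each term would be produced by the corresponding network branch and backpropagated jointly.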
Original language: English
Pages (from-to): 818–834
Number of pages: 17
Journal: International Journal of Computer Vision
Volume: 128
Issue number: 4
Early online date: 07 Dec 2019
Digital Object Identifiers (DOIs)
Status: Published - 01 Apr 2020
Published externally: Yes
