Inpainting Normal Maps for Lightstage data

Hancheng Zuo, Bernard Tiddeman

Research output: Contribution to conference › Paper › peer-review


Abstract

This paper presents a new method for inpainting normal maps using a generative adversarial network (GAN) model. Normal maps can be acquired from a lightstage, but when used for performance capture, areas of the face risk being obscured by movement (e.g. by arms, hair or props). Inpainting aims to fill missing areas of an image with plausible data. This work builds on previous work on general image inpainting, using a bow tie-like generator network and a discriminator network, with alternating training of the generator and discriminator. The generator tries to synthesise images that match the ground truth and that can also fool the discriminator, which classifies real vs processed images. The discriminator is periodically retrained to improve its performance at identifying the processed images. In addition, our method takes into account the nature of normal map data, which requires modifying the loss function: we replace a mean squared error loss with a cosine loss when training the generator. Due to the small amount of training data available, even when using synthetic datasets, we require significant augmentation, which also needs to account for the particular nature of the input data. Image flipping and in-plane rotations must correspondingly flip and rotate the normal vectors themselves. During training, we monitored key performance metrics including average loss, Structural Similarity Index Measure (SSIM), and Peak Signal-to-Noise Ratio (PSNR) for the generator, alongside average loss and accuracy for the discriminator. Our analysis reveals that the proposed model generates high-quality, realistic inpainted normal maps, demonstrating the potential for application to performance capture. The results of this investigation provide a baseline on which future researchers could build, with more advanced networks and comparison against inpainting of the source images used to generate the normal maps.
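The two normal-map-specific adaptations described above (a cosine loss in place of mean squared error, and augmentation that transforms the normal vectors along with the pixels) can be sketched as follows. This is an illustrative sketch, not the authors' code; it assumes normal maps are stored as H x W x 3 arrays of unit vectors in [-1, 1], and the function names are hypothetical.

```python
import numpy as np

def cosine_loss(pred, target, eps=1e-8):
    """Mean (1 - cos angle) between predicted and ground-truth normals.

    Unlike per-channel MSE, this penalises angular deviation of the
    normal direction, which is what matters for shading.
    """
    p = pred / (np.linalg.norm(pred, axis=-1, keepdims=True) + eps)
    t = target / (np.linalg.norm(target, axis=-1, keepdims=True) + eps)
    return float(np.mean(1.0 - np.sum(p * t, axis=-1)))

def flip_normal_map(nmap):
    """Horizontally flip a normal map.

    Mirroring the pixels alone is not enough: the x component of each
    normal must also be negated so the vectors still point outward.
    (In-plane rotation likewise requires rotating the (x, y) components
    of each normal by the same angle as the image.)
    """
    flipped = nmap[:, ::-1].copy()
    flipped[..., 0] *= -1.0
    return flipped
```

Flipping twice recovers the original map, and the loss of a map against itself is zero, which are useful sanity checks for this kind of augmentation.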
Original language: English
Pages: 45-52
Number of pages: 8
Publication status: Published - 14 Sept 2023
Event: Computer Graphics & Visual Computing - Aberystwyth University, Aberystwyth, United Kingdom of Great Britain and Northern Ireland
Duration: 14 Sept 2023 - 15 Sept 2023
https://cgvc.org.uk/CGVC2023/

Conference

Conference: Computer Graphics & Visual Computing
Abbreviated title: CGVC
Country/Territory: United Kingdom of Great Britain and Northern Ireland
City: Aberystwyth
Period: 14 Sept 2023 - 15 Sept 2023

Keywords

  • Computing methodologies
  • Reconstruction
  • Neural networks

