Abstract
Semantic segmentation is paramount for autonomous vehicles to gain a deeper understanding of the surrounding traffic environment and enhance safety. Deep neural networks (DNNs) have achieved remarkable performance in semantic segmentation. However, training such a DNN requires a large amount of data labeled at the pixel level, and manually annotating dense pixel-level labels is a labor-intensive task. To tackle the problem of scarce labeled data, deep domain adaptation (DDA) methods have recently been developed that exploit synthetic driving scenes to significantly reduce the manual annotation cost. Despite remarkable advances, these methods unfortunately suffer from a generalizability problem: they fail to provide a holistic representation of the mapping from the source image domain to the target image domain. In this article, we therefore develop a novel ensemble DDA approach that trains models with different upsampling strategies, discrepancy functions, and segmentation loss functions. The models are thus complementary to each other and achieve better generalization in the target image domain. Such a design not only improves the adapted semantic segmentation performance but also strengthens the model's reliability and robustness. Extensive experimental results demonstrate the superiority of our approach over several state-of-the-art methods.
| Original language | English |
| --- | --- |
| Pages (from-to) | 1496-1506 |
| Number of pages | 11 |
| Journal | IEEE Transactions on Cognitive and Developmental Systems |
| Volume | 14 |
| Issue number | 4 |
| Early online date | 05 Oct 2021 |
| DOIs | |
| Publication status | Published - 01 Dec 2022 |
Keywords
- Adaptation models
- Autonomous vehicles
- Deep learning
- Domain adaptation
- Feature extraction
- Generative adversarial networks
- Image processing
- Image segmentation
- Integrated circuits
- Semantic segmentation
- Semantics
- Training