
Learning intra-domain style-invariant representation for unsupervised domain adaptation of semantic segmentation

This item is licensed under: Creative Commons Attribution-NonCommercial-NoDerivatives 4.0 International

Files in This Item:
manuscript.pdf (21.36 MB, PDF)
Please use this identifier to cite or link to this item: http://hdl.handle.net/2115/92818

Title: Learning intra-domain style-invariant representation for unsupervised domain adaptation of semantic segmentation
Authors: Li, Zongyao
Togo, Ren
Ogawa, Takahiro
Haseyama, Miki
Keywords: Style-invariant representation
Self-ensembling
Domain adaptation
Issue Date: 20-Jul-2022
Publisher: Elsevier
Journal Title: Pattern Recognition
Volume: 132
Start Page: 108911
Publisher DOI: 10.1016/j.patcog.2022.108911
Abstract: In this paper, we aim to tackle the problem of unsupervised domain adaptation (UDA) of semantic segmentation and improve the UDA performance with a novel conception of learning intra-domain style-invariant representation. Previous UDA methods focused on reducing the inter-domain inconsistency between the source domain and the target domain. However, due to the different data distributions of the two domains, reducing the inter-domain inconsistency cannot ensure the generalization ability of the trained model in the target domain. Therefore, to improve the UDA performance, we take into consideration the intra-domain diversity of the target domain for the first time in studies on UDA and aim to train the model to generalize well to the diverse intra-domain styles. To achieve this, we propose a self-ensembling method to learn the intra-domain style-invariant representation, and we introduce a semantic-aware multimodal image-to-image translation model to obtain images with diversified intra-domain styles. Our method achieves state-of-the-art performance on two synthetic-to-real adaptation benchmarks, and we demonstrate the effectiveness of our method by conducting extensive experiments.
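
The self-ensembling idea described in the abstract can be illustrated with a minimal PyTorch-style sketch: a teacher network (an exponential moving average of the student) predicts on a target-domain image, the student predicts on a style-diversified version of the same image, and a consistency loss aligns the two predictions so the learned representation becomes insensitive to intra-domain style. This is an illustrative approximation, not the authors' training code: the `stylize` callable stands in for the paper's semantic-aware multimodal image-to-image translation model, and the loss choice, EMA momentum, and function names are assumptions.

    # Illustrative sketch of self-ensembling consistency training across
    # intra-domain styles (not the authors' implementation).
    import copy
    import torch
    import torch.nn.functional as F

    def make_teacher(student):
        # Teacher starts as a frozen copy of the student.
        teacher = copy.deepcopy(student)
        for p in teacher.parameters():
            p.requires_grad_(False)
        return teacher

    def ema_update(teacher, student, momentum=0.999):
        # Teacher weights track an exponential moving average of the student's.
        with torch.no_grad():
            for t_p, s_p in zip(teacher.parameters(), student.parameters()):
                t_p.mul_(momentum).add_(s_p, alpha=1.0 - momentum)

    def consistency_step(student, teacher, target_img, stylize, optimizer):
        # Teacher predicts on the original target-domain image.
        teacher.eval()
        with torch.no_grad():
            teacher_prob = F.softmax(teacher(target_img), dim=1)

        # Student predicts on a style-diversified version of the same image
        # (same semantic content, different intra-domain style); `stylize` is a
        # placeholder for a multimodal image-to-image translation model.
        student.train()
        student_logits = student(stylize(target_img))

        # Consistency loss: the student should match the teacher regardless of
        # style, encouraging a style-invariant representation.
        loss = F.kl_div(F.log_softmax(student_logits, dim=1), teacher_prob,
                        reduction="batchmean")
        optimizer.zero_grad()
        loss.backward()
        optimizer.step()
        ema_update(teacher, student)
        return loss.item()

In this sketch, both networks are assumed to be segmentation models returning per-pixel class logits of shape (N, C, H, W); in practice this consistency term would be combined with a supervised loss on labeled source-domain images.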
Rights: © 2022. This manuscript version is made available under the CC-BY-NC-ND 4.0 license https://creativecommons.org/licenses/by-nc-nd/4.0/
Type: article (author version)
URI: http://hdl.handle.net/2115/92818
Appears in Collections: Graduate School of Information Science and Technology / Faculty of Information Science and Technology > Peer-reviewed Journal Articles, etc

Submitter: Li, Zongyao
