

Multi-Modal Sensor Fusion-Based Semantic Segmentation for Snow Driving Scenarios

Files in This Item:
Final-Manuscript.pdf (3.13 MB, PDF)
Please use this identifier to cite or link to this item: http://hdl.handle.net/2115/82600

Title: Multi-Modal Sensor Fusion-Based Semantic Segmentation for Snow Driving Scenarios
Authors: Vachmanus, Sirawich
Ravankar, Ankit A.
Emaru, Takanori
Kobayashi, Yukinori
Keywords: Roads
Snow
Image segmentation
Sensors
Semantics
Feature extraction
Training
Machine learning
semantic segmentation
thermal camera
data fusion
Issue Date: 1-Aug-2021
Publisher: IEEE (Institute of Electrical and Electronics Engineers)
Journal Title: IEEE Sensors Journal
Volume: 21
Issue: 15
Start Page: 16839
End Page: 16851
Publisher DOI: 10.1109/JSEN.2021.3077029
Abstract: In recent years, autonomous vehicle driving technology and advanced driver assistance systems have played a key role in improving road safety. However, weather conditions such as snow pose severe challenges for autonomous driving and remain an active research area. Advances in computation and sensor technology have paved the way for deep learning and neural-network-based techniques that can replace classical approaches, thanks to their superior reliability, detection resilience, and improved accuracy. In this research, we investigate the semantic segmentation of roads in snowy environments. We propose a multi-modal fused RGB-T semantic segmentation network that takes a color (RGB) image and a thermal map (T) as inputs. This paper introduces a novel fusion module that combines the feature maps from both inputs. We evaluate the proposed model on a new snow dataset that we collected and on other publicly available datasets. The segmentation results show that the proposed fused RGB-T input can segment human subjects in snowy environments better than an RGB-only input. The fusion module plays a vital role in improving the efficiency of multiple-input neural networks for person detection. Our results show that the proposed network achieves a higher success rate than other state-of-the-art networks. The combination of our fusion module and pyramid supervision path produced the best results in both mean accuracy and mean intersection over union on every dataset.
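The core idea in the abstract, combining feature maps from an RGB branch and a thermal branch inside the network, can be illustrated with a minimal sketch. Note this is not the module proposed in the paper: the function name `fuse_feature_maps`, the array shapes, and the sigmoid channel-gating scheme below are illustrative assumptions only.

```python
import numpy as np

def fuse_feature_maps(rgb_feat: np.ndarray, thermal_feat: np.ndarray) -> np.ndarray:
    """Illustrative RGB-T fusion sketch (not the paper's actual module).

    Gates the thermal feature map by a per-channel weight in [0, 1],
    derived from the global-average-pooled thermal response, then adds
    the gated thermal features to the RGB features element-wise.
    Both inputs are (channels, height, width) arrays of equal shape.
    """
    assert rgb_feat.shape == thermal_feat.shape
    # Global average pool over spatial dims -> one value per channel.
    pooled = thermal_feat.mean(axis=(1, 2))
    # Sigmoid squashes the pooled response into a [0, 1] gate per channel.
    gate = 1.0 / (1.0 + np.exp(-pooled))
    # Broadcast the per-channel gate over H and W and fuse additively.
    return rgb_feat + gate[:, None, None] * thermal_feat

# Example with assumed dimensions: 64 channels, 32x32 spatial resolution.
rng = np.random.default_rng(0)
rgb = rng.random((64, 32, 32)).astype(np.float32)
thermal = rng.random((64, 32, 32)).astype(np.float32)
fused = fuse_feature_maps(rgb, thermal)
```

In a real network this fusion would sit between convolutional encoder stages, and the gate would typically be learned rather than computed directly from the pooled activations.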
Rights: © 2021 IEEE. Personal use of this material is permitted. Permission from IEEE must be obtained for all other uses, in any current or future media, including reprinting/republishing this material for advertising or promotional purposes, creating new collective works, for resale or redistribution to servers or lists, or reuse of any copyrighted component of this work in other works.
Type: article (author version)
URI: http://hdl.handle.net/2115/82600
Appears in Collections:工学院・工学研究院 (Graduate School of Engineering / Faculty of Engineering) > 雑誌発表論文等 (Peer-reviewed Journal Articles, etc)

Submitter: Sirawich Vachmanus
