
Robust and Unified VLC Decoding System for Square Wave Quadrature Amplitude Modulation Using Deep Learning Approach

Files in This Item:

The file(s) associated with this item can be obtained from the following URL: https://doi.org/10.1109/ACCESS.2019.2952465


Title: Robust and Unified VLC Decoding System for Square Wave Quadrature Amplitude Modulation Using Deep Learning Approach
Authors: Alfarozi, Syukron Abu Ishaq
Pasupa, Kitsuchart
Hashizume, Hiromichi
Woraratpanya, Kuntpong
Sugimoto, Masanori
Keywords: Visible light communication
image sensor communication (ISC)
SW-QAM
optical camera communication (OCC)
neural decoding
convolutional neural network (CNN)
deep learning
Issue Date: 8-Nov-2019
Publisher: IEEE (Institute of Electrical and Electronics Engineers)
Journal Title: IEEE Access
Volume: 7
Start Page: 163262
End Page: 163276
Publisher DOI: 10.1109/ACCESS.2019.2952465
Abstract: In our previous work, we proposed a square wave quadrature amplitude modulation (SW-QAM) scheme for visible light communication (VLC) using an image sensor. Here, we propose a robust and unified system that uses a neural decoding method. This method bundles the essential SW-QAM decoding capabilities, such as LED localization, light interference elimination, and unknown parameter estimation, into a single neural network model. The work uses a convolutional neural network (CNN), which can automatically learn unknown parameters, especially when the input is an image. The neural decoding method provides good solutions for two difficult conditions not covered by our previous SW-QAM scheme: unfixed LED positions and multiple point spread functions (PSFs) of multiple LEDs. To this end, three recent CNN architectures, VGG, ResNet, and DenseNet, are modified to suit our scheme, and two other small CNN architectures, VGG-like and MiniDenseNet, are proposed for devices with limited computing power. Our experimental results show that the proposed neural decoding method achieves a lower error rate than the theoretical decoding, an SW-QAM decoder with a Wiener filter, in various scenarios. Furthermore, we experiment on the moving-camera problem, i.e., unfixed positions of LED points. For this case, a spatial transformer network (STN) layer is added to the neural decoding method, and the method with the new layer achieves a remarkable result.
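For orientation, the "theoretical decoding" baseline that the abstract compares against amounts to nearest-level detection on received intensity levels. A minimal sketch of that idea in NumPy follows; the 4-level constellation, Gray mapping, noise level, and all function names are illustrative assumptions for exposition, not the paper's actual parameters or implementation:

```python
import numpy as np

# Hypothetical 4-level SW-QAM-style constellation: each symbol index (0..3)
# is sent as a discrete, normalized light-intensity level.
LEVELS = np.array([0.0, 1 / 3, 2 / 3, 1.0])

def modulate(symbols):
    """Map symbol indices (0..3) to their intensity levels."""
    return LEVELS[np.asarray(symbols)]

def decode_nearest(received):
    """Baseline decoder: pick the constellation level nearest to each sample."""
    received = np.asarray(received)[:, None]
    return np.argmin(np.abs(received - LEVELS[None, :]), axis=1)

def symbol_error_rate(tx_symbols, rx_symbols):
    """Fraction of symbols decoded incorrectly."""
    return float(np.mean(np.asarray(tx_symbols) != np.asarray(rx_symbols)))

# Simulate transmission over an additive-noise channel (assumed noise model).
rng = np.random.default_rng(0)
tx = rng.integers(0, 4, size=10_000)
noisy = modulate(tx) + rng.normal(0.0, 0.05, size=tx.shape)
ser = symbol_error_rate(tx, decode_nearest(noisy))
```

The neural decoder described in the abstract replaces this per-sample threshold rule with a CNN that learns LED localization, interference handling, and parameter estimation jointly from image input.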
Rights: © 2019 IEEE. Personal use of this material is permitted. Permission from IEEE must be obtained for all other uses, in any current or future media, including reprinting/republishing this material for advertising or promotional purposes, creating new collective works, for resale or redistribution to servers or lists, or reuse of any copyrighted component of this work in other works.
Type: article
URI: http://hdl.handle.net/2115/76813
Appears in Collections:情報科学院・情報科学研究院 (Graduate School of Information Science and Technology / Faculty of Information Science and Technology) > 雑誌発表論文等 (Peer-reviewed Journal Articles, etc)

Export metadata:

OAI-PMH (junii2, jpcoar_1.0)
