
An Autonomous Learning-Based Algorithm for Joint Channel and Power Level Selection by D2D Pairs in Heterogeneous Cellular Networks

Files in This Item:
TCOM-TPS-16-0122.R2 Final.pdf (1.92 MB, PDF)
Please use this identifier to cite or link to this item: http://hdl.handle.net/2115/63515

Title: An Autonomous Learning-Based Algorithm for Joint Channel and Power Level Selection by D2D Pairs in Heterogeneous Cellular Networks
Authors: Asheralieva, Alia
Miyanaga, Yoshikazu
Keywords: Device-to-device communication
heterogeneous networks
interference management
reinforcement learning
resource allocation
Issue Date: Sep-2016
Publisher: IEEE (Institute of Electrical and Electronics Engineers)
Journal Title: IEEE Transactions on Communications
Volume: 64
Issue: 9
Start Page: 3996
End Page: 4012
Publisher DOI: 10.1109/TCOMM.2016.2593468
Abstract: We study the problem of autonomous operation of the device-to-device (D2D) pairs in a heterogeneous cellular network with multiple base stations (BSs). The spectrum bands of the BSs (which may overlap with each other) comprise sets of orthogonal wireless channels. We consider the following spectrum usage scenarios: 1) the D2D pairs transmit over the dedicated frequency bands and 2) the D2D pairs operate on the shared cellular/D2D channels. The goal of each device pair is to jointly select the wireless channel and power level to maximize its reward, defined as the difference between the achieved throughput and the cost of power consumption, constrained by its minimum tolerable signal-to-interference-plus-noise ratio (SINR) requirements. We formulate this problem as a stochastic noncooperative game with multiple players (D2D pairs) in which each player becomes a learning agent whose task is to learn its best strategy (based on the locally observed information), and we develop a fully autonomous multi-agent Q-learning algorithm converging to a mixed-strategy Nash equilibrium. The proposed learning method is implemented in a Long Term Evolution-Advanced (LTE-A) network and evaluated via OPNET-based simulations. The algorithm shows relatively fast convergence and near-optimal performance after a small number of iterations.
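To illustrate the kind of decision problem the abstract describes, the following is a minimal, self-contained sketch of a single agent (one D2D pair) selecting a joint (channel, power level) action via epsilon-greedy Q-learning, with reward defined as Shannon throughput minus a linear power cost and an SINR feasibility constraint. All numerical values (channel set, power levels, interference, gains, prices) are hypothetical placeholders, and the stateless bandit-style update is a simplification of the paper's multi-agent, mixed-strategy formulation.

```python
import math
import random

# Hypothetical setup (not from the paper): 3 orthogonal channels, 3 power levels.
CHANNELS = [0, 1, 2]
POWER_LEVELS = [0.1, 0.5, 1.0]              # transmit power levels (W)
ACTIONS = [(c, p) for c in CHANNELS for p in POWER_LEVELS]

INTERFERENCE = {0: 0.30, 1: 0.10, 2: 0.60}  # per-channel interference+noise (W)
GAIN = 0.8                                  # link gain (assumed constant here)
COST = 2.0                                  # price per unit of transmit power
SINR_MIN = 1.0                              # minimum tolerable SINR

def reward(channel, power):
    """Throughput minus power cost; penalized if the SINR constraint fails."""
    sinr = GAIN * power / INTERFERENCE[channel]
    if sinr < SINR_MIN:
        return -1.0
    return math.log2(1.0 + sinr) - COST * power

def q_learning(episodes=5000, alpha=0.1, epsilon=0.1, seed=0):
    """Epsilon-greedy Q-learning over joint (channel, power) actions.

    The update is stateless (no next-state term), i.e. a multi-armed-bandit
    simplification of the full stochastic game in the paper.
    """
    rng = random.Random(seed)
    q = {a: 0.0 for a in ACTIONS}
    for _ in range(episodes):
        if rng.random() < epsilon:
            a = rng.choice(ACTIONS)          # explore
        else:
            a = max(q, key=q.get)            # exploit current estimate
        q[a] += alpha * (reward(*a) - q[a])  # incremental Q-update
    return q

q = q_learning()
best_action = max(q, key=q.get)
```

With these placeholder numbers the agent settles on channel 1 (lowest interference) at the middle power level: the highest power level yields more throughput but its extra power cost outweighs the gain, which is exactly the throughput-versus-power-cost trade-off the reward in the abstract encodes.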
Rights: © 2016 IEEE. Personal use of this material is permitted. Permission from IEEE must be obtained for all other uses, in any current or future media, including reprinting/republishing this material for advertising or promotional purposes, creating new collective works, for resale or redistribution to servers or lists, or reuse of any copyrighted component of this work in other works.
Type: article (author version)
URI: http://hdl.handle.net/2115/63515
Appears in Collections:情報科学院・情報科学研究院 (Graduate School of Information Science and Technology / Faculty of Information Science and Technology) > 雑誌発表論文等 (Peer-reviewed Journal Articles, etc)

Submitter: Asheralieva Alia
