

Simulation Study of Low Latency Network Architecture Using Mobile Edge Computing

Files in This Item: E100.D_2016NTP0003.pdf (864.07 kB, PDF)
Please use this identifier to cite or link to this item: http://hdl.handle.net/2115/71035

Title: Simulation Study of Low Latency Network Architecture Using Mobile Edge Computing
Authors: INTHARAWIJITR, Krittin
IIDA, Katsuyoshi
KOGA, Hiroyuki
Keywords: mobile edge computing
5G
low latency
network architecture
processor sharing
performance evaluation
Issue Date: May-2017
Publisher: The Institute of Electronics, Information and Communication Engineers
Journal Title: IEICE Transactions on Information and Systems
Volume: E100.D
Issue: 5
Start Page: 963
End Page: 972
Publisher DOI: 10.1587/transinf.2016NTP0003
Abstract: Attaining extremely low latency service in 5G cellular networks is an important challenge in communication research. Higher QoS in the next-generation network could enable several unprecedented services, such as the Tactile Internet, Augmented Reality, and Virtual Reality. However, these services will all need support from powerful computational resources provided through cloud computing. Unfortunately, the geographic locations of cloud data centers may make it impossible to satisfy the latency targets of 5G networks: the physical distance between servers and users will sometimes be too great to enable reaction within the service time boundary. This problem of long latency caused by long communication distances can be addressed by Mobile Edge Computing (MEC), which places many servers along the edges of networks. MEC shortens communication latency, but total latency consists of both the transmission time and the processing time. Always selecting the closest edge server therefore often leads to longer computing latency, especially when many users crowd around particular edge servers. This research studies the effects of both latencies: communication latency is represented by hop count, and computation latency is modeled by processor sharing (PS). An optimization model and server-selection policies are also proposed. Quantitative evaluations using simulations show that selecting the server with the lowest total latency yields the best performance, and that permitting some latency beyond the barrier would further improve results.
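The selection idea described in the abstract can be illustrated with a minimal sketch. This is not the authors' code or their exact model; the per-hop delay, workload units, and server parameters below are assumptions for illustration only. It shows why the nearest edge server is not always the best choice: under processor sharing, a busy nearby server can impose a larger computing latency than a lightly loaded server a few hops farther away.

```python
# Illustrative sketch (assumed parameters, not the paper's model):
# total latency = communication latency (hop count x per-hop delay)
#               + computing latency under processor sharing (PS).
# The policy selects the server minimizing total latency, rejecting
# the request if even the best server exceeds the latency barrier.
from dataclasses import dataclass

@dataclass
class EdgeServer:
    name: str
    hops: int          # hop count from the user to this edge server
    active_jobs: int   # jobs currently sharing this server's processor
    capacity: float    # processing rate (work units per ms)

PER_HOP_DELAY_MS = 2.0  # assumed fixed delay per hop

def communication_latency(server: EdgeServer) -> float:
    return server.hops * PER_HOP_DELAY_MS

def computing_latency(server: EdgeServer, workload: float) -> float:
    # Under PS, n concurrent jobs each get 1/n of the processing rate,
    # so the new job's service time stretches by a factor of n.
    jobs_after_arrival = server.active_jobs + 1
    return workload / (server.capacity / jobs_after_arrival)

def total_latency(server: EdgeServer, workload: float) -> float:
    return communication_latency(server) + computing_latency(server, workload)

def select_server(servers, workload, barrier_ms):
    # Lowest-total-latency policy with an admission barrier.
    best = min(servers, key=lambda s: total_latency(s, workload))
    if total_latency(best, workload) > barrier_ms:
        return None  # blocked: no server meets the latency bound
    return best

servers = [
    EdgeServer("near-busy", hops=1, active_jobs=8, capacity=1.0),
    EdgeServer("far-idle", hops=4, active_jobs=0, capacity=1.0),
]
best = select_server(servers, workload=5.0, barrier_ms=20.0)
```

Here the closest server ("near-busy", total 2 + 45 = 47 ms) loses to the farther idle one ("far-idle", total 8 + 5 = 13 ms), matching the abstract's observation that hop count alone is a poor selection criterion.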
Rights: Copyright ©2018 The Institute of Electronics, Information and Communication Engineers
Relation (URI): https://search.ieice.org/
Type: article
URI: http://hdl.handle.net/2115/71035
Appears in Collections:情報基盤センター (Information Initiative Center) > 雑誌発表論文等 (Peer-reviewed Journal Articles, etc)

Submitter: 飯田 勝吉 (IIDA, Katsuyoshi)
