
Favorite Video Classification Based on Multimodal Bidirectional LSTM

Files in This Item:
08496751_002.pdf (9.83 MB, PDF)
Please use this identifier to cite or link to this item: http://hdl.handle.net/2115/72229

Title: Favorite Video Classification Based on Multimodal Bidirectional LSTM
Authors: Ogawa, Takahiro
Sasaka, Yuma
Maeda, Keisuke
Haseyama, Miki
Keywords: Multimodal fusion
video classification
LSTM
EEG
Issue Date: 2018
Publisher: IEEE (Institute of Electrical and Electronics Engineers)
Journal Title: IEEE Access
Volume: 6
Start Page: 61401
End Page: 61409
Publisher DOI: 10.1109/ACCESS.2018.2876710
Abstract: Video classification based on a user's preference (information of what a user likes: WUL) is important for realizing human-centered video retrieval, and a better understanding of the rationale behind WUL would greatly support successful video retrieval. However, only a few studies have examined the relationship between information of what a user watches and WUL. This paper presents a new method that classifies videos on the basis of WUL by using video features and electroencephalogram (EEG) signals collaboratively with a multimodal bidirectional Long Short-Term Memory (Bi-LSTM) network. To the best of our knowledge, there has been no study on WUL-based video classification that uses video features and EEG signals collaboratively with an LSTM. First, we apply transfer learning to WUL-based video classification, since the number of labels (liked or not liked) attached to videos by users is small, which makes the classification difficult. Furthermore, we conduct a user study showing that the representation of psychophysiological signals computed by the Bi-LSTM is effective for WUL-based video classification. Experimental results show that our deep neural network feature representations can distinguish WUL for each subject.
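To make the fusion idea concrete, the following is a minimal sketch (not the authors' code) of a multimodal Bi-LSTM classifier in PyTorch: each modality's sequence (video features and EEG features) is encoded by its own bidirectional LSTM, the final hidden states are concatenated as a late fusion, and a linear layer predicts the binary liked/not-liked label. All dimensions, the concatenation-based fusion, and the class names are assumptions for illustration; the paper's actual architecture, feature extractors, and transfer-learning setup are not specified in this record.

```python
import torch
import torch.nn as nn

class MultimodalBiLSTM(nn.Module):
    """Illustrative two-stream Bi-LSTM with late fusion (assumed design)."""

    def __init__(self, video_dim=2048, eeg_dim=64, hidden_dim=128):
        super().__init__()
        # One bidirectional LSTM encoder per modality.
        self.video_lstm = nn.LSTM(video_dim, hidden_dim,
                                  batch_first=True, bidirectional=True)
        self.eeg_lstm = nn.LSTM(eeg_dim, hidden_dim,
                                batch_first=True, bidirectional=True)
        # Fused vector = forward+backward states from both modalities.
        self.classifier = nn.Linear(4 * hidden_dim, 2)  # liked / not liked

    def forward(self, video_seq, eeg_seq):
        # video_seq: (batch, T_v, video_dim); eeg_seq: (batch, T_e, eeg_dim)
        _, (h_v, _) = self.video_lstm(video_seq)
        _, (h_e, _) = self.eeg_lstm(eeg_seq)
        # h_*: (2, batch, hidden_dim); index 0/1 = forward/backward direction.
        v = torch.cat([h_v[0], h_v[1]], dim=1)
        e = torch.cat([h_e[0], h_e[1]], dim=1)
        fused = torch.cat([v, e], dim=1)  # late fusion by concatenation
        return self.classifier(fused)

# Example forward pass with random tensors standing in for real features.
model = MultimodalBiLSTM()
video = torch.randn(8, 30, 2048)  # 8 clips, 30 frames of CNN features
eeg = torch.randn(8, 100, 64)     # 8 clips, 100 EEG time steps
logits = model(video, eeg)        # shape: (8, 2)
```

In a small-label regime like the one the abstract describes, the video encoder would typically be initialized from a model pretrained on a larger task and fine-tuned, which is one common way to realize the transfer learning the authors mention.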
Rights: © 2018 IEEE. Translations and content mining are permitted for academic research only. Personal use is also permitted, but republication/redistribution requires IEEE permission. See http://www.ieee.org/publications_standards/publications/rights/index.html for more information.
Type: article
URI: http://hdl.handle.net/2115/72229
Appears in Collections: Graduate School of Information Science and Technology / Faculty of Information Science and Technology > Peer-reviewed Journal Articles, etc

Submitter: Ogawa, Takahiro (小川 貴弘)
