
Hokkaido University Collection of Scholarly and Academic Papers

Sparse random feature maps for the item-multiset kernel

Files in This Item:

The file(s) associated with this item can be obtained from the publisher at https://doi.org/10.1016/j.neunet.2021.06.024


Title: Sparse random feature maps for the item-multiset kernel
Authors: Atarashi, Kyohei
Oyama, Satoshi
Kurihara, Masahito
Keywords: Kernel method
Random feature
Issue Date: Nov-2021
Publisher: Elsevier
Journal Title: NEURAL NETWORKS
Volume: 143
Start Page: 500
End Page: 514
Publisher DOI: 10.1016/j.neunet.2021.06.024
Abstract: Random feature maps are a promising tool for large-scale kernel methods. Since most random feature maps generate dense random features, causing memory explosion, it is hard to apply them to very-large-scale sparse datasets. Factorization machines and related models, which use feature combinations efficiently, scale well to large-scale sparse datasets and have been used in many applications. However, their optimization problems are typically non-convex. Therefore, although they are optimized by gradient-based iterative methods, such methods cannot find globally optimal solutions in general and require a large number of iterations for convergence. In this paper, we define the item-multiset kernel, which is a generalization of the itemset kernel and dot product kernels. Unfortunately, random feature maps for the itemset kernel and dot product kernels cannot approximate the item-multiset kernel. We thus develop a method that converts an item-multiset kernel into an itemset kernel, enabling the item-multiset kernel to be approximated by a random feature map for the itemset kernel. We propose two random feature maps for the itemset kernel that run faster and are more memory efficient than the existing feature map for the itemset kernel. They also generate sparse random features when the original (input) feature vector is sparse, and thus linear models using the proposed maps are fast and memory efficient. Experiments using real-world datasets demonstrated the effectiveness of the proposed methodology: linear models using the proposed random feature maps ran from 10 to 100 times faster than ones based on existing methods. (C) 2021 The Author(s). Published by Elsevier Ltd.
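The abstract contrasts the proposed sparse feature maps with standard random feature maps, which are dense. For orientation only, here is a minimal sketch of the classic dense baseline, random Fourier features for the RBF kernel (Rahimi–Recht style); this is not the paper's method, and the function name and parameters are illustrative assumptions:

```python
import numpy as np

def rff_map(X, D, gamma, rng):
    """Dense random Fourier features approximating the RBF kernel
    k(x, y) = exp(-gamma * ||x - y||^2). Illustrative baseline only;
    the paper's itemset-kernel maps instead produce *sparse* features."""
    d = X.shape[1]
    # Random projection directions and phases define the feature map.
    W = rng.normal(scale=np.sqrt(2.0 * gamma), size=(d, D))
    b = rng.uniform(0.0, 2.0 * np.pi, size=D)
    # Every output entry is a cosine of a dense projection: no sparsity,
    # which is the memory issue the abstract points out.
    return np.sqrt(2.0 / D) * np.cos(X @ W + b)

rng = np.random.default_rng(0)
X = rng.normal(size=(5, 3))
Z = rff_map(X, D=2000, gamma=0.5, rng=rng)
K_approx = Z @ Z.T                    # inner products of random features
K_exact = np.exp(-0.5 * np.sum((X[:, None] - X[None]) ** 2, axis=-1))
print(np.max(np.abs(K_approx - K_exact)))  # shrinks as D grows
```

The point of the comparison: `Z` here is a dense (n, D) array regardless of how sparse `X` is, whereas the feature maps proposed in the paper preserve input sparsity, which is what enables the 10–100x speedups reported for linear models.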
Type: article
URI: http://hdl.handle.net/2115/83216
Appears in Collections: Graduate School of Information Science and Technology / Faculty of Information Science and Technology > Peer-reviewed Journal Articles, etc

