VeST: Very Sparse Tucker Factorization of Large-Scale Tensors

4 Apr 2019  ·  Moonjeong Park, Jun-Gi Jang, Sael Lee

Given a large tensor, how can we decompose it into a sparse core tensor and factor matrices so that the results are easier to interpret? How can we do this without sacrificing accuracy? Existing approaches either produce dense results or suffer from low accuracy. In this paper, we propose VeST, a tensor factorization method for partially observable data that outputs a very sparse core tensor and factor matrices. VeST performs an initial decomposition, determines unimportant entries in the decomposition results, removes those entries, and carefully updates the remaining ones. To determine which entries are unimportant, we define and use an entry-wise 'responsibility' for the decomposed results. The entries are updated iteratively in a coordinate descent manner and in parallel for scalable computation. Extensive experiments show that VeST is at least 2.2 times sparser and at least 2.8 times more accurate than competitors. Moreover, VeST is scalable with respect to the input order, dimensionality, and number of observable entries. Using VeST, we successfully interpret real-world tensor data based on the sparsity pattern of the resulting factor matrices.
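The abstract describes a prune-and-refine loop: decompose, score entries by "responsibility", remove the least important ones, and update the rest. The sketch below illustrates that general idea only and is not the authors' algorithm: it uses a small fully observed toy tensor, HOSVD for the initial decomposition, a brute-force error-increase score as a stand-in for the paper's responsibility measure, and a single least-squares core refit instead of parallel coordinate descent. All function and variable names are illustrative.

```python
# Illustrative sketch only (NOT the VeST algorithm): prune factor entries
# using a crude "responsibility" proxy, then refit the core tensor.
import numpy as np

rng = np.random.default_rng(0)
ranks = (3, 3, 3)

def reconstruct(core, factors):
    """Tucker reconstruction: multiply the core by U_n along each mode n."""
    out = core
    for mode, U in enumerate(factors):
        out = np.moveaxis(np.tensordot(U, out, axes=(1, mode)), 0, mode)
    return out

def error(X, core, factors):
    return np.linalg.norm(X - reconstruct(core, factors))

def hosvd(X, ranks):
    """Initial dense decomposition via higher-order SVD."""
    factors = []
    for mode, r in enumerate(ranks):
        unfold = np.moveaxis(X, mode, 0).reshape(X.shape[mode], -1)
        U, _, _ = np.linalg.svd(unfold, full_matrices=False)
        factors.append(U[:, :r])
    core = X.copy()
    for mode, U in enumerate(factors):
        core = np.moveaxis(np.tensordot(U.T, core, axes=(1, mode)), 0, mode)
    return core, factors

def responsibility_proxy(X, core, factors):
    """Score each factor entry by how much the fit worsens when it is zeroed."""
    base = error(X, core, factors)
    scores = [np.zeros_like(U) for U in factors]
    for m, U in enumerate(factors):
        for idx in np.ndindex(U.shape):
            saved, U[idx] = U[idx], 0.0
            scores[m][idx] = error(X, core, factors) - base
            U[idx] = saved
    return scores

# Toy fully observed 10x10x10 tensor with exact Tucker structure.
X = reconstruct(rng.normal(size=ranks), [rng.normal(size=(10, r)) for r in ranks])

# 1) Initial decomposition, 2) score entries, 3) prune the least responsible half.
core, factors = hosvd(X, ranks)
scores = responsibility_proxy(X, core, factors)
cutoff = np.quantile(np.concatenate([s.ravel() for s in scores]), 0.5)
for U, s in zip(factors, scores):
    U[s <= cutoff] = 0.0

# 4) Update the remaining parameters: here, a single least-squares refit of the
#    core with the pruned (sparse) factors held fixed, in place of the paper's
#    iterative parallel coordinate descent over all surviving entries.
K = np.kron(np.kron(factors[0], factors[1]), factors[2])  # C-order vectorization
core = np.linalg.lstsq(K, X.ravel(), rcond=None)[0].reshape(ranks)

print("factor sparsity:", np.mean([np.mean(U == 0.0) for U in factors]))
print("relative error :", error(X, core, factors) / np.linalg.norm(X))
```

The real method additionally handles missing entries (it fits only the observed ones) and chooses how much to prune automatically or via a target sparsity, which this toy sketch does not attempt.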
