Accelerating the Tucker Decomposition with Compressed Sparse Tensors

Shaden Smith and George Karypis
European Conference on Parallel Processing, 653-668, 2017
Abstract
The Tucker decomposition is a higher-order analogue of the singular value decomposition and is a popular method of performing analysis on multi-way data (tensors). Computing the Tucker decomposition of a sparse tensor is demanding in terms of both memory and computational resources. The primary kernel of the factorization is a chain of tensor-matrix multiplications (TTMc). State-of-the-art algorithms accelerate the underlying computations by trading off memory to memoize the intermediate results of TTMc in order to reuse them across iterations. We present an algorithm based on a compressed data structure for sparse tensors and show that many computational redundancies during TTMc can be identified and pruned without the memory overheads of memoization. In addition, our algorithm can further reduce the number of operations by exploiting an additional amount of user-specified memory. We evaluate our algorithm on a collection of real-world and synthetic datasets and demonstrate up to 20.7x speedup while using 28.5x less memory than the state-of-the-art parallel algorithm.
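For context, the sketch below illustrates the TTMc kernel the abstract refers to, written for a 3-way sparse tensor stored in coordinate (COO) form. This is a minimal per-nonzero formulation, not SPLATT's compressed-tensor algorithm; the function name, argument layout, and mode ordering are illustrative assumptions.

```python
import numpy as np

def ttmc_mode0(coords, vals, num_rows, B, C):
    """Mode-0 TTMc for a 3-way sparse tensor in COO form (illustrative sketch).

    For each nonzero X[i, j, k] = v, accumulates v * outer(B[j], C[k]) into Y[i],
    which corresponds to multiplying the tensor by the factor matrices of all
    modes except mode 0.
    """
    r1, r2 = B.shape[1], C.shape[1]
    Y = np.zeros((num_rows, r1, r2))
    for (i, j, k), v in zip(coords, vals):
        # each nonzero contributes a scaled outer product of two factor-matrix rows
        Y[i] += v * np.outer(B[j], C[k])
    return Y
```

In this naive form, nonzeros that share index prefixes recompute the same partial products; the paper's compressed sparse tensor representation is aimed at identifying and pruning exactly that kind of redundancy without memoizing full intermediate results.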
Comments
Distinguished Paper Award
Research topics: Data mining | Parallel processing | SPLATT