Memory-Efficient Parallel Computation of Tensor and Matrix Products for Big Tensor Decomposition

Niranjay Ravindran, Nicholas D. Sidiropoulos, Shaden Smith, and George Karypis
48th Asilomar Conference on Signals, Systems, and Computers, 2014
Abstract
Low-rank tensor decomposition has many applications in signal processing and machine learning, and is becoming increasingly important for analyzing big data. A significant challenge is the computation of intermediate products, which can be much larger than the final result of the computation, or even the original tensor. We propose a scheme that allows memory-efficient in-place updates of intermediate matrices. Motivated by recent advances in big tensor decomposition from multiple compressed replicas, we also consider the related problem of memory-efficient tensor compression. The resulting algorithms can be parallelized, and can exploit but do not require sparsity.
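To make the intermediate-blowup problem concrete: in CP decomposition of a 3-way tensor X (I x J x K) with factor matrices A (I x R), B (J x R), C (K x R), the core kernel is the matricized-tensor-times-Khatri-Rao-product (MTTKRP) X_(1)(C ⊙ B), where the Khatri-Rao product C ⊙ B is a JK x R matrix that can dwarf both the final I x R result and the tensor itself. The following is a minimal NumPy sketch of the kind of memory-efficient evaluation the abstract alludes to, accumulating the result slice by slice so only an I x R buffer is ever formed; the function name and the slice-wise loop are illustrative assumptions, not the paper's exact (parallel, sparsity-exploiting) algorithm.

```python
import numpy as np

def mttkrp_mode1(X, B, C):
    """Illustrative memory-efficient mode-1 MTTKRP (a sketch, not the paper's algorithm).

    Computes M = X_(1) (C kr B) for a dense 3-way tensor X (I x J x K)
    without materializing the J*K x R Khatri-Rao product: each frontal
    slice contributes an I x R update, accumulated in place.
    """
    I, J, K = X.shape
    R = B.shape[1]
    M = np.zeros((I, R))
    for k in range(K):
        # X[:, :, k] @ B is I x R; scaling its columns by C[k, :]
        # gives slice k's contribution to the MTTKRP.
        M += (X[:, :, k] @ B) * C[k, :]
    return M

# Sanity check against a direct contraction (einsum is used only as a
# correctness reference here, on sizes small enough that memory is moot).
I, J, K, R = 4, 5, 6, 3
rng = np.random.default_rng(0)
X = rng.standard_normal((I, J, K))
B = rng.standard_normal((J, R))
C = rng.standard_normal((K, R))
M_ref = np.einsum('ijk,jr,kr->ir', X, B, C)
assert np.allclose(mttkrp_mode1(X, B, C), M_ref)
```

The identity behind the loop, M = sum_k X(:, :, k) B diag(C[k, :]), is what makes in-place accumulation possible: the JK x R intermediate never needs to exist, and the per-slice updates are independent, which is also what makes the computation parallelizable.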
Research topics: Data mining