An Exploration of Optimization Algorithms for High Performance Tensor Completion

Shaden Smith, Jongsoo Park, and George Karypis
Supercomputing (SC16), 2016
Abstract
Tensor completion is a powerful tool used to estimate or recover missing values in multi-way data. It has seen great success in domains such as product recommendation and healthcare. Tensor completion is most often accomplished via low-rank sparse tensor factorization, a computationally expensive non-convex optimization problem which has only recently been studied in the context of parallel computing. In this work, we study three optimization algorithms that have been successfully applied to tensor completion: alternating least squares (ALS), stochastic gradient descent (SGD), and coordinate descent (CCD++). We explore opportunities for parallelism on shared- and distributed-memory systems and address challenges such as memory- and operation-efficiency, load balance, cache locality, and communication. Among our advancements are an SGD algorithm which combines stratification with asynchronous communication, an ALS algorithm rich in level-3 BLAS routines, and a communication-efficient CCD++ algorithm. We evaluate our optimizations on a variety of real datasets using a modern supercomputer and demonstrate speedups through 1024 cores. These improvements effectively reduce time-to-solution from hours to seconds on real-world datasets. We show that after our optimizations, ALS is advantageous on parallel systems of small-to-moderate scale, while both ALS and CCD++ will provide the lowest time-to-solution on large-scale distributed systems.
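As a rough illustration of the ALS approach the abstract refers to, the sketch below updates one factor matrix of a rank-F CP model from only the observed entries of a sparse 3-way tensor. It is a minimal, self-contained example under assumed conventions (COO-format coordinates, NumPy dense factors, a hypothetical als_update_mode0 helper), not the paper's SPLATT implementation, which targets shared- and distributed-memory systems with BLAS-3 kernels.

import numpy as np

def als_update_mode0(coords, vals, A, B, C, lam=1e-2):
    """Update every row of factor A while holding B and C fixed.

    coords : (nnz, 3) integer array of observed indices (i, j, k)
    vals   : (nnz,) observed tensor values at those indices
    A,B,C  : CP factor matrices of shape (I,F), (J,F), (K,F)
    lam    : Tikhonov regularization weight
    """
    F = A.shape[1]
    G = np.zeros((A.shape[0], F, F))   # per-row FxF normal-equation LHS
    rhs = np.zeros((A.shape[0], F))    # per-row RHS
    for (i, j, k), x in zip(coords, vals):
        h = B[j] * C[k]                # Hadamard product of the fixed rows
        G[i] += np.outer(h, h)         # accumulate only over observed entries
        rhs[i] += x * h
    eye = lam * np.eye(F)
    for i in range(A.shape[0]):
        A[i] = np.linalg.solve(G[i] + eye, rhs[i])
    return A

# Toy usage with synthetic data; a full ALS sweep would cycle over all
# three modes, permuting the roles of A, B, and C.
rng = np.random.default_rng(0)
I, J, K, F = 50, 40, 30, 8
A, B, C = (rng.standard_normal((n, F)) for n in (I, J, K))
coords = rng.integers(0, [I, J, K], size=(200, 3))
vals = rng.standard_normal(200)
A = als_update_mode0(coords, vals, A, B, C)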
Comments
Best Student Paper Finalist.
Research topics: Parallel processing | SPLATT