SPLATT - Parallel Sparse Tensor Decomposition

Related research publications

  1. HPC formulations of optimization algorithms for tensor completion.

    Shaden Smith, Jongsoo Park, and George Karypis. Parallel Computing 74, 99–117, 2018.

  2. Streaming Tensor Factorization for Infinite Data Sources.

    Shaden Smith, Kejun Huang, Nicholas Sidiropoulos, and George Karypis. SIAM Data Mining Conference, 2018.

  3. Accelerating the Tucker Decomposition with Compressed Sparse Tensors.

    Shaden Smith and George Karypis. European Conference on Parallel Processing, 653–668, 2017.

  4. Constrained Tensor Factorization with Accelerated AO-ADMM.

    Shaden Smith, Alec Beri, and George Karypis. 46th International Conference on Parallel Processing (ICPP), 2017.

  5. A Medium-Grained Algorithm for Distributed Sparse Tensor Factorization.

    Shaden Smith and George Karypis. 30th IEEE International Parallel & Distributed Processing Symposium (IPDPS), 2016.

  6. An Exploration of Optimization Algorithms for High Performance Tensor Completion.

    Shaden Smith, Jongsoo Park, and George Karypis. Supercomputing (SC16), 2016.

  7. Tensor-Matrix Products with a Compressed Sparse Tensor.

    Shaden Smith and George Karypis. 5th Workshop on Irregular Applications: Architectures and Algorithms, Supercomputing, 2015.

  8. DMS: Distributed Sparse Tensor Factorization with Alternating Least Squares.

    Shaden Smith and George Karypis. UMN-CS TR 15-007, 2015.

  9. SPLATT: Efficient and Parallel Sparse Tensor-Matrix Multiplication.

    Shaden Smith, Niranjay Ravindran, Nicholas D. Sidiropoulos, and George Karypis. 29th IEEE International Parallel & Distributed Processing Symposium, 2015.