Low-Tubal-Rank Tensor Completion Using Alternating Minimization
Published in: IEEE Transactions on Information Theory, Vol. 66, No. 3, pp. 1714–1737
Format: Journal Article
Language: English
Published: New York: IEEE (The Institute of Electrical and Electronics Engineers, Inc.), 01.03.2020
Summary: The low-tubal-rank tensor model has been recently proposed for real-world multidimensional data. In this paper, we study the low-tubal-rank tensor completion problem, i.e., recovering a third-order tensor by observing a subset of its elements selected uniformly at random. We propose a fast iterative algorithm, called Tubal-AltMin, that is inspired by a similar approach for low-rank matrix completion. The unknown low-tubal-rank tensor is represented as the product of two much smaller tensors, with the low-tubal-rank property automatically incorporated, and Tubal-AltMin alternates between estimating those two tensors via tensor least squares minimization. First, we note that tensor least squares minimization differs from its matrix counterpart and is nontrivial, as the circular convolution operator of the low-tubal-rank tensor model is intertwined with the sub-sampling operator. Second, the theoretical performance guarantee is challenging since Tubal-AltMin is iterative and nonconvex. We prove that 1) Tubal-AltMin generates a best rank-r approximation up to any predefined accuracy ε at an exponential rate, and 2) for an n × n × k tensor M with tubal-rank r ≪ n, the required sampling complexity is O(nr²k‖M‖²_F log³(n) / σ̄²_rk), where σ̄_rk is the rk-th singular value of the block-diagonal matrix representation of M in the frequency domain, and the computational complexity is O(n²r²k³ log(n) log(n/ε)). Finally, on both synthetic data and real-world video data, evaluation results show that compared with tensor-nuclear-norm minimization using the alternating direction method of multipliers (TNN-ADMM), Tubal-AltMin-Simple (a simplified implementation of Tubal-AltMin) improves the recovery error by several orders of magnitude. In experiments, Tubal-AltMin-Simple is faster than TNN-ADMM by a factor of 5 for a 200 × 200 × 20 tensor.
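The abstract describes the scheme at a high level: factor the unknown tensor as a t-product X * Y of two small tensors and alternate least-squares updates of the factors on the observed entries, where the t-product is computed slice-wise in the FFT domain along the third mode. The NumPy sketch below illustrates that structure only; it is not the paper's Tubal-AltMin. In particular, the paper's core contribution — an exact tensor least-squares solver for the subproblems — is replaced here by a few guaranteed-descent gradient steps, and all function names are ours, not the authors'.

```python
import numpy as np

def t_product(A, B):
    # t-product: FFT along the third mode, then slice-wise matrix
    # products in the frequency domain, then inverse FFT.
    Af = np.fft.fft(A, axis=2)
    Bf = np.fft.fft(B, axis=2)
    Cf = np.einsum('ijk,jlk->ilk', Af, Bf)
    return np.real(np.fft.ifft(Cf, axis=2))

def t_transpose(A):
    # Transpose under the t-product: transpose each frontal slice
    # and reverse the order of slices 2..k.
    At = np.transpose(A, (1, 0, 2))
    return np.concatenate([At[:, :, :1], At[:, :, 1:][:, :, ::-1]], axis=2)

def tubal_altmin_sketch(M, mask, r, outer=100, inner=5):
    # Simplified alternating minimization for mask * M ≈ mask * (X * Y)
    # with X: n1 x r x k and Y: r x n2 x k.  Each least-squares
    # subproblem is only *approximated* by gradient steps; the step
    # size 1/(k * ||Y||_F^2) lower-bounds 1/L, where L is the
    # per-frequency Lipschitz constant, so each step cannot increase
    # the masked residual.
    n1, n2, k = M.shape
    rng = np.random.default_rng(0)
    X = rng.standard_normal((n1, r, k)) / np.sqrt(n1)
    Y = rng.standard_normal((r, n2, k)) / np.sqrt(n2)
    for _ in range(outer):
        for _ in range(inner):  # approximate X-subproblem
            R = mask * (t_product(X, Y) - M)
            X -= t_product(R, t_transpose(Y)) / (k * np.sum(Y**2) + 1e-12)
        for _ in range(inner):  # approximate Y-subproblem
            R = mask * (t_product(X, Y) - M)
            Y -= t_product(t_transpose(X), R) / (k * np.sum(X**2) + 1e-12)
    return X, Y
```

On a small random low-tubal-rank instance with dense sampling, the masked residual of X * Y drops steadily from a random initialization; the paper's exact least-squares solves and its sampling-complexity analysis are what make the method fast and provable at scale.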
ISSN: 0018-9448, 1557-9654
DOI: 10.1109/TIT.2019.2959980