Faster TKD: Towards Lightweight Decomposition for Large-Scale Tensors With Randomized Block Sampling
Published in: IEEE Transactions on Knowledge and Data Engineering, Vol. 35, No. 8, pp. 7966-7979
Main Authors: , , , ,
Format: Journal Article
Language: English
Published: New York: The Institute of Electrical and Electronics Engineers, Inc. (IEEE), 01.08.2023
Summary: The Tucker Decomposition (TKD) provides low-dimensional, informative representations of real-world large-scale tensorial data, which are needed to extract latent features and enhance the original data. However, computing such a decomposition directly for a dense tensor is usually computationally prohibitive, owing to the repeated large-scale tensor-matrix products it requires. Instead of direct decomposition, this paper proposes an efficient algorithm, Faster TKD, a lightweight decomposition approach for large-scale tensors based on randomized sampling. The algorithm first converts the original large-scale tensor into a small-scale subtensor via a full-mode sampling operation; the core tensor of the TKD can then be computed directly from this subtensor with low complexity. Finally, an approximate TKD of the original large-scale tensor is obtained after sequentially computing approximate full-mode factor matrices. A theoretical error analysis shows that the approximation error approaches zero with high probability, and the proposed algorithm is verified on real tensorial data totaling 23821.24 GB. (A minimal code sketch of the sampling idea appears after this record.)
ISSN: 1041-4347 (print), 1558-2191 (electronic)
DOI: 10.1109/TKDE.2022.3218846
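The abstract only outlines the pipeline, so the following is a minimal NumPy sketch of the general idea it describes (randomly sample each mode unfolding to estimate the factor subspaces, then project to form the core), not the authors' exact Faster TKD algorithm. The function name `sampled_tucker`, the `oversample` parameter, and the column-sampling scheme are illustrative assumptions; the paper's block-sampling strategy and error guarantees are not reproduced here.

```python
import numpy as np

def sampled_tucker(X, ranks, oversample=10, seed=0):
    """Approximate Tucker decomposition via per-mode randomized sampling.

    A rough sketch of the idea in the abstract: instead of operating on the
    full unfolding of X in every mode, draw a small random subset of its
    columns, estimate the mode's factor subspace from that subtensor, and
    finally project X onto all factor subspaces to get the core tensor.
    """
    rng = np.random.default_rng(seed)
    factors = []
    for mode, r in enumerate(ranks):
        # Mode-n unfolding: bring `mode` to the front, flatten the rest.
        Xm = np.moveaxis(X, mode, 0).reshape(X.shape[mode], -1)
        # Sample a few columns rather than touching the whole unfolding.
        k = min(r + oversample, Xm.shape[1])
        cols = rng.choice(Xm.shape[1], size=k, replace=False)
        # An orthonormal basis of the sampled columns approximates
        # the dominant mode-n subspace.
        U, _, _ = np.linalg.svd(Xm[:, cols], full_matrices=False)
        factors.append(U[:, :r])
    # Core tensor: mode-n product of X with each factor transpose.
    core = X
    for mode, U in enumerate(factors):
        core = np.moveaxis(
            np.tensordot(U.T, np.moveaxis(core, mode, 0), axes=1), 0, mode
        )
    return core, factors

# Usage: decompose a small random tensor and check reconstruction error.
X = np.random.rand(40, 50, 60)
core, factors = sampled_tucker(X, ranks=(5, 5, 5))
X_hat = core
for mode, U in enumerate(factors):
    X_hat = np.moveaxis(
        np.tensordot(U, np.moveaxis(X_hat, mode, 0), axes=1), 0, mode
    )
print("relative error:", np.linalg.norm(X - X_hat) / np.linalg.norm(X))
```

The point of the sampling step is that each SVD runs on an I_n-by-k sampled matrix rather than the full I_n-by-(prod of other dims) unfolding, which is what makes this style of decomposition tractable at the tensor scales the paper targets.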