GTT: Leveraging data characteristics for guiding the tensor train decomposition
Published in | Information systems (Oxford) Vol. 108; p. 102047 |
Main Authors | Li, Mao-Lin; Candan, K. Selçuk; Sapino, Maria Luisa |
Format | Journal Article |
Language | English |
Published | Oxford: Elsevier Ltd; Elsevier Science Ltd, 01.09.2022 |
Subjects | Algorithms; Audio data; Data mining; Decomposition; Information systems; Low-rank embedding; Mathematical analysis; Multimedia; Order selection; Tensor train decomposition; Tensors |
Abstract | The demand for searching and querying multimedia data such as images, video, and audio is omnipresent, and how to effectively access such data for various applications is a critical task. However, these data are usually encoded as multi-dimensional arrays, or tensors, and traditional data mining techniques might be limited due to the curse of dimensionality. Tensor decomposition has been proposed to alleviate this issue. Commonly used tensor decomposition algorithms include CP-decomposition (which seeks a diagonal core) and Tucker-decomposition (which seeks a dense core). Naturally, Tucker maintains more information, but due to the denseness of the core, it is also subject to exponential memory growth with the number of tensor modes. Tensor train (TT) decomposition addresses this problem by seeking a sequence of three-mode cores; unfortunately, there are currently no guidelines for selecting the decomposition sequence. In this paper, we propose GTT, a method for guiding the tensor train decomposition in selecting its decomposition sequence. GTT leverages the data characteristics (including the number of modes, the lengths of the individual modes, density, the distribution of mutual information, and the distribution of entropy) as well as the target decomposition rank to pick a decomposition order that will preserve information. Experiments with various data sets demonstrate that GTT effectively guides the TT-decomposition process towards decomposition sequences that better preserve accuracy. |
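To make the role of the decomposition sequence concrete, the following is a minimal Python/NumPy sketch of plain TT-SVD (one truncated SVD per mode), not the authors' implementation; the helper names tt_svd and tt_reconstruct are illustrative, and GTT's own order-selection logic is not reproduced here. Permuting the modes before running TT-SVD changes the reconstruction error at a fixed rank, which is exactly the degree of freedom GTT tries to set well.

```python
import numpy as np

def tt_svd(tensor, max_rank):
    """Plain TT-SVD: one truncated SVD per mode, left to right (illustrative, not GTT)."""
    dims = tensor.shape
    cores, r_prev = [], 1
    unfolding = tensor.reshape(r_prev * dims[0], -1)
    for k in range(len(dims) - 1):
        u, s, vt = np.linalg.svd(unfolding, full_matrices=False)
        r = min(max_rank, len(s))                       # cap at the target TT-rank
        cores.append(u[:, :r].reshape(r_prev, dims[k], r))
        unfolding = (np.diag(s[:r]) @ vt[:r]).reshape(r * dims[k + 1], -1)
        r_prev = r
    cores.append(unfolding.reshape(r_prev, dims[-1], 1))
    return cores

def tt_reconstruct(cores):
    """Contract the chain of three-mode cores back into a full tensor."""
    full = cores[0]
    for core in cores[1:]:
        full = np.tensordot(full, core, axes=([-1], [0]))
    return full.squeeze(axis=(0, -1))

# The same tensor, decomposed under two different mode orders at the same rank,
# generally yields different reconstruction errors -- the quantity GTT aims to control.
rng = np.random.default_rng(0)
X = rng.random((8, 4, 6, 5))
for order in [(0, 1, 2, 3), (2, 0, 3, 1)]:
    Xp = np.transpose(X, order)
    rel_err = (np.linalg.norm(Xp - tt_reconstruct(tt_svd(Xp, max_rank=3)))
               / np.linalg.norm(Xp))
    print(order, round(rel_err, 4))
```

On a random tensor the gap between orders is usually small; on structured real data it can be substantial, which is what makes the choice of order worth guiding.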
• We identify significant relationships among various data characteristics and the accuracies of different tensor train decomposition orders. • We propose four order selection strategies for tensor train decomposition: (a) aggregate mutual information (AMI), (b) path mutual information (PMI), (c) inverse entropy (IE), and (d) number of parameters (NP). • We show that good tensor train orders can be selected through a hybrid (HYB) strategy that takes into account multiple characteristics of the 15 given categorical-valued data sets and 3 given continuous-valued data sets. |
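The record does not reproduce the paper's definitions of AMI, PMI, IE, or NP, so the sketch below only illustrates the flavour of the simplest of the four: a hypothetical "number of parameters" heuristic that enumerates candidate mode orders and keeps the one whose TT, at a fixed target rank, has the fewest parameters. The function names tt_param_count and np_order_heuristic are assumptions made for this illustration, not the paper's API.

```python
import itertools
import numpy as np

def tt_param_count(mode_sizes, rank):
    """Parameter count of a TT over mode_sizes with internal ranks capped at `rank`.
    Each TT-rank is also bounded by the sizes of the two unfoldings it separates."""
    d = len(mode_sizes)
    left = np.cumprod(mode_sizes)                 # product of modes 0..k
    right = np.cumprod(mode_sizes[::-1])[::-1]    # product of modes k..d-1
    ranks = [1] + [int(min(rank, left[k], right[k + 1])) for k in range(d - 1)] + [1]
    return sum(ranks[k] * mode_sizes[k] * ranks[k + 1] for k in range(d))

def np_order_heuristic(mode_sizes, rank):
    """Hypothetical 'number of parameters' (NP) strategy: enumerate mode orders
    (feasible for small mode counts) and keep the cheapest one."""
    perms = itertools.permutations(range(len(mode_sizes)))
    return min(perms, key=lambda p: tt_param_count([mode_sizes[i] for i in p], rank))

# Example: large modes tend to be pushed to the ends of the train,
# where they multiply a single rank instead of two.
print(np_order_heuristic([50, 3, 20, 4], rank=5))
```

With only a handful of modes, exhaustive enumeration of orders is cheap; the information-theoretic strategies (AMI, PMI, IE) would replace the parameter count with scores derived from mutual information or entropy between modes, as described in the paper.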
ArticleNumber | 102047 |
Author | Sapino, Maria Luisa; Li, Mao-Lin; Candan, K. Selçuk |
Author_xml | 1. Li, Mao-Lin (ORCID 0000-0003-0134-4115), maolinli@asu.edu, Arizona State University, Tempe AZ, USA; 2. Candan, K. Selçuk, candan@asu.edu, Arizona State University, Tempe AZ, USA; 3. Sapino, Maria Luisa (ORCID 0000-0002-7621-3753), mlsapino@di.unito.it, University of Turino, Turino, Italy |
CitedBy_id | crossref_primary_10_1109_TIM_2025_3545986 |
ContentType | Journal Article |
Copyright | 2022 Copyright Elsevier Science Ltd. Sep 2022 |
DOI | 10.1016/j.is.2022.102047 |
DatabaseName | CrossRef; Computer and Information Systems Abstracts; Technology Research Database; Library & Information Sciences Abstracts (LISA); Library & Information Science Abstracts (LISA); ProQuest Computer Science Collection; Advanced Technologies Database with Aerospace; Computer and Information Systems Abstracts Academic; Computer and Information Systems Abstracts Professional |
DatabaseTitle | CrossRef; Technology Research Database; Computer and Information Systems Abstracts – Academic; Library and Information Science Abstracts (LISA); ProQuest Computer Science Collection; Computer and Information Systems Abstracts; Advanced Technologies Database with Aerospace; Computer and Information Systems Abstracts Professional |
DatabaseTitleList | Technology Research Database |
Discipline | Engineering; Computer Science |
EISSN | 1873-6076 |
ExternalDocumentID | 10_1016_j_is_2022_102047 S0306437922000424 |
GrantInformation_xml | pCAR: Discovering and Leveraging Plausibly Causal (p-causal) Relationships to Understand Complex Dynamic Systems (NSF#1909555); BIGDATA: Discovering Context-Sensitive Impact in Complex Systems (NSF#1633381); NSF; DataStorm: A Data Enabled System for End-to-End Disaster Planning and Response (NSF#1610282); FourCmodeling (690817) |
ISSN | 0306-4379 |
IsPeerReviewed | true |
IsScholarly | true |
Keywords | Low-rank embedding; Order selection; Tensor train decomposition |
Language | English |
ORCID | 0000-0002-7621-3753 0000-0003-0134-4115 |
PublicationCentury | 2000 |
PublicationDate | September 2022 |
PublicationDateYYYYMMDD | 2022-09-01 |
PublicationDecade | 2020 |
PublicationPlace | Oxford |
PublicationTitle | Information systems (Oxford) |
PublicationYear | 2022 |
Publisher | Elsevier Ltd; Elsevier Science Ltd |
SourceID | proquest crossref elsevier |
SourceType | Aggregation Database; Enrichment Source; Index Database; Publisher |
StartPage | 102047 |
SubjectTerms | Algorithms; Audio data; Data mining; Decomposition; Information systems; Low-rank embedding; Mathematical analysis; Multimedia; Order selection; Tensor train decomposition; Tensors |
Title | GTT: Leveraging data characteristics for guiding the tensor train decomposition |
URI | https://dx.doi.org/10.1016/j.is.2022.102047 https://www.proquest.com/docview/2689714368 |
Volume | 108 |