Performance guarantees of transformed Schatten-1 regularization for exact low-rank matrix recovery
Published in | International Journal of Machine Learning and Cybernetics, Vol. 12, No. 12, pp. 3379–3395 |
Main Authors | Zhi Wang, Dong Hu, Xiaohu Luo, Wendong Wang, Jianjun Wang, Wu Chen |
Format | Journal Article |
Language | English |
Published | Berlin/Heidelberg: Springer Berlin Heidelberg (Springer Nature B.V.), 01.12.2021 |
Abstract | Low-rank matrix recovery aims to recover a matrix of minimum rank subject to a linear system constraint. It arises in various real-world applications, such as recommender systems, image processing, and deep learning. Inspired by compressive sensing, the rank minimization can be relaxed to nuclear norm minimization. However, such a method treats all singular values of the target matrix equally. To address this issue, the transformed Schatten-1 (TS1) penalty function was recently proposed and used to construct low-rank matrix recovery models. Unfortunately, existing methods for TS1-based models cannot provide both convergence accuracy and convergence speed. To alleviate these problems, this paper further investigates the basic properties of the TS1 penalty function, and we describe a novel algorithm, called ATS1PGA, that is highly efficient in solving low-rank matrix recovery problems at a convergence rate of O(1/N), where N denotes the iteration count. In addition, we theoretically prove that the original rank minimization problem can be equivalently transformed into the TS1 optimization problem under certain conditions. Finally, extensive experimental results on real image data sets show that our proposed algorithm outperforms state-of-the-art methods in both accuracy and efficiency. In particular, our proposed algorithm is about 30 times faster than the TS1 algorithm in solving low-rank matrix recovery problems. |
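For readers unfamiliar with the penalty the abstract refers to: the transformed Schatten-1 regularizer applies the transformed-L1 function rho_a(s) = (a+1)s/(a+s) to each singular value of the matrix, interpolating between the rank (as a → 0) and the nuclear norm (as a → ∞). A minimal NumPy sketch (function name and default `a` are our own illustration, not from the paper):

```python
import numpy as np

def ts1_penalty(X, a=1.0):
    """Transformed Schatten-1 (TS1) penalty of a matrix X:
    the sum of rho_a(s) = (a+1)*s / (a+s) over the singular values s.
    As a -> 0+ the penalty approaches rank(X); as a -> infinity it
    approaches the nuclear norm (the sum of the singular values)."""
    s = np.linalg.svd(X, compute_uv=False)
    return float(np.sum((a + 1.0) * s / (a + s)))

# Illustration on a rank-1 matrix: a small `a` pushes the penalty
# toward the rank, a large `a` toward the nuclear norm.
X = np.outer([1.0, 2.0], [3.0, 4.0])  # rank 1, nuclear norm = sqrt(125)
print(ts1_penalty(X, a=1e-8))         # close to rank(X) = 1
print(ts1_penalty(X, a=1e9))          # close to the nuclear norm, about 11.18
```

Unlike the nuclear norm, this penalty shrinks large singular values much less than small ones, which is exactly the unequal treatment of singular values the abstract motivates.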
Copyright | The Author(s), under exclusive licence to Springer-Verlag GmbH Germany, part of Springer Nature 2021 |
DOI | 10.1007/s13042-021-01361-1 |
Discipline | Engineering Sciences (General) |
EISSN | 1868-808X |
EndPage | 3395 |
GrantInformation | Fundamental Research Funds for the Central Universities (grant SWU120036); National Natural Science Foundation of China (grant 11901476) |
ISSN | 1868-8071 |
IsPeerReviewed | true |
IsScholarly | true |
Issue | 12 |
Keywords | Nonconvex model; Equivalence; Transformed Schatten-1 penalty function; Low-rank matrix recovery |
ORCID | 0000-0002-2167-830X |
PageCount | 17 |
PublicationTitleAbbrev | Int. J. Mach. Learn. & Cyber |
Snippet | Low-rank matrix recovery aims to recover a matrix of minimum rank subject to a linear system constraint. It arises in various real-world applications, such...
SourceID | proquest; crossref; springer
SourceType | Aggregation Database; Enrichment Source; Index Database; Publisher
StartPage | 3379 |
SubjectTerms | Algorithms; Artificial Intelligence; Complex Systems; Computational Intelligence; Control; Convergence; Engineering; Guarantees; Image processing; Mathematical analysis; Mechatronics; Optimization; Original Article; Pattern Recognition; Penalty function; Recommender systems; Recovery; Regularization; Regularization methods; Robotics; Systems Biology
Title | Performance guarantees of transformed Schatten-1 regularization for exact low-rank matrix recovery |
URI | https://link.springer.com/article/10.1007/s13042-021-01361-1 https://www.proquest.com/docview/2920184378 |
Volume | 12 |
linkProvider | Springer Nature |
openUrl | ctx_ver=Z39.88-2004&ctx_enc=info%3Aofi%2Fenc%3AUTF-8&rfr_id=info%3Asid%2Fsummon.serialssolutions.com&rft_val_fmt=info%3Aofi%2Ffmt%3Akev%3Amtx%3Ajournal&rft.genre=article&rft.atitle=Performance+guarantees+of+transformed+Schatten-1+regularization+for+exact+low-rank+matrix+recovery&rft.jtitle=International+journal+of+machine+learning+and+cybernetics&rft.au=Wang%2C+Zhi&rft.au=Hu%2C+Dong&rft.au=Luo%2C+Xiaohu&rft.au=Wang%2C+Wendong&rft.date=2021-12-01&rft.pub=Springer+Berlin+Heidelberg&rft.issn=1868-8071&rft.eissn=1868-808X&rft.volume=12&rft.issue=12&rft.spage=3379&rft.epage=3395&rft_id=info:doi/10.1007%2Fs13042-021-01361-1&rft.externalDocID=10_1007_s13042_021_01361_1 |