Lower complexity bounds of first-order methods for convex-concave bilinear saddle-point problems
Published in | Mathematical Programming, Vol. 185, no. 1-2, pp. 1-35
Main Authors | Ouyang, Yuyuan; Xu, Yangyang
Format | Journal Article
Language | English
Published | Berlin/Heidelberg: Springer Berlin Heidelberg (Springer Nature B.V.), 01.01.2021
ISSN | 0025-5610
EISSN | 1436-4646
DOI | 10.1007/s10107-019-01420-0
Online Access | https://link.springer.com/article/10.1007/s10107-019-01420-0
Abstract | On solving a convex-concave bilinear saddle-point problem (SPP), there have been many works studying the complexity of first-order methods. Those results are all upper complexity bounds, which determine at most how many iterations are needed to guarantee a solution of a desired accuracy. In this paper, we pursue the opposite direction and derive lower complexity bounds of first-order methods on large-scale SPPs. Our results apply to methods whose iterates lie in the linear span of past first-order information, as well as to more general methods that produce their iterates in an arbitrary manner based on first-order information. We first work on affinely constrained smooth convex optimization, a special case of SPP. Different from the gradient method on unconstrained problems, we show that first-order methods on affinely constrained problems generally cannot be accelerated from the known convergence rate O(1/t) to O(1/t^2); in addition, O(1/t) is optimal for convex problems. Moreover, we prove that for strongly convex problems, O(1/t^2) is the best possible convergence rate, whereas gradient methods are known to achieve linear convergence on unconstrained problems. We then extend these results to general SPPs. It turns out that our lower complexity bounds match several established upper complexity bounds in the literature; they are therefore tight and indicate the optimality of several existing first-order methods.
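For orientation, the problem class the abstract refers to can be written out explicitly. The LaTeX sketch below states a standard bilinear saddle-point formulation, its affinely constrained special case, and the linear-span condition on iterates mentioned in the abstract; the symbols f, g, A, b, X, Y are generic placeholders assumed here for illustration rather than notation taken from the paper.

```latex
% A minimal sketch of the problem class named in the abstract (generic notation,
% assumed for illustration). f and g are convex, A is a linear map.
%
% Bilinear saddle-point problem (SPP):
\min_{x \in X} \max_{y \in Y} \; f(x) + \langle Ax, y \rangle - g(y)

% Affinely constrained smooth convex optimization, the special case treated first
% in the paper; its Lagrangian has the bilinear SPP form above with
% g(y) = \langle b, y \rangle:
\min_{x \in X} \; f(x) \quad \text{s.t.} \quad Ax = b

% "Linear span" first-order methods mentioned in the abstract: each new iterate
% stays in the span of previously seen first-order information, e.g.
x^{(t+1)} \in x^{(0)} + \operatorname{span}\!\left\{ \nabla f(x^{(0)}), \dots, \nabla f(x^{(t)}),\; A^{\top} y^{(0)}, \dots, A^{\top} y^{(t)} \right\}
```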
Author | Ouyang, Yuyuan (School of Mathematical and Statistical Sciences, Clemson University); Xu, Yangyang (Department of Mathematical Sciences, Rensselaer Polytechnic Institute; ORCID 0000-0002-4163-3723; xuy21@rpi.edu)
Copyright | Springer-Verlag GmbH Germany, part of Springer Nature and Mathematical Optimization Society 2019. Mathematical Programming is a copyright of Springer (2019). All Rights Reserved.
Discipline | Engineering; Mathematics
IsPeerReviewed | true |
IsScholarly | true |
Keywords | Information-based complexity; Convex optimization; Lower complexity bound; First-order methods; Saddle point problems; MSC codes: 49M37, 68Q25, 90C06, 90C25, 90C60
PublicationSubtitle | A Publication of the Mathematical Optimization Society |
SubjectTerms | Calculus of Variations and Optimal Control; Optimization; Combinatorics; Complexity; Convergence; Convex analysis; Convexity; Full Length Paper; Mathematical and Computational Physics; Mathematical Methods in Physics; Mathematical programming; Mathematics; Mathematics and Statistics; Mathematics of Computing; Methods; Numerical Analysis; Production methods; Saddle points; Theoretical