Zeroth-order algorithms for nonconvex–strongly-concave minimax problems with improved complexities
Published in | Journal of Global Optimization, Vol. 87, No. 2-4, pp. 709-740
Main Authors | Wang, Zhongruo; Balasubramanian, Krishnakumar; Ma, Shiqian; Razaviyayn, Meisam
Format | Journal Article
Language | English
Published | New York: Springer US, 01.11.2023
Online Access | Get full text
Abstract | In this paper, we study zeroth-order algorithms for minimax optimization problems that are nonconvex in one variable and strongly-concave in the other variable. Such minimax optimization problems have attracted significant attention lately due to their applications in modern machine learning tasks. We first consider a deterministic version of the problem. We design and analyze the Zeroth-Order Gradient Descent Ascent (ZO-GDA) algorithm, and provide improved results compared to existing works, in terms of oracle complexity. We also propose the Zeroth-Order Gradient Descent Multi-Step Ascent (ZO-GDMSA) algorithm that significantly improves the oracle complexity of ZO-GDA. We then consider stochastic versions of ZO-GDA and ZO-GDMSA, to handle stochastic nonconvex minimax problems. For this case, we provide oracle complexity results under two assumptions on the stochastic gradient: (i) the uniformly bounded variance assumption, which is common in traditional stochastic optimization, and (ii) the Strong Growth Condition (SGC), which has been known to be satisfied by modern over-parameterized machine learning models. We establish that under the SGC assumption, the complexities of the stochastic algorithms match that of deterministic algorithms. Numerical experiments are presented to support our theoretical results. |
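The abstract describes zeroth-order gradient descent ascent at a high level. As a rough illustration only, the sketch below is not the authors' ZO-GDA or ZO-GDMSA method; the estimator parameters, step sizes, and toy objective are assumptions chosen for readability. It approximates gradients from function values with a Gaussian-smoothing estimator and alternates a descent step in x with an ascent step in y.

```python
import numpy as np

def zo_grad(f, z, mu=1e-4, n_samples=20, rng=None):
    """Two-point Gaussian-smoothing gradient estimate of f at z,
    using only function evaluations (a standard zeroth-order estimator)."""
    rng = np.random.default_rng() if rng is None else rng
    g = np.zeros_like(z)
    for _ in range(n_samples):
        u = rng.standard_normal(z.shape)
        g += (f(z + mu * u) - f(z)) / mu * u
    return g / n_samples

def zo_gda(f, x0, y0, eta_x=1e-2, eta_y=1e-1, iters=500):
    """Illustrative zeroth-order gradient descent ascent on f(x, y):
    descend in the minimization variable x, ascend in the maximization variable y."""
    x, y = x0.astype(float).copy(), y0.astype(float).copy()
    for _ in range(iters):
        gx = zo_grad(lambda xx: f(xx, y), x)  # estimate grad_x f(x, y)
        gy = zo_grad(lambda yy: f(x, yy), y)  # estimate grad_y f(x, y)
        x = x - eta_x * gx
        y = y + eta_y * gy
    return x, y

# Toy nonconvex-strongly-concave objective: f(x, y) = sin(x) * y - 0.5 * y^2
f = lambda x, y: float(np.sin(x[0]) * y[0] - 0.5 * y[0] ** 2)
x_sol, y_sol = zo_gda(f, np.array([1.0]), np.array([0.0]))
```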
Audience | Academic |
Author | Wang, Zhongruo (ORCID 0000-0001-9049-4291), Department of Mathematics, University of California; Balasubramanian, Krishnakumar, Department of Statistics, University of California; Ma, Shiqian (ORCID 0000-0003-1967-1069; sqma@ucdavis.edu), Department of Mathematics, University of California; Razaviyayn, Meisam, Department of Industrial and Systems Engineering, University of Southern California
ContentType | Journal Article |
Copyright | The Author(s), under exclusive licence to Springer Science+Business Media, LLC, part of Springer Nature 2022; COPYRIGHT 2023 Springer
DOI | 10.1007/s10898-022-01160-0 |
Discipline | Engineering; Mathematics; Sciences (General); Computer Science
EISSN | 1573-2916 |
EndPage | 740 |
GrantInformation | National Science Foundation, grants DMS-1953210 and CCF-2007797 (http://dx.doi.org/10.13039/100000001)
ISSN | 0925-5001 |
IsPeerReviewed | true |
IsScholarly | true |
Issue | 2-4 |
Keywords | Stochastic algorithms; Oracle complexity; Gradient descent ascent; Minimax problem; Zeroth-order algorithms
ORCID | 0000-0001-9049-4291 0000-0003-1967-1069 |
PageCount | 32 |
PublicationDate | 2023-11-01
PublicationPlace | New York |
PublicationSubtitle | An International Journal Dealing with Theoretical and Computational Aspects of Seeking Global Optima and Their Applications in Science, Management and Engineering |
PublicationTitle | Journal of global optimization |
PublicationTitleAbbrev | J Glob Optim |
PublicationYear | 2023 |
Publisher | Springer US; Springer
StartPage | 709 |
SubjectTerms | Algorithms; Comparative analysis; Computer Science; Machine learning; Mathematics; Mathematics and Statistics; Operations Research/Decision Theory; Optimization; Real Functions
Title | Zeroth-order algorithms for nonconvex–strongly-concave minimax problems with improved complexities |
URI | https://link.springer.com/article/10.1007/s10898-022-01160-0 |
Volume | 87 |