A game-based approximate verification of deep neural networks with provable guarantees

Published in: Theoretical Computer Science, Vol. 807, pp. 298–329
Main Authors: Wu, Min; Wicker, Matthew; Ruan, Wenjie; Huang, Xiaowei; Kwiatkowska, Marta
Format: Journal Article
Language: English
Published: Elsevier B.V., 06.02.2020

Abstract: Despite the improved accuracy of deep neural networks, the discovery of adversarial examples has raised serious safety concerns. In this paper, we study two variants of pointwise robustness: the maximum safe radius problem, which for a given input sample computes the minimum distance to an adversarial example, and the feature robustness problem, which aims to quantify the robustness of individual features to adversarial perturbations. We demonstrate that, under the assumption of Lipschitz continuity, both problems can be approximated using finite optimisation by discretising the input space, and the approximation has provable guarantees, i.e., the error is bounded. We then show that the resulting optimisation problems can be reduced to the solution of two-player turn-based games, where the first player selects features and the second perturbs the image within the feature. While the second player aims to minimise the distance to an adversarial example, depending on the optimisation objective the first player can be cooperative or competitive. We employ an anytime approach to solve the games, in the sense of approximating the value of a game by monotonically improving its upper and lower bounds. The Monte Carlo tree search algorithm is applied to compute upper bounds for both games, and the Admissible A* and the Alpha-Beta Pruning algorithms are, respectively, used to compute lower bounds for the maximum safe radius and feature robustness games. When working on the upper bound of the maximum safe radius problem, our tool demonstrates competitive performance against existing adversarial example crafting algorithms. Furthermore, we show how our framework can be deployed to evaluate pointwise robustness of neural networks in safety-critical applications such as traffic sign recognition in self-driving cars.
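The finite-optimisation idea in the abstract — discretising the input space and searching the resulting grid for the nearest adversarial example — can be illustrated with a toy sketch. Everything here is invented for illustration (the 2-input linear "classifier", the function names, the brute-force enumeration); the paper itself organises this search as a two-player game solved with Monte Carlo tree search (upper bounds) and Admissible A* / Alpha-Beta pruning (lower bounds) rather than exhaustive enumeration:

```python
import itertools

def classify(x):
    """Hypothetical stand-in for a trained network:
    label 1 iff x[0] + 0.5*x[1] > 1."""
    return int(x[0] + 0.5 * x[1] > 1.0)

def grid_msr(x0, tau=0.25, radius=1.0):
    """Approximate the maximum safe radius (L_inf) of input x0 by
    enumerating perturbations on a tau-spaced grid within `radius`.
    Under Lipschitz continuity, the true maximum safe radius differs
    from this grid estimate by a bounded error that shrinks with tau."""
    label = classify(x0)
    steps = int(radius / tau)
    best = None
    for d in itertools.product(range(-steps, steps + 1), repeat=len(x0)):
        x = [xi + tau * di for xi, di in zip(x0, d)]
        if classify(x) != label:  # this grid point is adversarial
            dist = tau * max(abs(di) for di in d)  # L_inf distance to x0
            best = dist if best is None else min(best, dist)
    return best  # None: no adversarial example found on this grid
```

For x0 = [1.0, 0.5] (classified 1), the nearest grid adversarial example sits at L_inf distance 0.25; refining the grid to tau = 0.125 leaves the estimate at 0.25, consistent with the guarantee that finer discretisation only tightens the approximation.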
Authors:
– Min Wu (min.wu@cs.ox.ac.uk), Department of Computer Science, University of Oxford, Oxford OX1 3QD, UK
– Matthew Wicker (matthew.wicker@cs.ox.ac.uk), Department of Computer Science, University of Georgia, Boyd Research Center, D.W. Brooks Drive, Athens, GA 30602-7415, USA
– Wenjie Ruan (wenjie.ruan@cs.ox.ac.uk), Department of Computer Science, University of Oxford, Oxford OX1 3QD, UK
– Xiaowei Huang (xiaowei.huang@liverpool.ac.uk), Department of Computer Science, University of Liverpool, Foundation Building, Brownlow Hill, Liverpool L69 7ZX, UK
– Marta Kwiatkowska (marta.kwiatkowska@cs.ox.ac.uk), Department of Computer Science, University of Oxford, Oxford OX1 3QD, UK
Copyright: 2019 Elsevier B.V.
DOI: 10.1016/j.tcs.2019.05.046
EISSN: 1879-2294
GroupedDBID --K
--M
-~X
.DC
.~1
0R~
123
1B1
1RT
1~.
1~5
4.4
457
4G.
5VS
7-5
71M
8P~
9JN
AABNK
AACTN
AAEDW
AAFTH
AAIAV
AAIKJ
AAKOC
AALRI
AAOAW
AAQFI
AAXUO
AAYFN
ABAOU
ABBOA
ABJNI
ABMAC
ABVKL
ABYKQ
ACAZW
ACDAQ
ACGFS
ACRLP
ACZNC
ADBBV
ADEZE
AEBSH
AEKER
AENEX
AEXQZ
AFKWA
AFTJW
AGUBO
AGYEJ
AHHHB
AHZHX
AIALX
AIEXJ
AIKHN
AITUG
AJOXV
ALMA_UNASSIGNED_HOLDINGS
AMFUW
AMRAJ
AOUOD
ARUGR
BKOJK
BLXMC
CS3
DU5
EBS
EFJIC
EFLBG
EO8
EO9
EP2
EP3
F5P
FDB
FEDTE
FIRID
FNPLU
FYGXN
G-Q
GBLVA
GBOLZ
HVGLF
IHE
IXB
J1W
KOM
LG9
M26
M41
MHUIS
MO0
N9A
O-L
O9-
OAUVE
OK1
OZT
P-8
P-9
P2P
PC.
Q38
ROL
RPZ
SCC
SDF
SDG
SES
SPC
SPCBC
SSV
SSW
SSZ
T5K
TN5
WH7
YNT
ZMT
~G-
29Q
AAEDT
AAQXK
AATTM
AAXKI
AAYWO
AAYXX
ABDPE
ABEFU
ABFNM
ABWVN
ABXDB
ACNNM
ACRPL
ACVFH
ADCNI
ADMUD
ADNMO
ADVLN
AEIPS
AEUPX
AFJKZ
AFPUW
AFXIZ
AGCQF
AGHFR
AGQPQ
AGRNS
AIGII
AIIUN
AKBMS
AKRWK
AKYEP
ANKPU
APXCP
ASPBG
AVWKF
AXJTR
AZFZN
BNPGV
CITATION
EJD
FGOYB
G-2
HZ~
R2-
RIG
SEW
SSH
TAE
WUQ
XJT
ZY4
ID FETCH-LOGICAL-c340t-c4cd28941765674a88f0347ef75a48c29e2ba0f8dbaf9b0df1caf6050e91588f3
IEDL.DBID IXB
ISSN: 0304-3975
Keywords: Two-player game; Adversarial examples; Deep neural networks; Automated verification
Page Count: 32