Bridging machine learning and cryptography in defence against adversarial attacks
In the last decade, deep learning algorithms have become very popular thanks to the performance they achieve in many machine learning and computer vision tasks. However, most deep learning architectures are vulnerable to so-called adversarial examples. This calls into question the security of deep neural n...
Main Authors | Taran, Olga; Rezaeifar, Shideh; Voloshynovskiy, Slava |
---|---|
Format | Journal Article |
Language | English |
Published | 05.09.2018 |
Subjects | Computer Science - Cryptography and Security; Computer Science - Learning; Statistics - Machine Learning |
Online Access | https://arxiv.org/abs/1809.01715 |
Abstract | In the last decade, deep learning algorithms have become very popular thanks
to the performance they achieve in many machine learning and computer vision tasks.
However, most deep learning architectures are vulnerable to so-called
adversarial examples. This calls into question the security of deep neural networks (DNN)
for many security- and trust-sensitive domains. The majority of the
existing adversarial attacks are based on the differentiability of the DNN cost
function. Defence strategies are mostly based on machine learning and signal
processing principles that either try to detect and reject or to filter out the
adversarial perturbations, and they completely neglect the classical cryptographic
component in the defence. In this work, we propose a new defence mechanism
based on Kerckhoffs's second cryptographic principle, which states that the
defence and classification algorithms are supposed to be known, but not the key.
To be compliant with the assumption that the attacker does not have access to
the secret key, we primarily focus on a gray-box scenario and do not
address a white-box one. More specifically, we assume that the attacker does
not have direct access to the secret block, but (a) he fully knows the
system architecture, (b) he has access to the data used for training and
testing, and (c) he can observe the output of the classifier for each given
input. We show empirically that our system is effective against the most
well-known state-of-the-art attacks in black-box and gray-box scenarios. |
---|---|
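The abstract describes a keyed "secret block" placed in front of an otherwise public classifier, in the spirit of Kerckhoffs's principle: the attacker knows the architecture and the training data, but not the key. The sketch below is a minimal illustration of that pipeline structure, not the paper's actual construction: it assumes a key-seeded random permutation of the input features as the secret block and a trivial nearest-centroid classifier standing in for the DNN; the class name `KeyedDefence` and all parameters are hypothetical.

```python
# Minimal sketch of a key-based secret block in front of a known classifier.
# Assumptions (not from the paper): the secret block is a key-seeded feature
# permutation, and a nearest-centroid classifier stands in for the DNN.
import numpy as np


class KeyedDefence:
    def __init__(self, secret_key: int, n_features: int):
        # The secret key seeds a fixed random permutation of the input
        # features. An attacker who knows this code but not the key cannot
        # reproduce the transform or differentiate through it.
        rng = np.random.default_rng(secret_key)
        self.permutation = rng.permutation(n_features)
        self.centroids = None
        self.classes = None

    def _transform(self, X: np.ndarray) -> np.ndarray:
        # Key-dependent permutation applied to each flattened sample.
        return X[:, self.permutation]

    def fit(self, X: np.ndarray, y: np.ndarray) -> "KeyedDefence":
        # Train the (public) classifier on the secretly transformed data.
        Z = self._transform(X)
        self.classes = np.unique(y)
        self.centroids = np.stack([Z[y == c].mean(axis=0) for c in self.classes])
        return self

    def predict(self, X: np.ndarray) -> np.ndarray:
        # A gray-box attacker may query this method and observe the labels,
        # but never sees the intermediate (permuted) representation.
        Z = self._transform(X)
        dists = np.linalg.norm(Z[:, None, :] - self.centroids[None, :, :], axis=2)
        return self.classes[np.argmin(dists, axis=1)]


if __name__ == "__main__":
    rng = np.random.default_rng(0)
    # Two Gaussian blobs stand in for the public training data.
    X = np.vstack([rng.normal(0.0, 1.0, (100, 16)), rng.normal(3.0, 1.0, (100, 16))])
    y = np.array([0] * 100 + [1] * 100)
    model = KeyedDefence(secret_key=42, n_features=16).fit(X, y)
    print(model.predict(X[:5]), model.predict(X[-5:]))
```

Under the gray-box threat model of the abstract, an attacker can call `predict` and read the labels, and can even retrain the same architecture on the same data, but without the key he cannot reconstruct the secret block or back-propagate gradients through it.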
Author | Taran, Olga; Rezaeifar, Shideh; Voloshynovskiy, Slava |
ContentType | Journal Article |
Copyright | http://arxiv.org/licenses/nonexclusive-distrib/1.0 |
DOI | 10.48550/arxiv.1809.01715 |
DatabaseName | arXiv Computer Science; arXiv Statistics; arXiv.org |
ExternalDocumentID | 1809_01715 |
IsDoiOpenAccess | true |
IsOpenAccess | true |
IsPeerReviewed | false |
IsScholarly | false |
Language | English |
OpenAccessLink | https://arxiv.org/abs/1809.01715 |
ParticipantIDs | arxiv_primary_1809_01715 |
PublicationCentury | 2000 |
PublicationDate | 2018-09-05 |
PublicationDecade | 2010 |
PublicationYear | 2018 |
SecondaryResourceType | preprint |
SourceID | arxiv |
SourceType | Open Access Repository |
SubjectTerms | Computer Science - Cryptography and Security; Computer Science - Learning; Statistics - Machine Learning |
Title | Bridging machine learning and cryptography in defence against adversarial attacks |
URI | https://arxiv.org/abs/1809.01715 |
linkProvider | Cornell University |