Feature-filter: Detecting adversarial examples by filtering out recessive features
Published in | Applied Soft Computing, Vol. 124, Article 109027 |
Main Authors | Liu, Hui; Zhao, Bo; Ji, Minzhi; Peng, Yuefeng; Guo, Jiabao; Liu, Peng |
Format | Journal Article |
Language | English |
Published | Elsevier B.V., 01.07.2022 |
Abstract | Deep neural networks (DNNs) have achieved state-of-the-art performance in numerous tasks involving complex analysis of raw data, such as self-driving systems and biometric recognition systems. However, recent works have shown that DNNs are under threat from adversarial example attacks. The adversary can easily change the outputs of DNNs by adding small, well-designed perturbations to inputs. Adversarial example detection is therefore fundamental for robust DNN-based services. From a human-centric perspective, this paper divides image features into dominant features, which are comprehensible to humans, and recessive features, which are incomprehensible to humans yet exploited by DNNs. Based on this perspective, the paper proposes a new viewpoint that imperceptible adversarial examples are the product of recessive features misleading neural networks, and that the adversarial attack enriches these recessive features. The imperceptibility of the adversarial examples indicates that the perturbations enrich recessive features while hardly affecting dominant features. Therefore, adversarial examples are sensitive to filtering out recessive features, while benign examples are immune to such operations. Inspired by this idea, we propose a label-only adversarial detector referred to as a feature-filter. The feature-filter utilizes the discrete cosine transform (DCT) to approximately separate recessive features from dominant features and obtain a filtered image. A comprehensive user study demonstrates that the DCT-based filter can reliably filter out recessive features from the test image. By comparing only the DNN’s prediction labels on the input and its filtered version, the feature-filter can detect imperceptible adversarial examples in real time with high accuracy and few false positives. |
Highlights | • We reveal the reason for the existence of imperceptible adversarial examples. • We propose a label-only approach to detect imperceptible adversarial examples. • We design a DCT-based filter to reliably filter out recessive features. |
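The detection rule described in the abstract reduces to a frequency-domain filter plus a label comparison. The Python sketch below is an illustrative approximation, not the authors' released implementation: the square low-pass mask, the `keep_ratio` cutoff, and the `model_predict` callable are assumptions introduced here for illustration.

```python
import numpy as np
from scipy.fft import dctn, idctn  # 2-D type-II DCT and its inverse

def dct_lowpass_filter(image: np.ndarray, keep_ratio: float = 0.25) -> np.ndarray:
    """Approximate the recessive-feature filter: keep the low-frequency DCT
    coefficients (dominant, human-comprehensible content) and zero out the
    high frequencies, where imperceptible perturbations concentrate.

    `image` is assumed to be an H x W x C uint8 array; `keep_ratio` is a
    hypothetical cutoff, not a value taken from the paper.
    """
    out = np.zeros(image.shape, dtype=np.float64)
    for c in range(image.shape[2]):  # filter each channel independently
        coeffs = dctn(image[:, :, c].astype(np.float64), norm="ortho")
        h, w = coeffs.shape
        mask = np.zeros((h, w))
        # Top-left corner of the DCT plane holds the low-frequency content.
        mask[: int(h * keep_ratio), : int(w * keep_ratio)] = 1.0
        out[:, :, c] = idctn(coeffs * mask, norm="ortho")
    return np.clip(out, 0, 255).astype(image.dtype)

def is_adversarial(model_predict, image: np.ndarray) -> bool:
    """Label-only detection: flag the input when the classifier's label
    changes after recessive features are filtered out. `model_predict` is
    any callable mapping an image to a predicted class label.
    """
    return model_predict(image) != model_predict(dct_lowpass_filter(image))
```

A benign image keeps its label because its dominant, low-frequency content survives the filter, whereas an imperceptible adversarial example typically reverts to the original class once the high-frequency perturbation is removed. Only two forward passes and a label comparison are required, which is why the detector can run in real time.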
ArticleNumber | 109027 |
Authors and Affiliations |
1. Hui Liu (ORCID: 0000-0003-1345-5736), School of Cyber Science and Engineering, Wuhan University, Wuhan, 430072, China
2. Bo Zhao (ORCID: 0000-0003-4307-9380, zhaobo@whu.edu.cn), School of Cyber Science and Engineering, Wuhan University, Wuhan, 430072, China
3. Minzhi Ji, School of Cyber Science and Engineering, Wuhan University, Wuhan, 430072, China
4. Yuefeng Peng, School of Cyber Science and Engineering, Wuhan University, Wuhan, 430072, China
5. Jiabao Guo, School of Cyber Science and Engineering, Wuhan University, Wuhan, 430072, China
6. Peng Liu (pliu@ist.psu.edu), College of Information Sciences and Technology, Pennsylvania State University, PA, 16801, United States |
Copyright | 2022 Elsevier B.V. |
DOI | 10.1016/j.asoc.2022.109027 |
Discipline | Computer Science |
EISSN | 1872-9681 |
ISSN | 1568-4946 |
IsPeerReviewed | true |
IsScholarly | true |
Keywords | Adversarial example; Dominant features; Deep neural networks; Recessive features; Discrete cosine transform |
URI | https://dx.doi.org/10.1016/j.asoc.2022.109027 |