Deep Learning-Based Intrusion Detection With Adversaries

Bibliographic Details
Published in: IEEE Access, Vol. 6, pp. 38367-38384
Main Author: Wang, Zheng
Format: Journal Article
Language: English
Published: United States: IEEE, 01.01.2018
The Institute of Electrical and Electronics Engineers, Inc. (IEEE)

Abstract: Deep neural networks have demonstrated their effectiveness in most machine learning tasks, intrusion detection included. Unfortunately, recent research found that deep neural networks are vulnerable to adversarial examples in the image classification domain, i.e., they leave some opportunities for an attacker to fool the networks into misclassification by introducing imperceptible changes to the original pixels in an image. This vulnerability raises concerns about applying deep neural networks in security-critical areas, such as intrusion detection. In this paper, we investigate the performance of state-of-the-art attack algorithms against deep learning-based intrusion detection on the NSL-KDD data set. The vulnerabilities of the neural networks employed by the intrusion detection systems are experimentally validated. The roles of individual features in generating adversarial examples are explored. Based on our findings, the feasibility and applicability of the attack methodologies are discussed.
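The attack algorithms the abstract refers to include gradient-based methods such as the fast gradient sign method (FGSM). As a minimal, hypothetical sketch (the paper's experiments used TensorFlow on full networks; the toy logistic "detector", its weights, and the feature values below are invented for illustration only), an FGSM-style perturbation of a normalized network record might look like:

```python
import numpy as np

def fgsm_perturb(x, grad, eps=0.1, mask=None):
    """One FGSM step: x_adv = clip(x + eps * sign(dL/dx)).
    `mask` optionally restricts the perturbation to a subset of features,
    echoing the paper's question of which individual features matter."""
    step = eps * np.sign(grad)
    if mask is not None:
        step = step * mask
    return np.clip(x + step, 0.0, 1.0)  # keep features in normalized range

# Toy logistic "detector": p(attack) = sigmoid(w . x + b)  (hypothetical weights)
sigmoid = lambda z: 1.0 / (1.0 + np.exp(-z))
w = np.array([1.5, -2.0, 0.5])
b = 0.1
x = np.array([0.2, 0.8, 0.5])  # one normalized, NSL-KDD-style record

# For cross-entropy loss with true label y = 1, dL/dx = (p - y) * w
p = sigmoid(w @ x + b)
grad = (p - 1.0) * w
x_adv = fgsm_perturb(x, grad, eps=0.2)
# The detector's confidence that x_adv is an attack drops below p
```

Passing a 0/1 `mask` that zeroes out most coordinates confines the perturbation to a few chosen features, which is one simple way to probe each feature's role in generating adversarial examples.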
Author Wang, Zheng
Author_xml – sequence: 1
  givenname: Zheng
  orcidid: 0000-0003-2744-9345
  surname: Wang
  fullname: Wang, Zheng
  email: zhengwang98@gmail.com
  organization: National Institute of Standards and Technology, Gaithersburg, MD, USA
BackLink https://www.ncbi.nlm.nih.gov/pubmed/38882674 (View this record in MEDLINE/PubMed)
CODEN IAECCG
ContentType Journal Article
Copyright Copyright The Institute of Electrical and Electronics Engineers, Inc. (IEEE) 2018
Copyright_xml – notice: Copyright The Institute of Electrical and Electronics Engineers, Inc. (IEEE) 2018
DOI 10.1109/ACCESS.2018.2854599
DatabaseName IEEE Xplore (IEEE)
IEEE Xplore Open Access (Activated by CARLI)
IEEE All-Society Periodicals Package (ASPP) 1998–Present
IEEE Electronic Library (IEL)
CrossRef
PubMed
Computer and Information Systems Abstracts
Electronics & Communications Abstracts
Engineered Materials Abstracts
METADEX
Technology Research Database
Materials Research Database
ProQuest Computer Science Collection
Advanced Technologies Database with Aerospace
Computer and Information Systems Abstracts – Academic
Computer and Information Systems Abstracts Professional
MEDLINE - Academic
PubMed Central (Full Participant titles)
DOAJ (Directory of Open Access Journals)
Database_xml – sequence: 1
  dbid: DOA
  name: DOAJ Directory of Open Access Journals
  url: https://www.doaj.org/
  sourceTypes: Open Website
– sequence: 2
  dbid: NPM
  name: PubMed
  url: http://www.ncbi.nlm.nih.gov/entrez/query.fcgi?db=PubMed
  sourceTypes: Index Database
– sequence: 3
  dbid: RIE
  name: IEEE Electronic Library (IEL)
  url: https://ieeexplore.ieee.org/
  sourceTypes: Publisher
DeliveryMethod fulltext_linktorsrc
Discipline Engineering
EISSN 2169-3536
EndPage 38384
ExternalDocumentID oai_doaj_org_article_b0246d706d6d4758acaf70eadd19e877
PMC11177870
38882674
10_1109_ACCESS_2018_2854599
8408779
Genre orig-research
Journal Article
GrantInformation_xml – fundername: Intramural NIST DOC
  grantid: 9999-NIST
ISSN 2169-3536
IsDoiOpenAccess true
IsOpenAccess true
IsPeerReviewed true
IsScholarly true
Keywords NSL-KDD dataset
intrusion detection
deep neural networks
adversarial examples
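NSL-KDD records mix numeric and categorical features (e.g., protocol type and service), so deep-learning IDS pipelines typically one-hot encode the categorical fields and normalize the numeric ones before training. A minimal sketch, with hypothetical records and deliberately truncated category lists (the real data set has 41 features and many more service values):

```python
import numpy as np

# Hypothetical mini-batch of NSL-KDD-style records:
# (protocol_type, service, then two already-normalized numeric features)
records = [
    ("tcp", "http", 0.54, 0.10),
    ("udp", "dns",  0.02, 0.91),
]
protocols = ["tcp", "udp", "icmp"]   # truncated illustration only
services = ["http", "dns", "ftp"]

def encode(rec):
    """One-hot encode the categorical fields, keep numeric fields as-is."""
    proto, svc, *numeric = rec
    one_hot = [1.0 if proto == p else 0.0 for p in protocols]
    one_hot += [1.0 if svc == s else 0.0 for s in services]
    return np.array(one_hot + numeric)

X = np.stack([encode(r) for r in records])  # shape: (2, 8)
```

One-hot encoding matters for the adversarial setting too: a gradient-based perturbation applied to a one-hot block produces fractional values that do not correspond to any legal categorical value, which is one reason attacks designed for images do not transfer directly to network records.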
Language English
License https://ieeexplore.ieee.org/Xplorehelp/downloads/license-information/OAPA.html
LinkModel DirectLink
ORCID 0000-0003-2744-9345
OpenAccessLink https://ieeexplore.ieee.org/document/8408779
PMID 38882674
PQID 2455922804
PQPubID 4845423
PageCount 18
PublicationCentury 2000
PublicationDate 2018-01-01
PublicationDateYYYYMMDD 2018-01-01
PublicationDate_xml – month: 01
  year: 2018
  text: 2018-01-01
  day: 01
PublicationDecade 2010
PublicationPlace United States
PublicationPlace_xml – name: United States
– name: Piscataway
PublicationTitle IEEE access
PublicationTitleAbbrev Access
PublicationTitleAlternate IEEE Access
PublicationYear 2018
Publisher IEEE
The Institute of Electrical and Electronics Engineers, Inc. (IEEE)
Publisher_xml – name: IEEE
– name: The Institute of Electrical and Electronics Engineers, Inc. (IEEE)
SourceID doaj
pubmedcentral
proquest
pubmed
crossref
ieee
SourceType Open Website
Open Access Repository
Aggregation Database
Index Database
Enrichment Source
Publisher
StartPage 38367
SubjectTerms Algorithms
Artificial neural networks
classification algorithms
Cognitive tasks
data security
Deep learning
Feature extraction
Image classification
Intrusion detection
Intrusion detection systems
Machine learning
Measurement
Neural networks
Perturbation methods
Security management
Task analysis
Title Deep Learning-Based Intrusion Detection With Adversaries
URI https://ieeexplore.ieee.org/document/8408779
https://www.ncbi.nlm.nih.gov/pubmed/38882674
https://www.proquest.com/docview/2455922804
https://www.proquest.com/docview/3069174606
https://pubmed.ncbi.nlm.nih.gov/PMC11177870
https://doaj.org/article/b0246d706d6d4758acaf70eadd19e877
Volume 6