Decision Boundary to Improve the Sensitivity of Deep Neural Networks Models


Bibliographic Details
Published in: Business Intelligence, Vol. 449, pp. 50-60
Main Authors: Ouriha, Mohamed; El Habouz, Youssef; El Mansouri, Omar
Format: Book Chapter
Language: English
Published: Switzerland: Springer International Publishing AG, 2022
Series: Lecture Notes in Business Information Processing

Summary: In spite of their performance and relevance in various image classification fields, deep neural network classifiers encounter real difficulties in the face of minor input perturbations. In particular, the existence of adversarial examples exposes a major weakness of deep learning models in many areas, such as disease recognition. The aim of our paper is to improve the robustness of deep neural network models to small input perturbations, using standard training and adversarial training to maximize the distance between predicted instances and the decision boundary. We show the behavior of the decision boundary of deep neural networks during model training, the minimum distance of the input images from the decision boundary, and how this distance evolves over the course of training. The results show that the distance between the images and the decision boundary decreases during standard training, whereas adversarial training increases this distance, which improves the performance of our model. Our work presents a new solution to the sensitivity problem of deep neural networks. We found a very strong relationship between the efficiency of the deep neural network model and the training phase: the efficiency is created during training; it is not predetermined by the initialization or the architecture.
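The margin dynamics described in the summary can be sketched on a toy model. For a linear classifier the distance of a point to the decision boundary has a closed form, |w·x + b| / ||w||, and one-step FGSM (a standard adversarial-training method, used here for illustration and not necessarily the exact procedure of the chapter) perturbs each input toward the boundary. The data, model, and all function names below are illustrative assumptions, not code from the chapter:

```python
import numpy as np

rng = np.random.default_rng(0)

# Toy 2-D data: two Gaussian blobs with labels 0 and 1.
X = np.vstack([rng.normal(-2.0, 1.0, (50, 2)), rng.normal(2.0, 1.0, (50, 2))])
y = np.concatenate([np.zeros(50), np.ones(50)])

def sigmoid(z):
    return 1.0 / (1.0 + np.exp(-z))

def fgsm(X, y, w, b, eps):
    """One-step FGSM: move each input along the sign of the loss gradient w.r.t. x."""
    p = sigmoid(X @ w + b)
    grad_x = (p - y)[:, None] * w          # d(log-loss)/dx for logistic regression
    return X + eps * np.sign(grad_x)

def train(X, y, adversarial=False, eps=0.5, lr=0.1, steps=500):
    """Logistic regression by gradient descent, optionally on FGSM-perturbed inputs."""
    w, b = np.zeros(2), 0.0
    for _ in range(steps):
        Xt = fgsm(X, y, w, b, eps) if adversarial else X
        p = sigmoid(Xt @ w + b)
        w -= lr * Xt.T @ (p - y) / len(y)
        b -= lr * np.mean(p - y)
    return w, b

def margin(X, w, b):
    """Euclidean distance of each point to the linear decision boundary."""
    return np.abs(X @ w + b) / np.linalg.norm(w)

w_std, b_std = train(X, y, adversarial=False)
w_adv, b_adv = train(X, y, adversarial=True)
print("mean margin, standard training   :", margin(X, w_std, b_std).mean())
print("mean margin, adversarial training:", margin(X, w_adv, b_adv).mean())
```

For deep networks the boundary distance has no closed form and is typically estimated numerically, but the quantity being tracked during training is the same.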
ISBN: 9783031064579; 3031064577
ISSN: 1865-1348; 1865-1356
DOI: 10.1007/978-3-031-06458-6_4