ResNet interpretation methods applied to the classification of foliar diseases in sunflower

Bibliographic Details
Published in: Journal of Agriculture and Food Research, Vol. 9, p. 100323
Main Authors: Dawod, Rodica Gabriela; Dobre, Ciprian
Format: Journal Article
Language: English
Published: Elsevier B.V., 01.09.2022

Summary: Remarkable progress in the identification of foliar diseases from plant images has been achieved through the introduction of convolutional neural networks (CNNs), but most studies are based on images taken in laboratory conditions, on a single-color background and under controlled lighting. In real life, farmers take pictures with many natural disturbances, such as shadows, varying backgrounds, and the presence of multiple plants, pollen, or insects. The algorithms therefore face limitations because of this diversity of conditions: accuracy can drop from the 98–99% achieved when testing on images of the same nature as the training dataset to less than 70% when testing on field images. In this context, it is necessary to understand which elements of the image contributed to the classification and why they led to a wrong classification. In this study, we begin by classifying diseases with the ResNet convolutional network, using a dataset of field images representing four foliar diseases of sunflower (Helianthus). Next, we apply CNN visualization techniques to explain the predictions for misclassified images. The interpretation methods highlighted situations in which the classification relied erroneously on background elements. We also noticed images containing several leaves in which the prediction was made using random areas of multiple leaves. An advanced stage of the disease, in which the lesions merge, appears to be another misclassification factor. The lack of an image set diverse enough to contain all the forms in which a disease manifests also leads to wrong classifications, either because symptoms are underrepresented or because the visual symptoms of several diseases are similar.
The conclusion of this study, based on the visualization results, is that classification using segmented lesions can improve accuracy, because many of the factors that lead to wrong classification are eliminated in this way.
Highlights:
•Visualize how a CNN used for disease identification makes its classification.
•What part of an image is used for classification, and why was that part selected?
•ResNet152 interpretability and building trust in Artificial Intelligence.
•Techniques used to explain and visualize how a network understands an input image.
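The abstract does not name the visualization techniques used; Grad-CAM is one common method for explaining CNN predictions of this kind, and its core arithmetic can be sketched in a few lines. As an assumption (the function name and toy data below are illustrative, not from the article): given the feature maps A_k of a convolutional layer and the gradients of the class score with respect to those maps, each channel weight alpha_k is the spatial average of its gradients, and the heatmap is ReLU(sum_k alpha_k * A_k).

```python
def grad_cam_heatmap(feature_maps, gradients):
    """Grad-CAM-style heatmap from K feature maps and their gradients.

    feature_maps, gradients: lists of K HxW grids (lists of lists of floats).
    Returns an HxW grid highlighting regions that support the predicted class.
    """
    K = len(feature_maps)
    H, W = len(feature_maps[0]), len(feature_maps[0][0])
    # Channel weights: global average pooling of the gradients per channel.
    alphas = [sum(sum(row) for row in gradients[k]) / (H * W) for k in range(K)]
    # Weighted combination of feature maps, followed by ReLU.
    return [[max(0.0, sum(alphas[k] * feature_maps[k][i][j] for k in range(K)))
             for j in range(W)]
            for i in range(H)]

# Toy example: two 2x2 feature maps with constant gradients.
fm = [[[1, 2], [3, 4]], [[0, 1], [1, 0]]]
gr = [[[1, 1], [1, 1]], [[-2, -2], [-2, -2]]]
heatmap = grad_cam_heatmap(fm, gr)  # → [[1.0, 0.0], [1.0, 4.0]]
```

In practice the heatmap is upsampled to the input image size and overlaid on the photo, which is how one can see whether the network attended to lesions or, as the study reports, to background elements.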
ISSN: 2666-1543
DOI: 10.1016/j.jafr.2022.100323