Varied channels region proposal and classification network for wildlife image classification under complex environment
Published in: IET Image Processing, Vol. 14, No. 4, pp. 585–591
Format: Journal Article
Language: English
Published: The Institution of Engineering and Technology, 27.03.2020
Summary: A varied channels region proposal and classification network (VCRPCN) is developed for automatic wildlife classification in camera-trap images, building on a deep convolutional neural network (DCNN) and the characteristics of the animals' appearance. The architecture is improved by feeding different channels into different components of the network for different purposes: the animal images and their background images are fed into the region proposal component to extract candidate regions for the animal's location, while the animal images combined with those candidate regions are fed into the classification component to identify the animal categories. This architecture exploits the changes an animal's appearance causes in the image, identifies potential animal regions, and extracts local features from them for description and classification. Five hundred low-contrast animal images, all acquired at night, were collected for evaluation, and cross-validation is employed to statistically measure the performance of the proposed algorithm. The experimental results demonstrate that, compared with the well-known object detection network Faster R-CNN on the same dataset and training configuration, the proposed VCRPCN achieves higher accuracy, with an average improvement of 21%. (An illustrative sketch of this varied-channels arrangement follows the record below.)
ISSN: 1751-9659, 1751-9667
DOI: 10.1049/iet-ipr.2019.1042
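
To make the varied-channels idea in the summary concrete, below is a minimal PyTorch sketch of the described arrangement: one branch consumes the animal frame stacked channel-wise with its background frame to score candidate regions, and a second branch classifies crops of the animal frame at those regions. All module names, channel counts, layer sizes, and shapes here are illustrative assumptions, not the authors' implementation (the paper's actual VCRPCN is a full DCNN detector with box regression and region pooling).

```python
# Hypothetical sketch of the varied-channels idea: the region-proposal branch
# sees (animal image, background image); the classification branch sees crops
# of the animal image at the proposed regions. Shapes and layers are assumed.
import torch
import torch.nn as nn


class RegionProposalBranch(nn.Module):
    """Takes the animal image stacked with its background image (6 channels)
    and emits a per-location objectness map; a real RPN would also regress
    box offsets and apply non-maximum suppression."""
    def __init__(self):
        super().__init__()
        self.backbone = nn.Sequential(
            nn.Conv2d(6, 32, kernel_size=3, padding=1), nn.ReLU(),
            nn.Conv2d(32, 64, kernel_size=3, padding=1), nn.ReLU(),
        )
        self.objectness = nn.Conv2d(64, 1, kernel_size=1)

    def forward(self, animal_img, background_img):
        # Channel-wise stacking is the "varied channels" input for this branch.
        x = torch.cat([animal_img, background_img], dim=1)
        return self.objectness(self.backbone(x))


class ClassificationBranch(nn.Module):
    """Classifies fixed-size crops taken from the animal image at the
    proposed regions (region pooling is simplified to pre-cropped patches)."""
    def __init__(self, num_classes):
        super().__init__()
        self.features = nn.Sequential(
            nn.Conv2d(3, 32, kernel_size=3, padding=1), nn.ReLU(),
            nn.AdaptiveAvgPool2d(1),
        )
        self.head = nn.Linear(32, num_classes)

    def forward(self, crops):
        f = self.features(crops).flatten(1)  # (N, 32)
        return self.head(f)                  # (N, num_classes)


if __name__ == "__main__":
    rpn = RegionProposalBranch()
    cls = ClassificationBranch(num_classes=10)
    animal = torch.randn(1, 3, 128, 128)      # camera-trap frame with an animal
    background = torch.randn(1, 3, 128, 128)  # matching empty-background frame
    scores = rpn(animal, background)          # (1, 1, 128, 128) objectness map
    crops = torch.randn(4, 3, 64, 64)         # crops at the top-scoring regions
    logits = cls(crops)                       # (4, 10) class scores
    print(scores.shape, logits.shape)
```

The key design point the abstract emphasizes is that the two components receive different channel compositions: the background frame helps the proposal branch isolate appearance changes caused by the animal, while the classifier only ever sees animal-image content restricted to the candidate regions.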