DeepShip: An underwater acoustic benchmark dataset and a separable convolution based autoencoder for classification

Bibliographic Details
Published in Expert Systems with Applications, Vol. 183, p. 115270
Main Authors Irfan, Muhammad; Jiangbin, Zheng; Ali, Shahid; Iqbal, Muhammad; Masood, Zafar; Hamid, Umar
Format Journal Article
Language English
Published New York: Elsevier Ltd (Elsevier BV), 30.11.2021

Summary:
•Proposed dataset consists of 47 h 4 min of real underwater recordings of 265 ships.
•Recordings are from throughout the year, with different sea states and noise levels.
•Study of 6 T-F features with 8 machine learning and deep learning methods on the dataset.
•Proposed a separable convolutional autoencoder for better classification accuracy.

Underwater acoustic classification is a challenging problem because of the high background noise and complex sound propagation patterns in the sea environment. Various algorithms proposed in the last few years used their own privately collected datasets for design and validation, and such data is not publicly available. To conduct research in this field, there is a dire need for a publicly available dataset. To bridge this gap, we construct and present an underwater acoustic dataset, named DeepShip, which consists of 47 h and 4 min of real-world underwater recordings of 265 different ships belonging to four classes. The proposed dataset includes recordings from throughout the year with different sea states and noise levels. The presented dataset will not only help to evaluate the performance of existing algorithms but will also benefit the research community in the future. Using the proposed dataset, we also conducted a comprehensive study of various machine learning and deep learning algorithms on six time–frequency based extracted features. In addition, we propose a novel separable convolution based autoencoder network for better classification accuracy. Experimental results, compared in terms of classification accuracy, precision, recall, and F1-score and analyzed using a paired-sample t-test, show that the proposed network achieves a classification accuracy of 77.53% using the CQT feature, which is better than that achieved by the other methods.
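
The summary names a CQT time–frequency feature and a separable convolution based autoencoder but does not reproduce the architecture. Below is a minimal sketch of such a pipeline, assuming librosa for the constant-Q transform and Keras SeparableConv2D layers; the patch size, layer widths, dual reconstruction/classification outputs, and loss weights are illustrative assumptions, not the paper's configuration (only the four ship classes come from the abstract).

```python
# Minimal, hypothetical sketch only: CQT patches + a separable-convolution
# autoencoder with a 4-class softmax head. Patch shape, layer widths, the
# dual reconstruction/classification outputs, and loss weights are
# illustrative assumptions, not the paper's configuration.
import numpy as np
import librosa
from tensorflow import keras
from tensorflow.keras import layers

def cqt_patch(wav_path, sr=22050, n_bins=96, frames=96):
    """Load a recording and return one fixed-size log-CQT patch in [0, 1]."""
    y, _ = librosa.load(wav_path, sr=sr, mono=True)
    cqt = np.abs(librosa.cqt(y, sr=sr, n_bins=n_bins, bins_per_octave=12))
    log_cqt = librosa.amplitude_to_db(cqt, ref=np.max)
    patch = log_cqt[:, :frames]                 # assumes >= `frames` CQT hops
    patch = (patch - patch.min()) / (np.ptp(patch) + 1e-8)
    return patch[..., np.newaxis]               # shape (n_bins, frames, 1)

def build_model(input_shape=(96, 96, 1), n_classes=4):
    """Separable-conv encoder/decoder with a classifier on the bottleneck."""
    inp = keras.Input(shape=input_shape)
    # Encoder: depthwise-separable convolutions with 2x downsampling
    x = layers.SeparableConv2D(32, 3, padding="same", activation="relu")(inp)
    x = layers.MaxPooling2D(2)(x)
    x = layers.SeparableConv2D(64, 3, padding="same", activation="relu")(x)
    bottleneck = layers.MaxPooling2D(2)(x)
    # Decoder: reconstruct the input spectrogram patch
    d = layers.SeparableConv2D(64, 3, padding="same", activation="relu")(bottleneck)
    d = layers.UpSampling2D(2)(d)
    d = layers.SeparableConv2D(32, 3, padding="same", activation="relu")(d)
    d = layers.UpSampling2D(2)(d)
    recon = layers.Conv2D(1, 3, padding="same", activation="sigmoid", name="recon")(d)
    # Classifier head on the encoded representation
    c = layers.GlobalAveragePooling2D()(bottleneck)
    cls = layers.Dense(n_classes, activation="softmax", name="cls")(c)
    model = keras.Model(inp, [recon, cls])
    model.compile(optimizer="adam",
                  loss={"recon": "mse", "cls": "sparse_categorical_crossentropy"},
                  loss_weights={"recon": 0.5, "cls": 1.0},
                  metrics={"cls": "accuracy"})
    return model
```

One plausible reading of "autoencoder for classification" is that the reconstruction loss regularizes the encoder while a softmax head on the bottleneck predicts the ship class; consult the paper itself for the actual DeepShip network.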
ISSN:0957-4174
1873-6793
DOI:10.1016/j.eswa.2021.115270