Self-Supervised Learning for Seismic Image Segmentation From Few-Labeled Samples

Bibliographic Details
Published in: IEEE Geoscience and Remote Sensing Letters, Vol. 19, pp. 1-5
Main Authors: Monteiro, Bruno A. A.; Oliveira, Hugo; Santos, Jefersson A. dos
Format: Journal Article
Language: English
Published: Piscataway: The Institute of Electrical and Electronics Engineers, Inc. (IEEE), 2022
Summary: Current deep learning methods for interpreting seismic images require large amounts of labeled data, and because of strategic and economic interests, such data are not plentifully available. In this scenario, seismic interpretation can benefit from self-supervised learning (SSL) by relying on prior training without manually annotated labels within the target data domain, followed by fine-tuning with few shots. To demonstrate the potential of this approach, we conducted experiments with three classic context-based pretext tasks: rotation, jigsaw, and frame-order prediction. Our results for 1, 5, 10, and 20 shots showed significant improvements in mean Intersection-over-Union (mIoU) for semantic segmentation in most scenarios, outperforming the baseline method by 38% in the one-shot scenario on the F3 Netherlands dataset and by 16.4% on the New Zealand Parihaka dataset, and this gap widens further after ensemble modeling. These experiments suggest that SSL methods can also bring great benefits to seismic interpretation when few labeled data are available.
ISSN: 1545-598X; 1558-0571
DOI: 10.1109/LGRS.2022.3193567
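
As a rough illustration of the rotation pretext task mentioned in the summary, the sketch below shows how an encoder could be pre-trained on unlabeled seismic patches by predicting which multiple of 90 degrees each patch was rotated by. This is not the authors' implementation: the PyTorch framework choice, the SmallEncoder backbone, the patch size, and all hyperparameters are assumptions made purely for illustration; the paper's actual architectures and training setup are described in the full text.

```python
# Hedged sketch of a rotation-prediction pretext task (not the authors' code).
import torch
import torch.nn as nn

class SmallEncoder(nn.Module):
    """Toy convolutional encoder standing in for the real backbone (assumed)."""
    def __init__(self, feat_dim=64):
        super().__init__()
        self.net = nn.Sequential(
            nn.Conv2d(1, 32, 3, stride=2, padding=1), nn.ReLU(),
            nn.Conv2d(32, feat_dim, 3, stride=2, padding=1), nn.ReLU(),
        )
    def forward(self, x):
        return self.net(x)

class RotationPretextModel(nn.Module):
    """Encoder plus a linear head that predicts which of 4 rotations was applied."""
    def __init__(self, encoder, feat_dim=64, n_rotations=4):
        super().__init__()
        self.encoder = encoder
        self.pool = nn.AdaptiveAvgPool2d(1)
        self.head = nn.Linear(feat_dim, n_rotations)
    def forward(self, x):
        feats = self.pool(self.encoder(x)).flatten(1)
        return self.head(feats)

def make_rotation_batch(patches):
    """Rotate each unlabeled patch by a random multiple of 90 degrees.
    Returns the rotated patches and the rotation index as the pretext label."""
    k = torch.randint(0, 4, (patches.size(0),))
    rotated = torch.stack([torch.rot90(p, int(ki), dims=(-2, -1))
                           for p, ki in zip(patches, k)])
    return rotated, k

# One illustrative pretext-training step on unlabeled seismic patches.
encoder = SmallEncoder()
model = RotationPretextModel(encoder)
optimizer = torch.optim.Adam(model.parameters(), lr=1e-3)
criterion = nn.CrossEntropyLoss()

unlabeled_patches = torch.randn(8, 1, 64, 64)   # stand-in for seismic crops
inputs, targets = make_rotation_batch(unlabeled_patches)
loss = criterion(model(inputs), targets)
optimizer.zero_grad()
loss.backward()
optimizer.step()
# After pretext training, the encoder weights would be reused to initialize a
# segmentation network for the downstream task.
```

In the few-shot stage described in the summary, such a pre-trained encoder would initialize a segmentation network that is then fine-tuned on the 1 to 20 labeled sections and evaluated with mIoU; the details of that stage, and of the jigsaw and frame-order pretext tasks, are given in the paper itself.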