Self-Supervised Learning Based on Spatial Awareness for Medical Image Analysis

Bibliographic Details
Published in: IEEE Access, Vol. 8, pp. 162973-162981
Main Authors: Nguyen, Xuan-Bac; Lee, Guee Sang; Kim, Soo Hyung; Yang, Hyung Jeong
Format: Journal Article
Language: English
Published: Piscataway: The Institute of Electrical and Electronics Engineers, Inc. (IEEE), 2020

Summary: Medical image analysis is one of the research fields that has benefited greatly from deep learning in recent years. To achieve good performance, a learning model requires large-scale, fully annotated data; however, collecting a sufficient amount of labeled data for training is a significant burden. Since most medical applications have far more unlabeled data than labeled data, self-supervised learning has been used to improve performance. However, most current self-supervised methods capture only the semantic features of the data and do not fully exploit properties inherent in medical images. Specifically, in CT or MR images, the spatial and structural information contained in the dataset has not been fully considered. In this paper, we propose a novel method for self-supervised learning in medical image analysis that exploits both semantic and spatial features at the same time. The proposed method is evaluated on organ segmentation and intracranial hemorrhage detection, and the results demonstrate its effectiveness.
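The abstract does not specify the pretext task, so the following is only an illustrative sketch, not the authors' actual method. One common way to turn spatial awareness in CT/MR volumes into a self-supervised signal is to derive pseudo-labels from each slice's position along the scan axis (binning the normalized position into a small number of classes), so a network can be pretrained without any manual annotation. The function name and binning scheme below are hypothetical:

```python
import numpy as np

def spatial_pretext_labels(num_slices: int, num_bins: int = 4) -> np.ndarray:
    """Assign each axial slice a pseudo-label from its normalized
    position along the scan axis, binned into `num_bins` classes.

    No manual annotation is needed: the labels come for free from
    the slice ordering of the volume (illustrative sketch only).
    """
    # Normalized slice positions in [0, 1]; guard against a 1-slice volume.
    positions = np.arange(num_slices) / max(num_slices - 1, 1)
    # Bin positions into num_bins classes; clamp the final slice into the last bin.
    labels = np.minimum((positions * num_bins).astype(int), num_bins - 1)
    return labels

# A 12-slice volume split into 4 positional classes:
print(spatial_pretext_labels(12, 4))  # → [0 0 0 1 1 1 2 2 2 3 3 3]
```

A classifier pretrained to predict these positional classes from raw slices would then be fine-tuned on the downstream task (e.g. segmentation or hemorrhage detection), which is the usual self-supervised pretrain-then-transfer recipe.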
ISSN: 2169-3536
DOI: 10.1109/ACCESS.2020.3021469