Combination of Color and Focus Segmentation for Medical Images with Low Depth-of-Field
| Published in | Current directions in biomedical engineering, Vol. 4, No. 1, pp. 345-349 |
|---|---|
| Format | Journal Article |
| Language | English |
| Published | De Gruyter, 01.09.2018 |
Summary: Image segmentation plays an increasingly important role in image processing. It allows for various applications, including the analysis of an image for automatic image understanding and the integration of complementary data. During vascular surgeries, the blood flow in the vessels has to be checked constantly, which could be facilitated by a segmentation of the affected vessels. The segmentation of medical images is still done manually, which depends on the surgeon's experience and is time-consuming. As a result, there is a growing need for automatic image segmentation methods. We propose an unsupervised method to detect the regions of no interest (RONI) in intraoperative images with low depth-of-field (DOF). The proposed method is divided into three steps. First, a color segmentation using a clustering algorithm is performed. In a second step, we assume that the regions of interest (ROI) are in focus whereas the RONI are unfocused. This allows us to segment the image using an edge-based focus measure. Finally, we combine the focused edges with the color RONI to determine the final segmentation result. When tested on different intraoperative images of aneurysm clipping surgeries, the algorithm is able to segment most of the RONI not belonging to the pulsating vessel of interest. Surgical instruments like the metallic clips can also be excluded. Although the image data for the validation of the proposed method is limited to one intraoperative video, a proof of concept is demonstrated.
ISSN: 2364-5504
DOI: 10.1515/cdbme-2018-0083
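The abstract describes a three-step pipeline: color segmentation with a clustering algorithm, an edge-based focus measure exploiting the low depth-of-field, and a combination of the two to obtain the RONI. The sketch below illustrates that kind of pipeline under explicit assumptions: it uses OpenCV's k-means as the color clustering and a smoothed Laplacian magnitude as the focus measure, with arbitrary thresholds. The paper's actual clustering algorithm, focus measure, and parameters are not given in this record and may differ.

```python
import cv2
import numpy as np

def segment_roni(image_bgr, n_colors=4, focus_ksize=5,
                 focus_thresh=15.0, focused_ratio_thresh=0.02):
    """Illustrative sketch of a color + focus RONI segmentation.

    Assumptions (not taken from the paper): k-means clustering in Lab space,
    Laplacian-based focus measure, and the numeric thresholds below.
    """
    # Step 1: color segmentation via k-means clustering in Lab space
    lab = cv2.cvtColor(image_bgr, cv2.COLOR_BGR2LAB)
    pixels = lab.reshape(-1, 3).astype(np.float32)
    criteria = (cv2.TERM_CRITERIA_EPS + cv2.TERM_CRITERIA_MAX_ITER, 20, 1.0)
    _, labels, _ = cv2.kmeans(pixels, n_colors, None, criteria, 3,
                              cv2.KMEANS_PP_CENTERS)
    labels = labels.reshape(lab.shape[:2])

    # Step 2: edge-based focus map (smoothed Laplacian magnitude);
    # pixels above the threshold are treated as in focus
    gray = cv2.cvtColor(image_bgr, cv2.COLOR_BGR2GRAY)
    lap = np.abs(cv2.Laplacian(gray, cv2.CV_64F, ksize=focus_ksize))
    focus_map = cv2.GaussianBlur(lap, (0, 0), sigmaX=5)
    focused = focus_map > focus_thresh

    # Step 3: a color cluster containing almost no focused edges is
    # marked as region of no interest (RONI)
    roni_mask = np.zeros(labels.shape, dtype=np.uint8)
    for k in range(n_colors):
        cluster = labels == k
        ratio = focused[cluster].mean() if cluster.any() else 0.0
        if ratio < focused_ratio_thresh:
            roni_mask[cluster] = 255
    return roni_mask
```

As a usage example, `segment_roni(cv2.imread("frame.png"))` would return a binary mask of the unfocused background regions for a single intraoperative frame; the cluster count and thresholds would need tuning to the actual image material.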