Multifilters-Based Unsupervised Method for Retinal Blood Vessel Segmentation

Bibliographic Details
Published in: Applied Sciences, Vol. 12, No. 13, p. 6393
Main Authors: Muzammil, Nayab; Shah, Syed Ayaz Ali; Shahzad, Aamir; Khan, Muhammad Amir; Ghoniem, Rania M.
Format: Journal Article
Language: English
Published: Basel: MDPI AG, 01.07.2022
Summary: Fundus imaging is one of the crucial methods that helps ophthalmologists diagnose various eye diseases in modern medicine. An accurate vessel segmentation method can be a convenient tool for foreseeing and analyzing serious diseases, including hypertension and diabetes, which damage the appearance of the retinal vessels. This work proposes an unsupervised approach for segmenting vessels from retinal images. The proposed method consists of multiple steps. First, the green channel is extracted from the colored retinal image and preprocessed using Contrast Limited Histogram Equalization as well as Fuzzy Histogram-Based Equalization for contrast enhancement. To suppress geometrical structures (macula, optic disc) and noise, top-hat morphological operations are applied. A matched filter and a Gabor wavelet filter are then applied to the enhanced image, and the outputs of both are added to extract vessel pixels. The resulting image, in which the blood vessels are now visible, is binarized using a human visual system (HVS) based threshold. The final segmented blood vessel image is obtained by post-processing. The proposed method is evaluated on two public datasets (DRIVE and STARE) and shows comparable results in terms of sensitivity, specificity, and accuracy. The achieved sensitivity, specificity, and accuracy are 0.7271, 0.9798, and 0.9573 on the DRIVE database, and 0.7164, 0.9760, and 0.9560 on the STARE database, respectively, in less than 3.17 s on average per image.
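The pipeline described in the abstract (green-channel extraction, contrast enhancement, top-hat morphology, matched and Gabor filtering, thresholding, post-processing) can be sketched roughly as follows. This is not the authors' implementation: it is a minimal OpenCV/Python illustration that assumes CLAHE for the contrast-enhancement step, omits the fuzzy histogram equalization, substitutes Otsu thresholding for the paper's HVS-based binarization, and uses hypothetical parameter values (kernel sizes, orientation counts, minimum component area). The file name "retina.png" is a placeholder.

```python
# Rough sketch of a multifilter vessel-segmentation pipeline; parameters are illustrative.
import cv2
import numpy as np

def matched_filter_bank(L=9, sigma=1.5, n_angles=12):
    """Gaussian-profile matched-filter kernels rotated over n_angles orientations."""
    half = L // 2
    x = np.arange(-half, half + 1, dtype=np.float32)
    profile = np.exp(-(x ** 2) / (2 * sigma ** 2))   # bright-vessel cross-section
    kernel = np.tile(profile, (L, 1))
    kernel -= kernel.mean()                          # zero-mean matched filter
    kernels = []
    for k in range(n_angles):
        M = cv2.getRotationMatrix2D((half, half), k * 180.0 / n_angles, 1.0)
        kernels.append(cv2.warpAffine(kernel, M, (L, L)))
    return kernels

def gabor_bank(ksize=15, sigma=3.0, lambd=8.0, gamma=0.5, n_angles=12):
    """Real Gabor kernels spanning n_angles orientations."""
    return [cv2.getGaborKernel((ksize, ksize), sigma, k * np.pi / n_angles,
                               lambd, gamma, 0, ktype=cv2.CV_32F)
            for k in range(n_angles)]

def segment_vessels(bgr):
    # 1) Green channel carries the best vessel contrast in fundus images.
    green = bgr[:, :, 1]

    # 2) Contrast enhancement (CLAHE as a stand-in for the paper's enhancement step).
    clahe = cv2.createCLAHE(clipLimit=2.0, tileGridSize=(8, 8))
    enhanced = clahe.apply(green)

    # 3) Top-hat morphology to suppress large structures (optic disc, macula) and noise;
    #    the channel is inverted first so vessels become bright.
    inverted = cv2.bitwise_not(enhanced)
    se = cv2.getStructuringElement(cv2.MORPH_ELLIPSE, (15, 15))
    tophat = cv2.morphologyEx(inverted, cv2.MORPH_TOPHAT, se).astype(np.float32)

    # 4) Matched-filter and Gabor responses (maximum over orientations), summed
    #    as in the abstract.
    matched = np.max([cv2.filter2D(tophat, cv2.CV_32F, k)
                      for k in matched_filter_bank()], axis=0)
    gabor = np.max([cv2.filter2D(tophat, cv2.CV_32F, k)
                    for k in gabor_bank()], axis=0)
    response = cv2.normalize(matched + gabor, None, 0, 255,
                             cv2.NORM_MINMAX).astype(np.uint8)

    # 5) Binarization (Otsu here; the paper uses an HVS-based threshold).
    _, binary = cv2.threshold(response, 0, 255, cv2.THRESH_BINARY + cv2.THRESH_OTSU)

    # 6) Post-processing: discard small connected components (spurious responses).
    n, labels, stats, _ = cv2.connectedComponentsWithStats(binary, connectivity=8)
    clean = np.zeros_like(binary)
    for i in range(1, n):
        if stats[i, cv2.CC_STAT_AREA] >= 50:
            clean[labels == i] = 255
    return clean

if __name__ == "__main__":
    image = cv2.imread("retina.png")                 # placeholder path
    cv2.imwrite("vessels.png", segment_vessels(image))
```

Note that the sketch combines the orientation responses of each filter by taking the per-pixel maximum and then adds the matched-filter and Gabor maps, mirroring the abstract's statement that the outputs of the two filters are added before thresholding.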
ISSN: 2076-3417
DOI: 10.3390/app12136393