Parallel Architecture of Fully Convolved Neural Network for Retinal Vessel Segmentation

Bibliographic Details
Published in: Journal of Digital Imaging, Vol. 33, No. 1, pp. 168-180
Main Authors: Sathananthavathi V.; Indumathi G.; Swetha Ranjani A.
Format: Journal Article
Language: English
Published: Cham: Springer International Publishing (Springer Nature B.V.), 01.02.2020

Summary: Retinal blood vessel extraction is an indispensable step in the diagnosis of many retinal diseases. In this work, a parallel fully convolved neural network-based architecture is proposed for retinal blood vessel segmentation, and the improvement in network performance obtained by applying different levels of preprocessing to the input images is studied. The proposed method is evaluated on DRIVE (Digital Retinal Images for Vessel Extraction) and STARE (STructured Analysis of the Retina), the widely accepted public databases in this research area. The proposed work attains high accuracy, sensitivity, and specificity of about 96.37%, 86.53%, and 98.18%, respectively. Data independence is also demonstrated by testing abnormal STARE images with the DRIVE-trained model. The proposed architecture extracts vessels well irrespective of vessel thickness. The obtained results show that the proposed work outperforms most existing segmentation methodologies, and it can serve as a real-time application tool since the entire work is carried out on a CPU: vessel extraction executes at low computational cost and takes less than 2 s per image.
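This record does not detail the network's layer configuration, so the following Python sketch is only a hypothetical illustration of the two ideas named in the summary: two convolutional branches running in parallel (for example, over differently preprocessed versions of the fundus image) whose feature maps are fused into a per-pixel vessel probability map, and the pixel-level accuracy, sensitivity, and specificity used to report results. The class name ParallelFCN, all layer widths, and the 1x1-convolution fusion scheme are assumptions for illustration, not the authors' published configuration.

import torch
import torch.nn as nn

class ParallelFCN(nn.Module):
    # Hypothetical two-branch fully convolutional network; layer widths
    # and the 1x1 fusion are illustrative assumptions, not the paper's
    # reported configuration.
    def __init__(self, in_channels=1):
        super().__init__()
        def branch():
            return nn.Sequential(
                nn.Conv2d(in_channels, 16, kernel_size=3, padding=1), nn.ReLU(),
                nn.Conv2d(16, 32, kernel_size=3, padding=1), nn.ReLU(),
            )
        self.branch_a = branch()  # e.g. one preprocessing level of the image
        self.branch_b = branch()  # e.g. another preprocessing level
        self.fuse = nn.Conv2d(64, 1, kernel_size=1)  # fuse concatenated maps

    def forward(self, x_a, x_b):
        feats = torch.cat([self.branch_a(x_a), self.branch_b(x_b)], dim=1)
        return torch.sigmoid(self.fuse(feats))  # per-pixel vessel probability

def pixel_metrics(pred, truth, threshold=0.5):
    # Accuracy, sensitivity, and specificity of a thresholded vessel map
    # against a binary ground-truth mask, counted per pixel.
    p, t = pred >= threshold, truth.bool()
    tp = (p & t).sum().item()
    tn = (~p & ~t).sum().item()
    fp = (p & ~t).sum().item()
    fn = (~p & t).sum().item()
    accuracy = (tp + tn) / (tp + tn + fp + fn)
    sensitivity = tp / (tp + fn)   # true positive rate
    specificity = tn / (tn + fp)   # true negative rate
    return accuracy, sensitivity, specificity

A minimal usage example, with random placeholder data standing in for a fundus image patch and its ground-truth mask:

net = ParallelFCN()
image = torch.rand(1, 1, 64, 64)           # placeholder grayscale patch
prob = net(image, image)                   # same input fed to both branches here
acc, sen, spe = pixel_metrics(prob, torch.rand(1, 1, 64, 64) > 0.5)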
ISSN: 0897-1889
EISSN: 1618-727X
DOI: 10.1007/s10278-019-00250-y