Improved Brain Tumor Segmentation Using UNet-LSTM Architecture

Bibliographic Details
Published in: SN Computer Science, Vol. 5, No. 5, p. 496
Main Authors: Sowrirajan, Saran Raj; Karumanan Srinivasan, Logeshwaran; Kalluri, Anisha Devi; Subburam, Ravi Kumar
Format: Journal Article
Language: English
Published: Singapore: Springer Nature Singapore (Springer Nature B.V.), 01.06.2024
More Information
Summary: Brain tumors are among the deadliest medical conditions and are associated with low survival rates. A brain tumor is a complex, life-changing condition in which abnormal cells grow in and around brain tissue. In the United States, nearly 87,000 cases are diagnosed each year, and the number continues to rise. Brain tumors are classified into two categories based on their impact on the patient: benign (non-cancerous) and malignant (cancerous). We focus on malignant tumors, as they require early detection for diagnosis. Brain tumors are graded on a four-point scale, from low grade (1, 2) to high grade (3, 4), and analyzing them accurately is a demanding task for medical professionals. We address this by generating masks of the tumor region, making segmentation less error-prone. We trained on MRI images from the BraTS2020 dataset and segmented the tumor successfully. The classes taken for segmentation are edema, background, enhancing, and non-enhancing. Many methodologies have previously been applied to segmentation; we instead integrate Long Short-Term Memory (LSTM) with the U-Net architecture. U-Net is an encoder-decoder Convolutional Neural Network with a contracting path and an expansive path. The accuracy, loss, and precision obtained from our work are 0.9916, 0.0240, and 0.9930, respectively.
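The abstract does not specify how the LSTM is wired into U-Net, so the following is a minimal sketch of one plausible arrangement, assuming a Keras/TensorFlow implementation: a small two-level 2D U-Net with a ConvLSTM2D layer at the bottleneck and a four-class per-pixel softmax head. The input shape, filter counts, and ConvLSTM placement are illustrative assumptions, not the authors' configuration.

import tensorflow as tf
from tensorflow.keras import layers, models

def conv_block(x, filters):
    # Two 3x3 convolutions, as in the standard U-Net.
    x = layers.Conv2D(filters, 3, padding="same", activation="relu")(x)
    x = layers.Conv2D(filters, 3, padding="same", activation="relu")(x)
    return x

def unet_lstm(input_shape=(128, 128, 4), num_classes=4):
    # Assumed input: four MRI modality channels per slice
    # (BraTS provides T1, T1ce, T2, and FLAIR volumes).
    inputs = layers.Input(shape=input_shape)

    # Contracting (encoder) path.
    c1 = conv_block(inputs, 32)
    p1 = layers.MaxPooling2D()(c1)
    c2 = conv_block(p1, 64)
    p2 = layers.MaxPooling2D()(c2)

    # Hypothetical LSTM placement: treat the bottleneck feature maps as a
    # length-1 sequence so ConvLSTM2D can apply recurrent gating.
    b = layers.Reshape((1, *p2.shape[1:]))(p2)
    b = layers.ConvLSTM2D(128, 3, padding="same", activation="tanh")(b)

    # Expansive (decoder) path with skip connections.
    u2 = layers.Conv2DTranspose(64, 2, strides=2, padding="same")(b)
    u2 = conv_block(layers.Concatenate()([u2, c2]), 64)
    u1 = layers.Conv2DTranspose(32, 2, strides=2, padding="same")(u2)
    u1 = conv_block(layers.Concatenate()([u1, c1]), 32)

    # Per-pixel softmax over the four classes named in the abstract:
    # background, edema, enhancing, and non-enhancing.
    outputs = layers.Conv2D(num_classes, 1, activation="softmax")(u1)
    return models.Model(inputs, outputs)

model = unet_lstm()
model.compile(optimizer="adam", loss="categorical_crossentropy",
              metrics=["accuracy", tf.keras.metrics.Precision()])

Training such a model against one-hot BraTS2020 masks would report the same metric types the abstract cites (accuracy, loss, precision), though the reported values belong to the authors' model, not this sketch.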
ISSN: 2662-995X
EISSN: 2661-8907
DOI: 10.1007/s42979-024-02799-0