Semantic Segmentation using Vision Transformers: A survey

Bibliographic Details
Main Authors: Thisanke, Hans; Deshan, Chamli; Chamith, Kavindu; Seneviratne, Sachith; Vidanaarachchi, Rajith; Herath, Damayanthi
Format: Journal Article
Language: English
Published: 05.05.2023

Summary: Semantic segmentation has a broad range of applications in a variety of domains, including land coverage analysis, autonomous driving, and medical image analysis. Convolutional neural networks (CNNs) and Vision Transformers (ViTs) provide the architectural models for semantic segmentation. Although ViTs have proven successful in image classification, they cannot be applied directly to dense prediction tasks such as image segmentation and object detection, since a plain ViT is not a general-purpose backbone because of its patch partitioning scheme. In this survey, we discuss several ViT architectures that can be used for semantic segmentation and how their evolution has addressed this challenge. The rise of ViTs and their strong performance have motivated the community to gradually replace traditional convolutional neural networks in various computer vision tasks. This survey aims to review and compare the performance of ViT architectures designed for semantic segmentation on benchmark datasets. This should help the community gain insight into existing semantic segmentation implementations and discover more efficient ViT-based methodologies.
DOI: 10.48550/arxiv.2305.03273
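
Note: The summary identifies the patch partitioning scheme as the reason a plain ViT is not a general-purpose backbone for dense prediction. The sketch below is a minimal, illustrative PyTorch example and is not code from the survey; the class name, dimensions, and the bilinear-upsampling head are assumptions. It shows that a vanilla ViT encoder yields only a single coarse grid of patch tokens, so a naive segmentation head must upsample heavily (here 16x) to recover per-pixel predictions, which is the gap that hierarchical ViT backbones discussed in the survey aim to close.

# Minimal sketch (illustrative only): plain ViT encoder + naive segmentation head.
# Patch embedding turns a 224x224 image into a 14x14 grid of tokens, so pixel-level
# predictions require upsampling the coarse, single-scale patch features.
import torch
import torch.nn as nn
import torch.nn.functional as F

class ViTSegSketch(nn.Module):
    def __init__(self, img_size=224, patch=16, dim=256, depth=4, heads=8, num_classes=19):
        super().__init__()
        self.grid = img_size // patch                      # 224 / 16 = 14 tokens per side
        self.embed = nn.Conv2d(3, dim, kernel_size=patch, stride=patch)  # patch partitioning
        self.pos = nn.Parameter(torch.zeros(1, self.grid * self.grid, dim))
        layer = nn.TransformerEncoderLayer(dim, heads, dim * 4, batch_first=True)
        self.encoder = nn.TransformerEncoder(layer, depth)
        self.head = nn.Conv2d(dim, num_classes, kernel_size=1)  # per-patch class logits

    def forward(self, x):                                  # x: (B, 3, 224, 224)
        tokens = self.embed(x).flatten(2).transpose(1, 2)  # (B, 196, dim): one coarse scale only
        tokens = self.encoder(tokens + self.pos)
        feat = tokens.transpose(1, 2).reshape(x.size(0), -1, self.grid, self.grid)
        logits = self.head(feat)                           # (B, num_classes, 14, 14)
        # Naive bilinear upsampling back to pixel resolution; hierarchical ViT backbones
        # instead expose multi-scale features for dense prediction heads.
        return F.interpolate(logits, size=x.shape[-2:], mode="bilinear", align_corners=False)

if __name__ == "__main__":
    model = ViTSegSketch()
    out = model(torch.randn(1, 3, 224, 224))
    print(out.shape)  # torch.Size([1, 19, 224, 224])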