Semantic Segmentation using Vision Transformers: A survey
Main Authors | , , , , , |
---|---|
Format | Journal Article |
Language | English |
Published | 05.05.2023 |
Summary: | Semantic segmentation has a broad range of applications across
domains such as land-cover analysis, autonomous driving, and medical image
analysis. Convolutional neural networks (CNNs) and Vision Transformers (ViTs)
provide the architectural backbones for semantic segmentation. Although ViTs
have proven successful in image classification, they cannot be applied
directly to dense prediction tasks such as image segmentation and object
detection, because the plain ViT's patch partitioning scheme prevents it from
serving as a general-purpose backbone. In this survey, we discuss several ViT
architectures that can be used for semantic segmentation and how their
evolution has addressed this challenge. The strong performance of ViTs has
motivated the community to gradually replace traditional convolutional neural
networks in various computer vision tasks. This survey reviews and compares
the performance of ViT architectures designed for semantic segmentation on
benchmark datasets, giving the community an overview of existing
implementations and pointing toward more efficient ViT-based methodologies. |
DOI: | 10.48550/arxiv.2305.03273 |
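
The abstract's point about the patch partitioning scheme can be made concrete with a short sketch. The snippet below is a minimal illustration, assuming PyTorch; the 16x16 patch size, 768-dimensional embedding, and 21-class head are illustrative choices, not values taken from the survey. It shows that a plain ViT backbone reduces a 224x224 image to a 14x14 grid of tokens, so dense per-pixel prediction needs an extra decoding or upsampling step on top of the backbone.

```python
# Minimal sketch (assumes PyTorch). Shapes and hyperparameters are
# illustrative, not taken from any specific model in the survey.
import torch
import torch.nn as nn

image = torch.randn(1, 3, 224, 224)           # (batch, channels, H, W)

# ViT-style patch partitioning: a strided convolution that maps each
# non-overlapping 16x16 patch to a single embedding vector.
patch_embed = nn.Conv2d(3, 768, kernel_size=16, stride=16)
tokens = patch_embed(image)                    # (1, 768, 14, 14)
print(tokens.shape)                            # spatial resolution drops 224 -> 14

# Per-pixel semantic labels (here an illustrative 21 classes) require the
# full input resolution, which the token grid no longer carries.
num_classes = 21
classifier = nn.Conv2d(768, num_classes, kernel_size=1)
coarse_logits = classifier(tokens)             # (1, 21, 14, 14)
dense_logits = nn.functional.interpolate(
    coarse_logits, size=image.shape[-2:], mode="bilinear", align_corners=False
)
print(dense_logits.shape)                      # (1, 21, 224, 224)
```

In practice, the architectures reviewed in the survey close this resolution gap with dedicated decoders or hierarchical, multi-scale token designs rather than the simple bilinear upsampling shown here.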