CTI-Unet: Hybrid Local Features and Global Representations Efficiently
| Published in | 2023 IEEE International Conference on Image Processing (ICIP), pp. 735–739 |
|---|---|
| Main Authors | , , , , |
| Format | Conference Proceeding |
| Language | English |
| Published | IEEE, 08.10.2023 |
Summary: Recent advancements in medical image segmentation have demonstrated superior performance by combining Transformer and U-Net, owing to the Transformer's exceptional ability to capture long-range semantic dependencies. However, existing approaches mostly either replace Convolutional Neural Network (CNN) components with Transformers or stack the two in series, which limits the potential of their combination. In this paper, we introduce CTI-UNet, a dual-branch feature encoder that effectively fuses the global representations and local features of the CNN and Transformer branches at different scales through bidirectional feature interaction. Our proposed method outperforms existing approaches on multiple medical datasets, demonstrating state-of-the-art performance. The code for CTI-UNet is publicly available at https://github.com/huhaigen/CTI-UNet.
DOI: 10.1109/ICIP49359.2023.10222235
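The abstract's core idea — two parallel encoder branches that exchange information at every scale rather than being chained in series — can be illustrated with a minimal sketch. This is not the authors' implementation: the real branches are convolutional and self-attention blocks, whereas here each "branch" is a plain list of feature values, the 0.5 mixing weights stand in for learned projections, and pair-averaging stands in for strided convolution or patch merging. The function names `interact` and `dual_branch_encoder` are hypothetical.

```python
def interact(cnn_feat, trans_feat):
    # Bidirectional feature interaction (illustrative): each branch absorbs
    # a scaled copy of the other, so local detail flows into the global
    # branch and global context flows into the local branch.
    cnn_out = [c + 0.5 * t for c, t in zip(cnn_feat, trans_feat)]
    trans_out = [t + 0.5 * c for c, t in zip(cnn_feat, trans_feat)]
    return cnn_out, trans_out

def dual_branch_encoder(x, num_scales=3):
    # Both branches start from the same input and stay in parallel;
    # fused features are collected at every scale for the decoder's
    # skip connections (as in a U-Net).
    cnn_feat, trans_feat = list(x), list(x)
    fused = []
    for _ in range(num_scales):
        cnn_feat, trans_feat = interact(cnn_feat, trans_feat)
        fused.append([c + t for c, t in zip(cnn_feat, trans_feat)])
        # Downsample by averaging adjacent pairs -- a stand-in for the
        # strided convolution / patch merging used at each real scale.
        cnn_feat = [(cnn_feat[i] + cnn_feat[i + 1]) / 2
                    for i in range(0, len(cnn_feat), 2)]
        trans_feat = [(trans_feat[i] + trans_feat[i + 1]) / 2
                      for i in range(0, len(trans_feat), 2)]
    return fused
```

The point of the sketch is structural: unlike serial CNN-then-Transformer designs, neither branch is ever replaced by the other; they run side by side and the fusion happens through repeated two-way exchange at each resolution.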