Guided Patch-Grouping Wavelet Transformer with Spatial Congruence for Ultra-High Resolution Segmentation
Main Authors | , , |
---|---|
Format | Journal Article |
Language | English |
Published | 02.07.2023 |
Summary: | Most existing ultra-high resolution (UHR) segmentation methods struggle to balance memory cost against the accuracy of local characterization. Our proposed Guided Patch-Grouping Wavelet Transformer (GPWFormer) takes both into account and achieves impressive performance. GPWFormer is a Transformer ($\mathcal{T}$)-CNN ($\mathcal{C}$) mutual learning framework, in which $\mathcal{T}$ takes the whole UHR image as input and harvests both local details and fine-grained long-range contextual dependencies, while $\mathcal{C}$ takes the downsampled image as input to learn category-wise deep context. For high inference speed and low computational complexity, $\mathcal{T}$ partitions the original UHR image into patches, groups them dynamically, and then learns low-level local details with a lightweight multi-head Wavelet Transformer (WFormer) network. Fine-grained long-range contextual dependencies are also captured during this process, since patches that are far apart in the spatial domain can be assigned to the same group. In addition, the masks produced by $\mathcal{C}$ guide the patch-grouping process, providing a heuristic for the grouping decision. Moreover, congruence constraints between the two branches are exploited to maintain spatial consistency among the patches. Overall, the multi-stage process is stacked in a pyramid manner. Experiments show that GPWFormer outperforms existing methods by significant margins on five benchmark datasets. |
---|---|
DOI: | 10.48550/arxiv.2307.00711 |
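The abstract describes how masks from the CNN branch $\mathcal{C}$ guide the grouping of UHR patches, so that spatially distant patches can land in the same group before being processed by the lightweight WFormer. The sketch below illustrates that idea only; it is not the authors' released code, and all names and the dominant-class grouping heuristic (`guided_patch_grouping`, `patch_size`, `num_groups`) are illustrative assumptions.

```python
import torch
import torch.nn.functional as F

def guided_patch_grouping(image, coarse_mask, patch_size=256, num_groups=4):
    """image: (C, H, W) UHR tensor; coarse_mask: (K, h, w) class scores from the CNN branch."""
    C, H, W = image.shape
    # 1. Partition the UHR image into non-overlapping patches.
    patches = image.unfold(1, patch_size, patch_size).unfold(2, patch_size, patch_size)
    patches = patches.permute(1, 2, 0, 3, 4).reshape(-1, C, patch_size, patch_size)

    # 2. Upsample the coarse CNN mask to full resolution and average it per patch,
    #    giving every patch a class-distribution descriptor.
    mask_up = F.interpolate(coarse_mask[None], size=(H, W), mode="bilinear",
                            align_corners=False)[0]
    desc = F.avg_pool2d(mask_up, patch_size)      # (K, H/ps, W/ps)
    desc = desc.flatten(1).t()                    # (num_patches, K)

    # 3. Assign each patch to a group by its dominant predicted class -- a simple
    #    heuristic standing in for the paper's dynamic, mask-guided grouping.
    group_id = desc.argmax(dim=1) % num_groups    # (num_patches,)
    return [patches[group_id == g] for g in range(num_groups)]

# Usage (shapes are assumptions): 3-channel 2048x2048 image, 19-class coarse mask.
groups = guided_patch_grouping(torch.rand(3, 2048, 2048), torch.rand(19, 256, 256))
```

Each returned group would then be fed to the WFormer branch as a batch, which is how patches that are far apart spatially can still attend to one another.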