Multi-contrast High Quality MR Image Super-Resolution with Dual Domain Knowledge Fusion
| Published in | 2022 IEEE International Conference on Bioinformatics and Biomedicine (BIBM), pp. 2127 - 2134 |
|---|---|
| Main Authors | |
| Format | Conference Proceeding |
| Language | English |
| Published | IEEE, 06.12.2022 |
Summary: Multi-contrast, high-quality, high-resolution (HR) Magnetic Resonance (MR) images enrich the information available for diagnosis and analysis. Deep convolutional neural network methods have shown promising ability for MR image super-resolution (SR) given low-resolution (LR) MR images, and methods that take HR images as references (Ref) have made further progress in enhancing MR image SR. However, existing multi-contrast MR image SR approaches are built on contracting-expanding backbones, which lose high-frequency information from the Ref image during downsampling, and they fail to transfer the textures of the Ref image into the target domain. In this paper, we propose the Edge Mask Transformer UNet (EMFU) for MR image SR. We propose the Edge Mask Transformer (EMF) to generate a global detail and texture representation of the target domain, and a dual domain fusion module inside the UNet aggregates semantic information from this representation and from the LR image of the target domain. Specifically, we extract and encode edge masks to guide the attention in EMF by re-distributing the embedding tensors, so that the network allocates more attention to image edge areas. We also design a dual domain fusion module with self-attention and cross-attention to deeply fuse semantic information across multiple MRI protocols. Extensive experiments show the effectiveness of the proposed EMFU, which surpasses state-of-the-art methods on benchmarks both quantitatively and visually. Code will be released to the community.
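
The paper's code was not released at the time of this record, so the following is only a minimal PyTorch sketch of the edge-mask-guided attention idea described in the abstract. The Sobel-based edge extractor, the token re-weighting scheme, and the learnable emphasis weight are all assumptions of this sketch, not details confirmed by the paper; it illustrates how re-distributing embedding tensors by an edge mask can concentrate attention on edge regions.

```python
# Illustrative sketch only: the authors' code is unreleased, so the edge
# extraction (Sobel) and the re-weighting scheme below are assumptions.
import torch
import torch.nn as nn
import torch.nn.functional as F

def sobel_edge_mask(img, thresh=0.1):
    """Binary edge mask from a grayscale image batch of shape (B, 1, H, W)."""
    kx = torch.tensor([[-1., 0., 1.], [-2., 0., 2.], [-1., 0., 1.]],
                      device=img.device).view(1, 1, 3, 3)
    ky = kx.transpose(2, 3)
    gx = F.conv2d(img, kx, padding=1)
    gy = F.conv2d(img, ky, padding=1)
    mag = torch.sqrt(gx ** 2 + gy ** 2)
    # Threshold relative to each image's maximum gradient magnitude.
    return (mag > thresh * mag.amax(dim=(2, 3), keepdim=True)).float()

class EdgeMaskAttention(nn.Module):
    """Self-attention whose input embeddings are re-weighted by an edge mask,
    so that edge tokens carry more mass in the attention computation."""
    def __init__(self, dim, heads=4):
        super().__init__()
        self.attn = nn.MultiheadAttention(dim, heads, batch_first=True)
        self.scale = nn.Parameter(torch.tensor(1.0))  # learnable edge emphasis

    def forward(self, tokens, edge_mask_tokens):
        # tokens: (B, N, C); edge_mask_tokens: (B, N) with values in {0, 1}.
        weighted = tokens * (1.0 + self.scale * edge_mask_tokens.unsqueeze(-1))
        out, _ = self.attn(weighted, weighted, weighted)
        return out + tokens  # residual connection
```

In use, the edge mask would presumably be computed from the LR target image, downsampled to the transformer's token grid, and flattened to shape (B, N) before being passed in alongside the patch tokens.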
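
Likewise, a hedged sketch of the dual domain fusion module: the abstract specifies only that self-attention and cross-attention are combined to fuse the two protocols, so the layer ordering, normalization placement, and MLP below are assumptions. The idea shown is self-attention within the target-domain tokens followed by cross-attention in which the target queries attend to the Ref-derived texture representation.

```python
class DualDomainFusion(nn.Module):
    """Hypothetical fusion block: self-attention over target-domain tokens,
    then cross-attention that queries the reference-domain representation,
    loosely following the abstract's description."""
    def __init__(self, dim, heads=4):
        super().__init__()
        self.self_attn = nn.MultiheadAttention(dim, heads, batch_first=True)
        self.cross_attn = nn.MultiheadAttention(dim, heads, batch_first=True)
        self.norm1 = nn.LayerNorm(dim)
        self.norm2 = nn.LayerNorm(dim)
        self.mlp = nn.Sequential(nn.Linear(dim, dim * 4), nn.GELU(),
                                 nn.Linear(dim * 4, dim))

    def forward(self, target_tokens, ref_tokens):
        # Both inputs are (B, N, C) token sequences: from the LR target
        # image and the Ref-derived texture representation, respectively.
        x = target_tokens
        h = self.norm1(x)
        x = x + self.self_attn(h, h, h)[0]
        # Cross-attention: target queries, reference keys/values.
        x = x + self.cross_attn(self.norm2(x), ref_tokens, ref_tokens)[0]
        return x + self.mlp(x)
```

Residual connections around each attention stage are a standard transformer design choice and are assumed here, not stated in the abstract.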
DOI: 10.1109/BIBM55620.2022.9995219