ContraReg: Contrastive Learning of Multi-modality Unsupervised Deformable Image Registration

Bibliographic Details
Published in: arXiv.org
Main Authors: Dey, Neel; Schlemper, Jo; Salehi, Seyed Sadegh Mohseni; Zhou, Bo; Gerig, Guido; Sofka, Michal
Format: Paper
Language: English
Published: Ithaca: Cornell University Library, arXiv.org, 27.06.2022

Summary: Establishing voxelwise semantic correspondence across distinct imaging modalities is a foundational yet formidable computer vision task. Current multi-modality registration techniques maximize hand-crafted inter-domain similarity functions, are limited in modeling nonlinear intensity relationships and deformations, and may require significant re-engineering or underperform on new tasks, datasets, and domain pairs. This work presents ContraReg, an unsupervised contrastive representation learning approach to multi-modality deformable registration. By projecting learned multi-scale local patch features onto a jointly learned inter-domain embedding space, ContraReg obtains representations useful for non-rigid multi-modality alignment. Experimentally, ContraReg achieves accurate and robust results with smooth and invertible deformations across a series of baselines and ablations on a neonatal T1-T2 brain MRI registration task, with all methods validated over a wide range of deformation regularization strengths.
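The summary describes projecting local patch features from both modalities into a jointly learned inter-domain embedding space. As a rough illustration of that idea only (not the authors' released implementation), the sketch below shows a generic patch-wise InfoNCE-style contrastive loss in PyTorch, where corresponding patch locations across the two modalities serve as positives and all other patches as negatives. All names, shapes, and hyperparameters here (PatchProjector, patch_nce_loss, the 256-dim features, the 0.07 temperature) are illustrative assumptions.

```python
# Hypothetical sketch of a patch-wise contrastive loss for a joint
# inter-domain embedding space; not ContraReg's actual code.
import torch
import torch.nn as nn
import torch.nn.functional as F


class PatchProjector(nn.Module):
    """Projects local patch features from either modality into a shared
    embedding space via a small MLP, as is common in contrastive setups."""

    def __init__(self, in_dim: int, embed_dim: int = 128):
        super().__init__()
        self.mlp = nn.Sequential(
            nn.Linear(in_dim, embed_dim),
            nn.ReLU(inplace=True),
            nn.Linear(embed_dim, embed_dim),
        )

    def forward(self, feats: torch.Tensor) -> torch.Tensor:
        # feats: (num_patches, in_dim) -> unit-norm embeddings
        return F.normalize(self.mlp(feats), dim=-1)


def patch_nce_loss(z_src: torch.Tensor, z_tgt: torch.Tensor,
                   temperature: float = 0.07) -> torch.Tensor:
    """InfoNCE over patch locations: the patch at the same spatial index
    in the other modality is the positive; all others are negatives."""
    logits = z_src @ z_tgt.t() / temperature           # (N, N) similarities
    targets = torch.arange(z_src.size(0), device=z_src.device)
    return F.cross_entropy(logits, targets)


if __name__ == "__main__":
    # Toy usage: 64 patch features of dim 256 sampled from each modality's
    # encoder at one scale; a multi-scale variant would repeat this per scale.
    proj = PatchProjector(in_dim=256)
    f_t1 = torch.randn(64, 256)   # e.g. T1-derived patch features
    f_t2 = torch.randn(64, 256)   # e.g. warped T2-derived patch features
    loss = patch_nce_loss(proj(f_t1), proj(f_t2))
    print(f"contrastive loss: {loss.item():.3f}")
```

In a registration setting, a loss of this form would typically replace a hand-crafted inter-domain similarity term when training the deformation network, with the projector learned jointly so that corresponding anatomy from both modalities embeds nearby.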
ISSN: 2331-8422