Toward cell nuclei precision between OCT and H&E images translation using signal-to-noise ratio cycle-consistency


Bibliographic Details
Published in: Computer methods and programs in biomedicine, Vol. 242, p. 107824
Main Authors: Liu, Chih-Hao; Fu, Li-Wei; Chen, Homer H.; Huang, Sheng-Lung
Format: Journal Article
Language: English
Published: Elsevier B.V., 01.12.2023

Summary: Medical image-to-image translation is often difficult and of limited effectiveness due to differences in image acquisition mechanisms and the diverse structure of biological tissues. This work presents an unpaired image translation model between in-vivo optical coherence tomography (OCT) and ex-vivo hematoxylin and eosin (H&E) stained images that requires no image stacking, registration, post-processing, or annotation. The model generates high-quality, highly accurate virtual medical images and is robust and bidirectional. Our framework introduces random noise to (1) blur redundant features, (2) defend against self-adversarial attacks, (3) stabilize the inverse conversion, and (4) mitigate the impact of OCT speckle. We also demonstrate that our model can be pre-trained and then fine-tuned with images from different OCT systems in just a few epochs. Qualitative and quantitative comparisons with traditional image-to-image translation models show the robustness of our proposed signal-to-noise ratio (SNR) cycle-consistency method.
• We present an unpaired translation model between in-vivo OCT and ex-vivo H&E images without image alignment or annotation assistance.
• We introduce SNRGAN, a framework that uses random noise to improve translation quality, outperforming existing models.
• Our pre-trained model can be fine-tuned with images from different OCT systems.
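The noise-injection idea described in the summary can be sketched as a cycle-consistency loss in which the intermediate translated image is perturbed with random noise before the inverse mapping, so the generators cannot hide information in imperceptible patterns and must tolerate speckle-like corruption. This is a minimal illustrative sketch in PyTorch, assuming a CycleGAN-style setup; the generator architectures, the L1 reconstruction term, and the `noise_std` value are hypothetical stand-ins, not the paper's actual implementation.

```python
import torch
import torch.nn as nn
import torch.nn.functional as F

# Stand-in generators for the two translation directions
# (hypothetical single-layer networks, not the paper's models).
G_oct2he = nn.Conv2d(1, 1, kernel_size=3, padding=1)  # OCT -> virtual H&E
G_he2oct = nn.Conv2d(1, 1, kernel_size=3, padding=1)  # H&E -> virtual OCT

def noisy_cycle_loss(x_oct: torch.Tensor, noise_std: float = 0.05) -> torch.Tensor:
    """Cycle-consistency with Gaussian noise injected before the inverse
    mapping: translate OCT -> H&E, corrupt the result, map back to OCT,
    and penalize the reconstruction error against the original input."""
    fake_he = G_oct2he(x_oct)
    noisy_he = fake_he + noise_std * torch.randn_like(fake_he)  # random perturbation
    rec_oct = G_he2oct(noisy_he)
    return F.l1_loss(rec_oct, x_oct)

x = torch.rand(1, 1, 32, 32)        # toy single-channel "OCT" patch
loss = noisy_cycle_loss(x)          # scalar loss, ready for backprop
```

In a full training loop this term would be added to the usual adversarial losses for both directions; the noise forces the cycle to be consistent in distribution rather than pixel-exact, which is the intuition behind the SNR cycle-consistency described above.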
ISSN: 0169-2607, 1872-7565
DOI: 10.1016/j.cmpb.2023.107824