Task relevance driven adversarial learning for simultaneous detection, size grading, and quantification of hepatocellular carcinoma via integrating multi-modality MRI
| Published in | Medical image analysis, Vol. 81, p. 102554 |
|---|---|
| Format | Journal Article |
| Language | English |
| Published | Elsevier B.V., 01.10.2022 |
Summary: | •For the first time, our proposed TrdAL method provides a time-saving, reliable, and stable tool that achieves simultaneous HCC detection, size grading, and multi-index quantification by integrating multi-modality MRI of in-phase, out-of-phase, T2FS, and DWI.•The proposed MaTrans encodes the position of multi-modality MRI to capture the relevance among the modalities, which refines feature fusion and selection.•The innovative Trd-Rg-D captures the internal high-order relationships among the tasks to refine the performance of all tasks simultaneously. Moreover, adding the radiomics feature as prior knowledge to Trd-Rg-D enhances detailed feature extraction.•The TrdAL provides a task-interaction constraint strategy, which enforces higher-order consistency among multi-task labels to achieve united adversarial learning across the tasks of detection, size grading, and multi-index quantification.
[Display omitted]
Hepatocellular Carcinoma (HCC) detection, size grading, and quantification (i.e., the center-point coordinates, max-diameter, and area) using multi-modality magnetic resonance imaging (MRI) are clinically significant tasks for HCC assessment and treatment. However, performing the three tasks simultaneously is extremely challenging due to: (1) the lack of an effective mechanism to capture the relevance among multi-modality MRI information for multi-modality feature fusion and selection; (2) the lack of an effective mechanism and constraint strategy to achieve mutual promotion among the tasks. In this paper, we propose a task relevance driven adversarial learning framework (TrdAL) for simultaneous HCC detection, size grading, and multi-index quantification using multi-modality MRI (i.e., in-phase, out-of-phase, T2FS, and DWI). The TrdAL first obtains an expressive, dimension-reduced feature using a CNN-based encoder. Second, the proposed modality-aware Transformer (MaTrans) fuses and selects multi-modality MRI features, addressing the diversity of multi-modality information by capturing the relevance among the modalities. Then, the innovative task relevance driven and radiomics guided discriminator (Trd-Rg-D) performs united adversarial learning. The Trd-Rg-D captures the internal high-order relationships to refine the performance of all tasks simultaneously. Moreover, adding the radiomics feature as prior knowledge to Trd-Rg-D enhances detailed feature extraction. Lastly, a novel task interaction loss function constrains the TrdAL, enforcing higher-order consistency among multi-task labels to enhance mutual promotion. The TrdAL is validated on corresponding multi-modality MRI of 135 subjects.
The experiments demonstrate that TrdAL achieves high accuracy in (1) HCC detection: specificity of 93.71%, sensitivity of 93.15%, accuracy of 93.33%, and IoU of 82.93%; (2) size grading: accuracies for large, medium, small, and tiny tumors and for healthy subjects of 90.38%, 87.74%, 80.68%, 77.78%, and 96.87%, respectively; (3) multi-index quantification: mean absolute errors of the center point, max-diameter, and area of 2.74 mm, 3.17 mm, and 144.51 mm², respectively. These results indicate that the proposed TrdAL provides an efficient, accurate, and reliable tool for HCC diagnosis in clinical practice. |
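The task interaction constraint described in the abstract enforces consistency among the multi-task outputs, e.g., a predicted max-diameter should agree with the predicted size grade. A minimal sketch of such a check is shown below; the grade thresholds and function names are illustrative assumptions, not values from the paper.

```python
# Hypothetical sketch of a task-interaction consistency check between two
# of TrdAL's task heads: size grading and max-diameter quantification.
# The diameter thresholds below are assumed for illustration only.

GRADE_THRESHOLDS_MM = [(10.0, "tiny"), (20.0, "small"), (50.0, "medium")]

def grade_from_diameter(max_diameter_mm: float) -> str:
    """Map a predicted max-diameter (mm) to a coarse size grade."""
    for upper_bound, grade in GRADE_THRESHOLDS_MM:
        if max_diameter_mm < upper_bound:
            return grade
    return "large"

def interaction_penalty(pred_grade: str, pred_diameter_mm: float) -> float:
    """Return a 0/1 penalty when the two task heads disagree.

    A differentiable version of this disagreement term could serve as one
    component of a task interaction loss.
    """
    return 0.0 if grade_from_diameter(pred_diameter_mm) == pred_grade else 1.0
```

For example, a predicted grade of "small" with a predicted max-diameter of 15 mm incurs no penalty, while "large" with 15 mm does; in training, such a term would push the two heads toward mutually consistent outputs.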
ISSN: | 1361-8415, 1361-8423 |
DOI: | 10.1016/j.media.2022.102554 |