RapidVol: Rapid Reconstruction of 3D Ultrasound Volumes from Sensorless 2D Scans
Main Authors | , , , , |
---|---|
Format | Journal Article |
Language | English |
Published | 16.04.2024 |
Summary: | Two-dimensional (2D) freehand ultrasonography is one of the most commonly
used medical imaging modalities, particularly in obstetrics and gynaecology.
However, it only captures 2D cross-sectional views of inherently 3D anatomies,
losing valuable contextual information. As an alternative to requiring costly
and complex 3D ultrasound scanners, 3D volumes can be constructed from 2D scans
using machine learning, but this usually requires long computation times.
Here, we propose RapidVol: a neural representation framework to speed up
slice-to-volume ultrasound reconstruction. We use tensor-rank decomposition to
decompose the typical 3D volume into sets of tri-planes, and store those,
together with a small neural network, instead. A set of 2D ultrasound scans,
with their ground-truth (or estimated) 3D position and orientation (pose), is all
that is required to form a complete 3D reconstruction. Reconstructions are
formed from real fetal brain scans and then evaluated by requesting novel
cross-sectional views. Compared to prior approaches based on fully
implicit representations (e.g. neural radiance fields), our method is over 3x
quicker, 46% more accurate, and more robust when given inaccurate poses.
Further speed-up is also possible by reconstructing from a structural prior
rather than from scratch. |
DOI: | 10.48550/arxiv.2404.10766 |
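The tri-plane storage scheme described in the summary can be sketched as follows. This is a minimal illustration only: the plane resolution, feature dimension, and the stand-in linear "decoder" are assumptions for the sketch, not the paper's actual architecture. The idea is that three 2D feature planes plus a small network replace a dense 3D grid, which is what makes queries and storage cheap.

```python
import numpy as np

rng = np.random.default_rng(0)
N, C = 64, 8  # plane resolution and feature channels (assumed values)

# Instead of a dense N*N*N volume, store three N*N feature planes.
plane_xy = rng.standard_normal((N, N, C))
plane_xz = rng.standard_normal((N, N, C))
plane_yz = rng.standard_normal((N, N, C))

# Stand-in for the paper's small neural network: a single linear map
# from the concatenated plane features to a scalar intensity.
W = rng.standard_normal((3 * C,))

def query(x, y, z):
    """Return an intensity for integer voxel coordinates (x, y, z)."""
    feat = np.concatenate([plane_xy[x, y], plane_xz[x, z], plane_yz[y, z]])
    return float(feat @ W)

# Storage comparison against a dense grid with C features per voxel:
dense_floats = N ** 3 * C        # 2,097,152
triplane_floats = 3 * N * N * C  # 98,304
print(f"dense: {dense_floats:,} floats, tri-planes: {triplane_floats:,} floats")
```

Because any cross-sectional view is just a set of (x, y, z) queries, novel slices at arbitrary poses can be rendered from the same three planes without ever materialising the full volume.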