One-stage Shape Instantiation from a Single 2D Image to 3D Point Cloud
Format | Journal Article |
---|---|
Language | English |
Published | 24.07.2019 |
DOI | 10.48550/arxiv.1907.10763 |
Summary: | Shape instantiation, which predicts the 3D shape of a dynamic target from one
or more 2D images, is important for real-time intra-operative navigation.
Previously, a general shape instantiation framework was proposed with manual
image segmentation to generate a 2D Statistical Shape Model (SSM) and with
Kernel Partial Least Square Regression (KPLSR) to learn the relationship
between the 2D and 3D SSM for 3D shape prediction. In this paper, the two-stage
shape instantiation is improved to be one-stage. PointOutNet, with 19
convolutional layers and three fully-connected layers, is used as the network
structure, and the Chamfer distance is used as the loss function to predict the
3D target point cloud from a single 2D image. With the proposed one-stage shape
instantiation algorithm, end-to-end image-to-point-cloud training and
inference can be achieved. A dataset from 27 Right Ventricle (RV) subjects,
comprising 609 experiments, was used to validate the proposed one-stage shape
instantiation algorithm. An average point cloud-to-point cloud (PC-to-PC) error
of 1.72 mm was achieved, which is comparable to the PLSR-based (1.42 mm) and
KPLSR-based (1.31 mm) two-stage shape instantiation algorithms. |
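The Chamfer distance mentioned above as the training loss compares two point clouds without requiring point-to-point correspondences: each point is matched to its nearest neighbour in the other cloud, in both directions. A minimal NumPy sketch of the symmetric squared-distance variant (function name and array shapes are illustrative assumptions, not taken from the paper's implementation):

```python
import numpy as np

def chamfer_distance(p, q):
    """Symmetric Chamfer distance between point clouds p (N, 3) and q (M, 3).

    For every point in p, take the squared distance to its nearest
    neighbour in q; do the same from q to p; return the sum of the
    two directional means. Zero iff the clouds coincide pointwise.
    """
    # Pairwise squared Euclidean distances via broadcasting, shape (N, M).
    d2 = np.sum((p[:, None, :] - q[None, :, :]) ** 2, axis=-1)
    # Mean nearest-neighbour distance in each direction.
    return d2.min(axis=1).mean() + d2.min(axis=0).mean()
```

In a training loop this would be re-expressed with a deep-learning framework's tensor ops so gradients flow back to the network; the O(N·M) pairwise matrix above is fine for clouds of a few thousand points.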