Human Posture Recognition and Estimation Method Based on 3D Multiview Basketball Sports Dataset
| Published in | Complexity (New York, N.Y.), Vol. 2021, No. 1 |
|---|---|
| Main Authors | , |
| Format | Journal Article |
| Language | English |
| Published | Hoboken: Hindawi, 2021 (Hindawi Limited / Wiley) |
| Subjects | |
| Online Access | Get full text |
Summary: In traditional 3D reconstruction methods, predicting the 3D structure of an object from a single view is a very difficult task. This research mainly addresses human pose recognition and estimation based on a 3D multiview basketball sports dataset. The convolutional neural network framework used is VGG11, pretrained on the ImageNet dataset. Only some modules of the VGG11 network are used; for different feature fusion methods, different VGG11 modules serve as the feature extraction network. For computational efficiency, the multilayer perceptron in the network model is implemented with one-dimensional convolutions. The input is a randomly sampled point set; after one perceptron layer, it yields an n × 16 feature set. This feature set is then fed into two network branches: one continues the perceptron to produce an n × 1024 feature set, while the other extracts local features of the points. After an RGB basketball image passes through the semantic segmentation network, an image containing the target object is obtained and fed into the constructed feature fusion network model. After feature extraction on the RGB image and the depth image, respectively, the RGB feature, the local point-cloud feature, and the global feature are concatenated into an N × 1152 feature vector. This vector is processed by three network branches that predict the object position, rotation, and confidence, respectively; feature dimensionality reduction is realized by one-dimensional convolution, and the activation function is ReLU. After removing the feature mapping module, the accuracy of VC-CNN_v1 dropped by 0.33% and the accuracy of VC-CNN_v2 dropped by 0.55%. These results show that the feature mapping module improves the recognition performance of the network to a certain extent.
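A minimal PyTorch sketch of the point-feature branch described in the summary is given below: a shared perceptron implemented with one-dimensional convolutions lifts each sampled point to a 16-dimensional feature, one branch continues the perceptron to an n × 1024 feature set, and the other produces local point features. Only the 16 and 1024 widths and the two-branch layout come from the abstract; the intermediate channel widths, the 64-dimensional local feature, and the max-pooling that collapses the 1024-dimensional branch into a single global vector are assumptions.

```python
import torch
import torch.nn as nn

class PointFeatureBranch(nn.Module):
    """Point-feature extractor: a shared MLP implemented with 1-D convolutions.
    A first layer lifts every sampled point to 16 dimensions; one branch
    continues to 1024 dimensions (pooled into a global feature), the other
    produces local per-point features."""

    def __init__(self, in_channels: int = 3, local_dim: int = 64):
        super().__init__()
        # first shared perceptron layer: each point -> 16-dim feature (n x 16)
        self.first = nn.Sequential(
            nn.Conv1d(in_channels, 16, kernel_size=1),
            nn.ReLU(inplace=True),
        )
        # branch 1: continue the shared MLP up to 1024 dims (n x 1024)
        self.global_mlp = nn.Sequential(
            nn.Conv1d(16, 128, kernel_size=1), nn.ReLU(inplace=True),
            nn.Conv1d(128, 1024, kernel_size=1), nn.ReLU(inplace=True),
        )
        # branch 2: local per-point features (width is an assumption)
        self.local_mlp = nn.Sequential(
            nn.Conv1d(16, local_dim, kernel_size=1), nn.ReLU(inplace=True),
        )

    def forward(self, points: torch.Tensor):
        # points: (B, 3, n) randomly sampled point set
        x = self.first(points)                              # (B, 16, n)
        local_feat = self.local_mlp(x)                      # (B, local_dim, n)
        global_feat = self.global_mlp(x)                    # (B, 1024, n)
        global_feat = torch.max(global_feat, dim=2).values  # (B, 1024) pooled global feature
        return local_feat, global_feat


# Usage sketch: a batch of 2 point sets with 500 sampled points each
# local_f, global_f = PointFeatureBranch()(torch.randn(2, 3, 500))
```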
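The fusion stage and prediction branches might then be sketched as follows: per-point RGB features, local point-cloud features, and the repeated global feature are concatenated into an N × 1152 tensor, and three branches built from one-dimensional convolutions with ReLU predict position, rotation, and confidence. The 1152 fused dimension, the 1024-dimensional global feature, and the three outputs come from the abstract; the 64 + 64 split between RGB and local point features, the intermediate head widths, and the quaternion parameterization of rotation are assumptions.

```python
import torch
import torch.nn as nn

class FusionPoseHead(nn.Module):
    """Fusion stage: per-point RGB features, local point-cloud features, and the
    repeated global feature are concatenated into an N x 1152 tensor, then three
    1-D convolutional branches with ReLU reduce the dimensionality and predict
    position, rotation, and confidence per point."""

    def __init__(self, rgb_dim: int = 64, local_dim: int = 64,
                 global_dim: int = 1024, rot_dim: int = 4):
        super().__init__()
        fused = rgb_dim + local_dim + global_dim  # 64 + 64 + 1024 = 1152

        def branch(out_dim: int) -> nn.Sequential:
            # dimensionality reduction via 1-D convolutions with ReLU
            return nn.Sequential(
                nn.Conv1d(fused, 512, 1), nn.ReLU(inplace=True),
                nn.Conv1d(512, 128, 1), nn.ReLU(inplace=True),
                nn.Conv1d(128, out_dim, 1),
            )

        self.pos_head = branch(3)        # object position (translation)
        self.rot_head = branch(rot_dim)  # object rotation (quaternion, assumed)
        self.conf_head = branch(1)       # confidence score

    def forward(self, rgb_feat, local_feat, global_feat):
        # rgb_feat: (B, 64, N), local_feat: (B, 64, N), global_feat: (B, 1024)
        n_points = rgb_feat.shape[2]
        global_rep = global_feat.unsqueeze(2).expand(-1, -1, n_points)  # (B, 1024, N)
        fused = torch.cat([rgb_feat, local_feat, global_rep], dim=1)    # (B, 1152, N)
        return self.pos_head(fused), self.rot_head(fused), self.conf_head(fused)
```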
ISSN: 1076-2787, 1099-0526
DOI: 10.1155/2021/6697697