PoseNet - person-in-hangang/HanRiver GitHub Wiki
PoseNet
PoseNet estimates the root-relative 3D pose of a person from a cropped human image. It consists of two parts. The first is the backbone, which extracts a global feature from the cropped human image using ResNet. The second, the pose estimation part, takes the feature map from the backbone and upsamples it with three consecutive deconvolutional layers, each followed by batch normalization and a ReLU activation. A 1-by-1 convolution is then applied to the upsampled feature map to produce a 3D heatmap for each joint.
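The pose estimation part described above can be sketched in PyTorch. This is a minimal illustration, not the repository's actual code: the channel width (256), joint count, and depth resolution are assumed values chosen for the example.

```python
import torch
import torch.nn as nn

class PoseHead(nn.Module):
    """Sketch of the pose estimation part: three deconv layers
    (each with batch norm + ReLU) followed by a 1x1 convolution
    that produces per-joint 3D heatmaps. Channel sizes, joint_num,
    and depth_dim are illustrative assumptions."""

    def __init__(self, in_ch=2048, joint_num=21, depth_dim=64):
        super().__init__()
        layers = []
        ch = in_ch
        for _ in range(3):
            layers += [
                # Each deconv doubles the spatial resolution
                nn.ConvTranspose2d(ch, 256, kernel_size=4, stride=2,
                                   padding=1, bias=False),
                nn.BatchNorm2d(256),
                nn.ReLU(inplace=True),
            ]
            ch = 256
        self.deconv = nn.Sequential(*layers)
        # 1x1 conv: one depth slice per joint -> J * D output channels
        self.final = nn.Conv2d(256, joint_num * depth_dim, kernel_size=1)
        self.joint_num, self.depth_dim = joint_num, depth_dim

    def forward(self, feat):
        x = self.final(self.deconv(feat))                        # (N, J*D, H, W)
        n, _, h, w = x.shape
        # Reshape channels into a per-joint depth axis: 3D heatmaps
        return x.view(n, self.joint_num, self.depth_dim, h, w)

head = PoseHead()
feat = torch.randn(1, 2048, 8, 8)   # e.g. ResNet feature map for a cropped image
out = head(feat)
print(out.shape)                    # torch.Size([1, 21, 64, 64, 64])
```

With an 8x8 backbone feature map, the three stride-2 deconvolutions upsample to 64x64, so each joint gets a 64x64x64 heatmap volume from which the 3D joint location can be read out.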
Performance
- Sequence-wise comparison with state-of-the-art methods on the MuPoTS-3D dataset
- Joint-wise comparison with state-of-the-art methods on the MuPoTS-3D dataset. All ground truths are used for evaluation.
- Pre-trained model of RootNet here
- Bounding boxes (from DetectNet, not extended) of the Human3.6M and MuPoTS-3D datasets here. You can use these to test RootNet.
- Bounding boxes (from DetectNet, extended) and root joint coordinates (from RootNet) of the Human3.6M, MSCOCO, and MuPoTS-3D datasets here. Do not use the bounding boxes in this file to test RootNet, because they are extended; use the file right above (bounding boxes from DetectNet without extension).
- Bounding boxes (GT) and root joint coordinates (from RootNet) of the 3DPW dataset (test set only) here. The result is obtained from RootNet trained on MuCo+MSCOCO (without the 3DPW training set).
Reference: https://github.com/mks0601/3DMPPE_POSENET_RELEASE