diff --git a/Readme.md b/Readme.md
index c8e2da7a..5c76a56c 100644
--- a/Readme.md
+++ b/Readme.md
@@ -94,7 +94,7 @@ This part is the basic code for fitting SMPL[^loper2015] with 2D keypoints estim

- Novel view synthesis for human interaction(coming soon)
+ Novel view synthesis for human interaction
@@ -102,17 +102,24 @@ This part is the basic code for fitting SMPL[^loper2015] with 2D keypoints estim
 
 With our proposed method, we release two large dataset of human motion: LightStage and Mirrored-Human. See the [website](https://chingswy.github.io/Dataset-Demo/) for more details.
 
-If you would like to download the ZJU-Mocap dataset, please sign the [agreement](https://zjueducn-my.sharepoint.com/:b:/g/personal/pengsida_zju_edu_cn/EbeMCvja4VNJmgi79dASTo8ByeNm3xdCPetBlHW3aeE6gQ?e=pH8pjX), and email it to Qing Shuai (s_q@zju.edu.cn) and cc Xiaowei Zhou (xwzhou@zju.edu.cn) to request the download link.
+If you would like to download the ZJU-Mocap dataset, please sign the [agreement](https://pengsida.net/project_page_assets/files/ZJU-MoCap_Agreement.pdf), and email it to Qing Shuai (s_q@zju.edu.cn) and cc Xiaowei Zhou (xwzhou@zju.edu.cn) to request the download link.
-
+
+
 LightStage: captured with LightStage system
-
-
-
+
+
 Mirrored-Human: collected from the Internet
+
+ +Many works have achieved wonderful results based on our dataset: + +- [Real-time volumetric rendering of dynamic humans](https://real-time-humans.github.io/) +- [CVPR2022: HumanNeRF: Free-viewpoint Rendering of Moving People from Monocular Video](https://grail.cs.washington.edu/projects/humannerf/) +- [ECCV2022: KeypointNeRF: Generalizing Image-based Volumetric Avatars using Relative Spatial Encoding of Keypoints](https://markomih.github.io/KeypointNeRF/) ## Other features