
Performance in Hand-Object Interaction Tasks #27

Open
Bokai-Ji opened this issue Aug 12, 2024 · 1 comment

Comments

@Bokai-Ji

Hi @ChanglongJiangGit ,

Thank you for your excellent work!

I have a couple of questions regarding the model's performance and application. First, how well does the model perform in hand-object interaction scenarios? Additionally, could you provide some guidance on setting up the pipeline for inference on custom datasets?

I appreciate any insights you can share.

@ChanglongJiangGit
Owner

Thanks for your attention!

First, A2J-Transformer can be applied to hand-object interaction datasets such as HO-3D. To write the dataloader, you can follow Keypoint Transformer (CVPR'22).
Second, for inference on a custom dataset, the dataloader only needs to feed the model "input_img"; A2J-Transformer will then output 2.5D coordinates for the image, from which both the 2D coordinates and the 3D coordinates (relative to the root joint) can be visualized.
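For anyone setting this up, here is a minimal sketch of what that custom-image inference path might look like. All names (`normalize_image`, `split_25d`, the preprocessing) are placeholders for illustration, not the repo's actual API, and the real transforms in the codebase will differ:

```python
import numpy as np

def normalize_image(img, input_size=256):
    # Placeholder preprocessing: assumes img is already an
    # input_size x input_size x 3 uint8 array (resizing omitted).
    # Returns a 1x3xHxW float32 batch in [0, 1], NCHW layout.
    x = img.astype(np.float32) / 255.0
    return np.transpose(x, (2, 0, 1))[None]

def split_25d(joints_25d):
    # The model predicts 2.5D joints: (u, v) image-plane coordinates
    # plus a depth value relative to the root joint.
    uv = joints_25d[..., :2]        # 2D pixel coordinates
    rel_depth = joints_25d[..., 2]  # root-relative depth
    return uv, rel_depth

# Example with a dummy prediction for 21 hand joints:
dummy_pred = np.zeros((21, 3), dtype=np.float32)
uv, z = split_25d(dummy_pred)
```

The `uv` array can be drawn directly onto the input image for 2D visualization, while `rel_depth` gives the root-relative 3D component.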

Hope this will help you!
