rosnode - rgbd_hand_gesture_recognition.py - parameter #303
Parameters need to be consistent with the other tools, using argparse.

I am using OpenDR installed on my computer from the develop branch. I am feeding the RGB camera topic and a depth image like this one:

I cannot get any output from the /opendr/gestures topic. Is the depth_image topic different from the one that you are using?
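On the parameter-consistency point, here is a minimal sketch of what an argparse-based interface could look like; the flag names and default topics below are illustrative assumptions, not the actual interface of rgbd_hand_gesture_recognition.py:

```python
# Hypothetical sketch only: flag names and default topics are assumptions,
# not the actual interface of rgbd_hand_gesture_recognition.py.
import argparse
import rospy

parser = argparse.ArgumentParser()
parser.add_argument("--input_rgb_image_topic", type=str,
                    default="/camera/color/image_raw",
                    help="ROS topic for the input RGB image")
parser.add_argument("--input_depth_image_topic", type=str,
                    default="/camera/depth/image_raw",
                    help="ROS topic for the input depth image")
parser.add_argument("--output_gestures_topic", type=str,
                    default="/opendr/gestures",
                    help="ROS topic for the recognized gesture output")
# rospy.myargv() strips ROS remapping arguments before argparse sees them
args = parser.parse_args(rospy.myargv()[1:])
```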
Comments

@thomaspeyrucain, I have created the branch named …
Hello @minhquoc0712, …
Can you describe your problem when running the node?
There are no error messages; I just cannot get any output on the /opendr/gestures topic. Is this tool only available for the Kinect? We have a grayscale depth image, as you can see in the first message that I sent. Could it work with this?
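One way to narrow this down is to check what the depth topic actually publishes. A quick probe, with a placeholder topic name to substitute for your own:

```python
# Quick probe to inspect the depth stream; "/camera/depth/image_raw" is a
# placeholder -- substitute the topic you are feeding to the node.
import rospy
from sensor_msgs.msg import Image

rospy.init_node("depth_probe", anonymous=True)
msg = rospy.wait_for_message("/camera/depth/image_raw", Image, timeout=10.0)
# An 8-bit grayscale stream reports "mono8"; many depth cameras publish
# "16UC1" or "32FC1" instead, which may be what the node expects.
print("encoding:", msg.encoding)
print("resolution: %dx%d" % (msg.width, msg.height))
```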
Are you using the newest version of the node? We have since merged the fix branch into …
Yes.
@thomaspeyrucain, can you try the node I just updated in this branch, and maybe modify the …
@minhquoc0712 Yes, thanks, now I get the output:
I cannot find in the documentation which ID corresponds to which gesture.
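A small listener can help inspect the raw output in the meantime. This sketch assumes the node publishes vision_msgs/Classification2D (check the node source for the actual message type); the ID-to-name map is a placeholder, with only ID 8 ("punch") confirmed later in this thread:

```python
# Hedged sketch: the message type is an assumption, and ID_TO_GESTURE is a
# placeholder to be completed from the learner documentation.
import rospy
from vision_msgs.msg import Classification2D

ID_TO_GESTURE = {8: "punch"}  # only ID 8 is confirmed in this thread

def callback(msg):
    for result in msg.results:
        name = ID_TO_GESTURE.get(result.id, "unknown id %d" % result.id)
        print("gesture: %s (score %.2f)" % (name, result.score))

rospy.init_node("gesture_listener", anonymous=True)
rospy.Subscriber("/opendr/gestures", Classification2D, callback)
rospy.spin()
```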
@minhquoc0712 @thomaspeyrucain do you think we could add the ID-gesture correspondence to the learner doc, similar to what I did for semantic segmentation?
@tsampazk Yes, that would be perfect ^^
Hi, I am not the person who implemented the algorithm, but I think I can use the information from the file …
Thanks @minhquoc0712, I think you can add it directly in #343.
Hi @thomaspeyrucain, I have updated the ID-class information in the document. Can you check whether the algorithm is working properly?
Hello @minhquoc0712, … Also, it is often giving a detection of ID 8 (punch) with a lot of confidence even when no gesture is shown.
Hi @thomaspeyrucain, I have updated the document with example gestures from the paper. About the second question, I cannot answer it since I did not implement this algorithm from the beginning. Maybe I will discuss it with my team, or you can bring it up at the technical meeting.
Hi @thomaspeyrucain, I am also not the person who implemented the algorithm, but I worked with the provided trained model for the demo last year and found it to be quite sensitive with respect to the distance and position of the person. This is because the dataset it was trained on is fairly small and does not contain "in-the-wild" images, only images of several people in a similar lab environment. You can see some examples of the dataset images in https://github.com/opendr-eu/opendr/blob/master/projects/perception/multimodal_human_centric/rgbd_hand_gesture_recognition/ or the full dataset at https://data.mendeley.com/datasets/ndrczc35bt/1. For me, the model worked better when making sure the inference setup is similar to the training dataset, i.e., frontal view, person in the center, etc. An example here: https://tuni-my.sharepoint.com/:v:/g/personal/kateryna_chumachenko_tuni_fi/Ef_snAWKevRBl9Z6hIvJwysB2W0WabuL-IjpxOmmnKtb0Q?e=4xe56R
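Along those lines, if you need to adapt an 8-bit grayscale depth stream to the model's expected input, a preprocessing sketch could look like this; the input size and normalization constants are placeholders to be taken from the learner source:

```python
# Minimal sketch, assuming the model takes a fixed-size input and a depth map
# scaled to [0, 1]; the size and normalization constants are placeholders,
# not the learner's actual values.
import cv2
import numpy as np

def prepare_depth(depth_u8):
    """Convert an 8-bit grayscale depth image to a normalized float map."""
    depth = cv2.resize(depth_u8, (224, 224)).astype(np.float32) / 255.0
    DEPTH_MEAN, DEPTH_STD = 0.5, 0.25  # placeholders: use the learner's values
    return (depth - DEPTH_MEAN) / DEPTH_STD
```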