
How to interpret the evaluation results provided by the program? #21

Open
mate-huaboy opened this issue Jun 21, 2023 · 2 comments

@mate-huaboy

Hi, thanks for your great work!
I ran into some problems when running your test code. First, I followed your instructions to organize the relevant data. Then, using the model parameters and code you provided, I evaluated on the T-LESS dataset. In the end, the program reported an ADD score of 76.3 and a GADD score of 10.14, which don't seem to correspond to the values reported in the paper. I'm not sure where I went wrong. Here is a partial output from the program.

29
**** add: 0.00, adds: 0.00, add(-s): 0.00
<2cm add: 0.00, adds: 0.00, add(-s): 0.00
30
**** add: 2.58, adds: 2.58, add(-s): 2.58
<2cm add: 0.00, adds: 0.00, add(-s): 0.00
Average of all object:
**** add: 6.42, adds: 6.42, add(-s): 6.42
<2cm add: 0.01, adds: 0.01, add(-s): 0.01
All object (following PoseCNN):
**** add: 10.14, adds: 10.14, add(-s): 10.14
<2cm add: 0.03, adds: 0.03, add(-s): 0.03
2023-06-21 09:58:13,182 : TEST ENDING:  add_loss:0.1083 add:0.2023 gadd_loss:0.0000 gadd:0.0000 add_auc:76.2534 gadd_auc:10.1450

I don't fully understand what the add_loss, add, gadd_loss, gadd, add_auc, and gadd_auc values in the final program output represent. Could you please explain them in detail?
Additionally, your paper also mentions other metrics like VSD. Does the open-source code include evaluation code for these metrics as well? Thank you very much for your response.
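
For context, the add / adds numbers in this output most likely follow the standard ADD / ADD-S metrics of Hinterstoisser et al.: ADD averages the distance between corresponding model points transformed by the ground-truth and predicted poses, and ADD-S (used for symmetric objects) averages the distance from each ground-truth point to the closest predicted point. Below is a minimal sketch of the two definitions; the function names and the assumption of (N, 3) model points in meters are illustrative, not this repository's actual API.

```python
import numpy as np
from scipy.spatial import cKDTree

def transform(pts, R, t):
    """Apply a rigid transform (R: 3x3, t: (3,)) to an (N, 3) point cloud."""
    return pts @ R.T + t

def add_metric(model_pts, R_gt, t_gt, R_pred, t_pred):
    """ADD: mean distance between *corresponding* transformed model points."""
    gt = transform(model_pts, R_gt, t_gt)
    pred = transform(model_pts, R_pred, t_pred)
    return np.linalg.norm(gt - pred, axis=1).mean()

def adds_metric(model_pts, R_gt, t_gt, R_pred, t_pred):
    """ADD-S: mean distance to the *closest* predicted point (symmetric objects)."""
    gt = transform(model_pts, R_gt, t_gt)
    pred = transform(model_pts, R_pred, t_pred)
    dists, _ = cKDTree(pred).query(gt, k=1)
    return dists.mean()
```

Under this reading, the "<2cm" rows report the fraction of frames whose ADD(-S) error falls below a fixed 2 cm threshold.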

@mate-huaboy
Author

Hi, the results above were obtained on the "correct_symmetry_ids" branch. When I ran the same test on the "master" branch, the outcomes closely matched those reported in your paper; the program yielded the following output: TEST ENDING: add_loss:0.0265 add:0.0545 gadd_loss:0.0000 gadd:0.0000 add_auc:93.4679 gadd_auc:81.9997.
However, I have an additional question: would it be possible to publicly share the evaluation code for the metrics used in other papers? That would make it much easier for us to carry out accurate comparisons. Thanks!
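
For what it's worth, add_auc is presumably the area under the accuracy-vs-threshold curve of the per-frame ADD errors, computed in the PoseCNN style: sweep a distance threshold from 0 to 10 cm, measure the fraction of frames whose error falls below it, and integrate, normalizing so that a perfect estimator scores 100. A minimal sketch under that assumption (all names here are illustrative):

```python
import numpy as np

def compute_auc(errors, max_threshold=0.10, num_steps=1000):
    """Area under the accuracy-threshold curve, in [0, 100].

    errors: per-frame ADD or ADD-S distances in meters.
    """
    errors = np.asarray(errors)
    thresholds = np.linspace(0.0, max_threshold, num_steps)
    # Accuracy at each threshold: fraction of frames under that error.
    accuracies = [(errors < t).mean() for t in thresholds]
    # Normalize by the ideal area (accuracy 1 at every threshold).
    return 100.0 * np.trapz(accuracies, thresholds) / max_threshold

# Example: an add_auc of 93.47 would mean the accuracy curve covers
# about 93% of the ideal area up to the 10 cm cutoff.
```

gadd_auc would then be the same integral computed over the paper's GADD errors instead of ADD.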

@GANWANSHUI
Owner

We did not compare against metrics other than ADD(-S); this benchmark may provide the related code for evaluating the other metrics.
