relatively large performance gap on ScanObjectNN #13
Hey, thanks for raising the issue. I'd like to let you know that my schedule is a bit tight these days, so it might take some time to re-run these experiments. However, I'll respond to you as soon as I'm available. Having said that, can you give me some clarifications here?
Thanks.
Answer to Q1: Yes, I use the same ScanObjectNN data as you provide. Answer to Q2: I train the model from scratch and then evaluate the results; I do not use the pre-trained model. More information: I pre-trained the model on 6 GPUs in DistributedDataParallel mode. After that, few-shot learning was conducted on a single GPU, and the linear SVM classification experiments also ran on a single GPU. Feel free to ask me for more clarifications. Thanks.
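For reference, here is a minimal sketch of the multi-GPU pretraining setup described above, assuming a standard torchrun + NCCL DistributedDataParallel launch; the script name and flags in the comment are hypothetical, and the actual entry point in the codebase may differ.

```python
# Minimal DDP sketch (hypothetical script name; launched with e.g.
#   torchrun --nproc_per_node=6 train_crosspoint.py)
import os
import torch
import torch.distributed as dist
from torch.nn.parallel import DistributedDataParallel as DDP

def setup_ddp(model):
    # torchrun sets LOCAL_RANK for each of the worker processes
    dist.init_process_group(backend="nccl")
    local_rank = int(os.environ["LOCAL_RANK"])
    torch.cuda.set_device(local_rank)
    model = model.cuda(local_rank)
    # Gradients are averaged across GPUs, so the effective batch size is
    # the per-GPU batch size multiplied by the number of GPUs (6 here).
    return DDP(model, device_ids=[local_rank])
```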
@auniquesun @MohamedAfham
I am not sure whether it is normal, but I can tell you my situation: with 6 RTX 2080Ti GPUs, pretraining on ShapeNetRender takes about 6.5 hours.
@MohamedAfham Recently, I have run all the experiments in the codebase at least three times to make sure there were no obvious errors during my runs.
Some of the results are very encouraging, i.e. they are comparable with those reported in the paper, and sometimes even higher, e.g. the reproduced results on ModelNet. But some are not.
Specifically, for the downstream few-shot classification task on ScanObjectNN, the performance gap is relatively large; across the four few-shot settings I get 72.5 ± 8.33, 82.5 ± 5.06, 59.4 ± 3.95, and 67.8 ± 4.41 (a sketch of this evaluation protocol is given below).
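For clarity, this is a minimal sketch of a typical N-way K-shot evaluation on frozen features, using a linear SVM as the episode classifier. It follows the common protocol for these benchmarks; the exact episode sampling and classifier used in the codebase are assumptions.

```python
# Sketch of one N-way K-shot episode scored with a linear SVM on frozen
# features; results like "72.5 ± 8.33" are the mean ± std over many episodes.
import numpy as np
from sklearn.svm import SVC

def fewshot_episode(features, labels, n_way=5, k_shot=10, n_query=20, rng=None):
    rng = rng if rng is not None else np.random.default_rng()
    classes = rng.choice(np.unique(labels), size=n_way, replace=False)
    sup_x, sup_y, qry_x, qry_y = [], [], [], []
    for new_label, c in enumerate(classes):
        idx = rng.permutation(np.where(labels == c)[0])
        sup_x.append(features[idx[:k_shot]])
        sup_y += [new_label] * k_shot
        qry_x.append(features[idx[k_shot:k_shot + n_query]])
        qry_y += [new_label] * n_query
    clf = SVC(kernel="linear").fit(np.concatenate(sup_x), sup_y)
    return clf.score(np.concatenate(qry_x), qry_y)
```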
For the downstream linear SVM classification task on ScanObjectNN, the reproduced accuracy is 75.73%. All experiments use the DGCNN backbone and the default settings, except for the batch size. In short, all of these results fall behind the performances reported on ScanObjectNN in the paper by a large margin. At this point, I wonder whether there are any precautions to take when experimenting on ScanObjectNN, and what the possible reasons might be. Could you provide some suggestions? Thank you.
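And a minimal sketch of the linear SVM evaluation mentioned above, assuming features are extracted from the frozen pretrained encoder for the whole training and test splits; the encoder's output shape and the loader setup are assumptions, not the repository's exact code.

```python
# Sketch: fit a linear SVM on frozen encoder features of the ScanObjectNN
# training split and report accuracy on the test split (e.g. 0.7573 ~ 75.73%).
import numpy as np
import torch
from sklearn.svm import SVC

@torch.no_grad()
def extract_features(encoder, loader, device="cuda"):
    encoder.eval()
    feats, labels = [], []
    for points, label in loader:
        f = encoder(points.to(device))   # assumed to return a (B, D) global embedding
        feats.append(f.cpu().numpy())
        labels.append(label.numpy())
    return np.concatenate(feats), np.concatenate(labels)

def linear_svm_accuracy(encoder, train_loader, test_loader):
    train_f, train_y = extract_features(encoder, train_loader)
    test_f, test_y = extract_features(encoder, test_loader)
    clf = SVC(kernel="linear").fit(train_f, train_y)
    return clf.score(test_f, test_y)
```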