
The Ap of batchsize 16 in one gpu #29

Open
ray-peng opened this issue Jul 4, 2023 · 1 comment

Comments
@ray-peng

ray-peng commented Jul 4, 2023

Excuse me, I retrained the model on one GPU with a batch size of 16, but I got a lower AP than the paper reports. I have read the issue similar to my problem (#5), but I don't know how to adjust the learning rate. Can you give me some advice?

@SPengLiang
Owner

Sorry for the late reply. You can increase the learning rate and the number of training epochs, and the learning-rate decay schedule should be adjusted accordingly.
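One common way to apply this advice is the linear scaling rule: scale the learning rate in proportion to the batch-size ratio, and stretch the decay milestones when you lengthen the schedule. A minimal sketch of that bookkeeping is below; the base values and the `scale_schedule` helper are hypothetical illustrations, not taken from this repository's config:

```python
def scale_schedule(base_lr, base_batch, new_batch,
                   base_epochs, milestones, epoch_factor=1.0):
    """Hypothetical helper: linear scaling rule for lr, plus
    proportional stretching of epochs and decay milestones.

    - lr is scaled by new_batch / base_batch
    - total epochs and milestone epochs are multiplied by epoch_factor
    """
    scale = new_batch / base_batch
    new_lr = base_lr * scale
    new_epochs = round(base_epochs * epoch_factor)
    new_milestones = [round(m * epoch_factor) for m in milestones]
    return new_lr, new_epochs, new_milestones


# Hypothetical example: reference config used batch 8; we train batch 16
# on one GPU and extend the schedule by 1.5x.
lr, epochs, ms = scale_schedule(base_lr=1e-3, base_batch=8, new_batch=16,
                                base_epochs=140, milestones=[90, 120],
                                epoch_factor=1.5)
# lr doubles to 2e-3; epochs -> 210; milestones -> [135, 180]
```

The same stretched milestones would then be passed to whatever step-decay scheduler the training script uses, so the decay still fires at the same relative points in training.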
