Performance Benchmarks #37
How much is the overhead compared to other regularization methods, and how fast are the trained forward passes afterwards? Those are the two factors to understand. Let's get @jessebet on here to start discussing as well.
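A minimal sketch of how these two measurements could be taken, assuming a Tracker-era Flux setup; `model`, `x`, and the loss closures `loss_reg`/`loss_unreg` (identical except for the regularization term) are hypothetical names, not the actual benchmark code:

```julia
using Flux, Tracker, BenchmarkTools

ps = Flux.params(model)  # assumes old Tracker-backed Flux, where this is a Tracker.Params

# (1) Training overhead: time one gradient computation with and without
#     the regularization term in the loss.
@btime Tracker.gradient(() -> loss_reg($x), $ps)
@btime Tracker.gradient(() -> loss_unreg($x), $ps)

# (2) Trained forward pass: time inference alone on the same batch.
@btime $model($x)
```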
Updated the PDF with the FFJORD results on the Gaussian dataset:
The MNIST classification shows a similar trend, with reduced training time when the tolerance is low. Tightening the tolerance makes training slower overall, but inference time remains lower than for the unregularized model.
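For context, a hedged sketch of how the tolerance could be swept for such a run; the DiffEqFlux `NeuralODE` layer, the toy dynamics, and the MNIST batch `x` below are illustrative assumptions, not the actual benchmark script:

```julia
using Flux, DiffEqFlux, OrdinaryDiffEq

# Toy dynamics for MNIST-sized (784-dimensional) inputs.
dudt = Chain(Dense(784, 20, tanh), Dense(20, 784))

for tol in (1f-3, 1f-5, 1f-8)
    node = NeuralODE(dudt, (0.0f0, 1.0f0), Tsit5();
                     abstol = tol, reltol = tol, save_everystep = false)
    @time node(x)  # x: assumed 784 × batchsize Float32 batch
end
```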
Regarding performance comparisons with other methods: the papers report GPU training times, but I am currently unable to get the regularization working on GPUs (#42). Once jacobjinkelly/easy-neural-ode#2 is fixed, I can run these on CPU and measure the performance.
Proper benchmarks are available in the paper.
When using regularization, training doesn't scale well as the batch size increases.
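A sketch of how the batch-size scaling could be measured; `train_step!` is a hypothetical helper performing one regularized gradient update, not part of the repo:

```julia
using BenchmarkTools

for bs in (32, 64, 128, 256, 512)
    x = randn(Float32, 784, bs)      # dummy MNIST-sized batch
    @btime train_step!($model, $x)   # one regularized training step at this batch size
end
```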
All the benchmarks use the models from here, with Tracker.jl as the AD backend.