Update performance of video models #256
Conversation
Codecov Report
@@ Coverage Diff @@
## master #256 +/- ##
=======================================
Coverage 85.08% 85.08%
=======================================
Files 81 81
Lines 5276 5276
Branches 849 849
=======================================
Hits 4489 4489
Misses 648 648
Partials 139 139
@@ -93,7 +93,7 @@
         pipeline=test_pipeline))
 # optimizer
 optimizer = dict(
-    type='SGD', lr=0.1, momentum=0.9,
+    type='SGD', lr=0.3, momentum=0.9,
Double check that this lr is for 8 GPUs.
Yes, that is for 8 GPUs (since we use a batch size of 24 on each GPU).
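The exchange above is consistent with the common linear scaling rule (learning rate proportional to total batch size). That rule is an inference from the comment, not something the PR states; a minimal sketch:

```python
def scale_lr(base_lr: float, base_batch: int, target_batch: int) -> float:
    """Linear scaling rule: lr grows in proportion to the total batch size."""
    return base_lr * target_batch / base_batch

# The config's lr=0.3 corresponds to 8 GPUs x 24 videos/GPU = 192 clips per step.
# Training on 4 GPUs with the same per-GPU batch would call for roughly half the lr.
lr_4gpu = scale_lr(0.3, base_batch=8 * 24, target_batch=4 * 24)
```

The exact base/target numbers here are illustrative; only the 8-GPU, 24-per-GPU setup is confirmed by the thread.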
@@ -74,7 +74,7 @@
     dict(type='ToTensor', keys=['imgs'])
 ]
 data = dict(
-    videos_per_gpu=8,
+    videos_per_gpu=24,
V100?
Nope, slowfast_4x16 doesn't consume much memory; we can fit 24 samples on a 1080 Ti.
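For reference, the effective batch size implied by this thread can be computed directly; the 8-GPU count comes from the earlier lr discussion, and this sketch is not part of the PR itself:

```python
videos_per_gpu = 24  # per-GPU batch size from the diff above
num_gpus = 8         # from the earlier comment about the lr being set for 8 GPUs

# Effective (total) batch size seen by one optimizer step
effective_batch = videos_per_gpu * num_gpus
print(effective_batch)  # 192
```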
-        scales=(1, 0.875, 0.75, 0.66),
-        random_crop=False,
-        max_wh_scale_gap=1),
+    dict(type='RandomResizedCrop'),
Does it mean that RandomResizedCrop is better than MultiScaleCrop in this case?
Sure, we already validated that RandomResizedCrop can outperform MultiScaleCrop. The contribution of this PR is to show that training with videos doesn't lead to any performance drop.
Before renaming, this config was named tsn_r50_video_1x1x3_100e_kinetics400_rgb.py, and there was no performance score for that config in the README.
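As a rough illustration of the crop-transform swap discussed above, here is a hypothetical mmaction2-style pipeline fragment. The MultiScaleCrop arguments are copied from the diff; `input_size` and the surrounding pipeline entries are assumptions, not taken from this PR:

```python
# Old crop transform (removed in this PR); arguments from the diff above.
multi_scale_crop = dict(
    type='MultiScaleCrop',
    input_size=224,  # assumed; not shown in the diff hunk
    scales=(1, 0.875, 0.75, 0.66),
    random_crop=False,
    max_wh_scale_gap=1)

# New crop transform (added in this PR); library defaults apply.
random_resized_crop = dict(type='RandomResizedCrop')

# A train pipeline swaps one dict for the other, e.g. (surrounding steps assumed):
train_pipeline = [
    random_resized_crop,
    dict(type='Resize', scale=(224, 224), keep_ratio=False),
]
```

The swap is a one-line config change because mmaction2-style pipelines are lists of transform dicts dispatched by their `type` key.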
Update the performance of video models.
This PR validates that models trained on video clips and models trained on raw frames show no significant difference in performance.