Hi, the current code uses all-candidate evaluation for validation during training, which is much slower than beam-search evaluation (as noted in the README). You can disable automatic validation during training (e.g. by setting a large --validate-interval) and instead run evaluate.py manually to evaluate checkpoints on the validation set with beam search, which is much faster. Going further, you could port the beam-search evaluation code from eval_utils.py into the valid_step method of vqa_gen.py so it replaces the automatic validation during training; we haven't done that yet.
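For illustration, here is a minimal sketch of what that port might look like: a fairseq-style valid_step that decodes with beam search and scores exact-match accuracy, rather than scoring every answer candidate. The names (`beam_search_valid_step`, `ToyGenerator`, `toy_model`) and the sample layout are hypothetical stand-ins, not the actual code in eval_utils.py or vqa_gen.py.

```python
def beam_search_valid_step(sample, model, generator):
    """Validate by decoding with beam search and checking exact match,
    instead of scoring every candidate answer with the model."""
    hypos = generator.generate(model, sample)      # one best hypothesis per example
    n_correct = 0
    for best, target in zip(hypos, sample["targets"]):
        if best == target:                         # exact-match VQA scoring
            n_correct += 1
    return {"n_correct": n_correct, "n_total": len(hypos)}


class ToyGenerator:
    """Stand-in for a beam-search sequence generator: returns the
    model's top hypothesis for each question in the batch."""
    def generate(self, model, sample):
        return [model(q) for q in sample["questions"]]


def toy_model(question):
    # Pretend model that "answers" via a canned lookup.
    return {"what color is the sky?": "blue"}.get(question, "unknown")


sample = {
    "questions": ["what color is the sky?", "how many dogs?"],
    "targets": ["blue", "two"],
}
stats = beam_search_valid_step(sample, toy_model, ToyGenerator())
# stats == {"n_correct": 1, "n_total": 2}
```

The real version would reuse the sequence generator already built in eval_utils.py and aggregate `n_correct` / `n_total` across batches in the training loop's logging outputs.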
@HiSultryMan Hi, we have updated the code to support beam-search validation during VQA fine-tuning, which is much faster than the previous all-candidate validation. For details, please refer to PR #79 and pull the latest codebase.