Hello! I noticed that the original DiffDock paper states: "We trained our final score model on four 48GB RTX A6000 GPUs for 850 epochs (around 18 days)." The DiffDock-L paper, however, does not mention the specific GPUs or the total training time. Could you describe the training setup for the `workdir/v1.1/score_model` checkpoint you provide? Specifically, how many GPUs were used, and how many days did training take? If possible, could you also share the wall-clock time per epoch? On my machine, one epoch on the PDBBind dataset alone takes at least an hour and a half. Is that within expectations?
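For reference, the figure quoted from the original paper works out to roughly 30 minutes per epoch (18 days / 850 epochs) across four A6000s. A minimal sketch for measuring your own per-epoch wall-clock time, to compare against that baseline, could look like the following. Note that `train_one_epoch` and `loader` are hypothetical placeholders for your actual training step and PDBBind data loader, not names from the DiffDock codebase:

```python
import time

def timed_epochs(train_one_epoch, loader, n_epochs):
    """Run n_epochs of training, returning wall-clock seconds per epoch."""
    durations = []
    for _ in range(n_epochs):
        start = time.perf_counter()
        train_one_epoch(loader)  # placeholder for the real training loop
        durations.append(time.perf_counter() - start)
    return durations

# Stub "epoch" so the sketch runs standalone; substitute the real loop.
times = timed_epochs(lambda loader: sum(loader), range(1000), 3)
print(f"mean epoch time: {sum(times) / len(times):.6f} s")
```

Comparing the measured mean against ~30 min/epoch (and scaling for GPU count and hardware generation) should indicate whether 1.5 h/epoch on a single GPU is in the expected range.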