How does self-supervised training work? #1882
Unanswered · PushpakBhoge512 asked this question in Q&A · Replies: 0
I am struggling to grasp the idea of self-supervised training. I understand that the model is trained with some kind of image reconstruction / unmixing / similarity loss, but my confusion is on the data-preparation side: most of the dataset configs I see in the self-supervised section list some classes and also have train/val splits.
Do I need labels for this self-supervised training? Or do I just need two folders, with the train/val images placed directly in those folders at the root, without any label subfolders, and the num_classes setting simply ignored?
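To illustrate what I mean, this is the label-free layout I have in mind (the folder and file names here are just my assumption, not taken from any config):

```
data/
├── train/
│   ├── img_0001.jpg
│   ├── img_0002.jpg
│   └── ...
└── val/
    ├── img_0001.jpg
    └── ...
```

That is, images sit directly under train/ and val/, with no per-class subdirectories.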
Also, most of the configs are for transformer-based architectures. Can I just swap the backbone to ResNet50 (or any other backbone) and expect it to work?
Also, I want to pre-train ResNet50 on a large number of unlabeled images using this self-supervised technique and then use those weights in an mmdet model. Can I do that?
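For the last point, this is roughly what I imagine the downstream side looking like: an MMDetection model config whose backbone is initialized from the self-supervised checkpoint via `init_cfg`. The checkpoint path is a placeholder, and I am assuming the backbone weights have already been extracted into a standalone file; please correct me if the workflow is different.

```python
# Sketch (my assumption, not a verified recipe): point the mmdet backbone's
# init_cfg at the checkpoint produced by self-supervised pre-training.
# The checkpoint path below is a placeholder.
model = dict(
    backbone=dict(
        type='ResNet',
        depth=50,
        init_cfg=dict(
            type='Pretrained',
            checkpoint='work_dirs/selfsup/backbone_weights.pth',  # placeholder path
        ),
    ),
)
```

Is this the intended way to reuse the pre-trained weights, or is there a dedicated conversion step first?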
If there is a guide already written for this, that would be super useful.