
[WIP] Intermediate Tutorial #30


Draft: wants to merge 10 commits into base branch main.

Conversation

romesco (Contributor) commented Oct 26, 2020

No description provided.

@facebook-github-bot added the CLA Signed label on Oct 30, 2020. (This label is managed by the Facebook bot; authors need to sign the CLA before a PR can be reviewed.)
romesco (Contributor, Author) commented Oct 30, 2020

@tkornuta-nvidia I'm testing the DataLoaderConf class - seems to be working fine!

What are your thoughts on supporting the Dataset class, as well as MNIST from torchvision? I don't think we have settled yet on how to structure the repo when it pulls in classes from another project.
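To make the question concrete, a structured config mirroring torchvision.datasets.MNIST's constructor could look something like this sketch (MNISTDatasetConf is a made-up name here, not something defined in hydra-torch today):

    from dataclasses import dataclass

    @dataclass
    class MNISTDatasetConf:  # hypothetical; mirrors torchvision.datasets.MNIST args
        root: str = "./data"   # where the dataset is stored / downloaded to
        train: bool = True     # select the training split vs. the test split
        download: bool = True  # fetch the data if it's not already at root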

By the way, one other thing I've noticed:

Logic like this, IMO, doesn't really deserve to live in the main file:

    if use_cuda:
        cuda_kwargs = {"num_workers": 1, "pin_memory": True, "shuffle": True}
        train_kwargs.update(cuda_kwargs)
        test_kwargs.update(cuda_kwargs)

What would be really cool is to support a conditional config. I know that's not possible with Hydra at the moment, but something like:

    @dataclass
    class MNISTConf:
        no_cuda: bool = False
        ...
        checkpoint_name: str = "unnamed.pt"
        data_train: DataLoaderConf = DataLoaderConf(
            shuffle=True,
            num_workers=0 if {no_cuda} else 1,  # pseudocode: value conditioned on another field
            pin_memory=False,
        )
        data_test: DataLoaderConf = DataLoaderConf(
            shuffle=False,
            num_workers=0 if {no_cuda} else 1,  # pseudocode
        )
        model: MNISTNetConf = MNISTNetConf()
        ...

omry (Contributor) commented Oct 31, 2020

Quoting @romesco:

> Logic like this, IMO, doesn't really deserve to live in the main file:
>
>     if use_cuda:
>         cuda_kwargs = {"num_workers": 1, "pin_memory": True, "shuffle": True}
>         train_kwargs.update(cuda_kwargs)
>         test_kwargs.update(cuda_kwargs)

Yeah, this is definitely not great.
Maybe this will help, but it's a big hammer.
There is probably a better way to model this for this case.
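For illustration, one heavy-handed option in this direction is an OmegaConf custom resolver. A minimal sketch, assuming a recent OmegaConf (2.1+) with register_new_resolver; the resolver name "workers" is made up here, and this is not necessarily what omry had in mind:

    from omegaconf import OmegaConf

    # Hypothetical resolver: choose a DataLoader worker count from the no_cuda flag.
    OmegaConf.register_new_resolver("workers", lambda no_cuda: 0 if no_cuda else 1)

    cfg = OmegaConf.create(
        {
            "no_cuda": False,
            "data_train": {"num_workers": "${workers:${no_cuda}}"},
        }
    )
    print(cfg.data_train.num_workers)  # -> 1; becomes 0 when no_cuda=True

The interpolation is resolved lazily, so flipping no_cuda (e.g. from the command line) changes num_workers without any logic in the main file.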

romesco linked an issue on Dec 17, 2020 that may be closed by this pull request.
romesco self-assigned this on Dec 17, 2020.
Comment on lines +10 to 13
    ###### HYDRA BLOCK ######  # noqa: E266
    import hydra
    from hydra.core.config_store import ConfigStore
    from dataclasses import dataclass
Contributor:
I don't think this HYDRA BLOCK is a good idea.
Imports should be sorted normally; we can call out in the text which things are specific to Hydra.
I wouldn't like to see people copying this HYDRA BLOCK as if it's somehow needed.
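For reference, sorted normally (stdlib first, then third-party, as isort would arrange them), the block would just be:

    from dataclasses import dataclass

    import hydra
    from hydra.core.config_store import ConfigStore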

Contributor (Author):

Hmm, ok. I was thinking it's nice to fence off the code so it is clear what the diffs are. Maybe I can instead annotate it as a 'DIFF' block, something like the sketch below.
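Purely as a sketch of what that annotation might look like:

    ###### DIFF: hydra-specific additions ######  # noqa: E266
    import hydra
    from hydra.core.config_store import ConfigStore
    ###### END DIFF ######  # noqa: E266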

    @@ -32,13 +31,13 @@ class MNISTConf:
         adadelta: AdadeltaConf = AdadeltaConf()
         steplr: StepLRConf = StepLRConf(
             step_size=1
    -    ) # we pass a default for step_size since it is required, but missing a default in PyTorch (and consequently in hydra-torch)
    +    ) # we pass a default for step_size since it is required, but missing a default in PyTorch (and consequently in hydra-torch) # noqa: E501
Contributor:

Just break the comment into multiple lines?
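For example, something like this keeps every line under the length limit without the noqa:

    steplr: StepLRConf = StepLRConf(
        # we pass a default for step_size since it is required, but missing
        # a default in PyTorch (and consequently in hydra-torch)
        step_size=1
    )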

Labels: CLA Signed (managed by the Facebook bot; authors need to sign the CLA before a PR can be reviewed)
Projects: none yet
Linked issue that may be closed by this pull request: [tutorial] Intermediate MNIST
3 participants