Allow cross validation with 'bring your own' Lightning models (without ensemble building) #483
Conversation
Looking good so far, but please increase test coverage. At the moment, there is no clear proof in the tests that submitting an AML job with a container model and crossval would actually work.
That's a good idea. I thought we had agreed to use unit tests that did not actually run an AML experiment, so that the test did not incur a time penalty and a cost. But I think an end-to-end test with known outputs from a known model is a safer bet. I'll do that.
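For reference, a minimal sketch of the kind of unit test discussed here: it checks the cross-validation setup without submitting an AzureML experiment, by asserting that the argument grid handed to HyperDrive contains one entry per fold. The names `build_crossval_argument_grid` and `cross_validation_split_index` are hypothetical placeholders for illustration, not the repository's actual API; the end-to-end test with known outputs from a known model would complement a check like this.

```python
from typing import Dict, List


def build_crossval_argument_grid(number_of_cross_validation_splits: int) -> Dict[str, List[int]]:
    """Hypothetical helper: one grid point per fold, so HyperDrive would
    launch one child run per cross-validation split."""
    return {"cross_validation_split_index": list(range(number_of_cross_validation_splits))}


def test_grid_has_one_entry_per_split() -> None:
    # No AzureML submission needed: only the grid construction is checked.
    grid = build_crossval_argument_grid(5)
    assert grid["cross_validation_split_index"] == [0, 1, 2, 3, 4]
```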
Force-pushed from 6b73793 to 87d0e47.
Force-pushed from 9d97d43 to dba0588.
This is a combination of 36 commits:

1. Remove exception on LightningContainer ensembles with cross-validation.
2. Removing unnecessary lambda.
3. Removing block for local run.
4. Dealing with missing config for HyperDrive in LightningContainer cross-validation.
5. Old typo in comment.
6. Refactoring HyperDrive cross-validation support into WorkflowParams.
7. Property not blocked for cross-validation.
8. Inconsistent blank rows.
9. Tidy up.
10. Restoring block on offline segmentation cross-validation. AzureML is working, but offline is not. How come? The AzureML experiment runs 'offline' on Azure, so how does it pass? Anyway, enabling the block for now while writing the test.
11. Commenting out blocks on offline cross-validation for segmentation models. When I remove lines 290, 342, and 343 from run_ml.py, i.e. the blocks on offline cross-validation for segmentation models, I hit the error "'NoneType' object has no attribute 'number_of_cross_validation_splits'", because at line 305, in spawn_offline_cross_val_classification_child_runs, `for i in range(self.innereye_config.number_of_cross_validation_splits)` runs while self.innereye_config is None.
12. Parking offline, since it will only work for 1 GPU anyway.
13. Unused param better as _.
14. Unit test for LightningContainer get_hyperdrive_config.
15. MyPy fix.
16. Removing get_total_number_of_cros_validation_runs, a remnant of an old feature.
17. flake8 fixes.
18. Expanding abstract method documentation, as per #483 (comment).
19. Reverting, mypy errors. How can changing two abstract methods' doc-strings cause mypy errors?
20. Removing unneeded Optional.
21. We do need that property, but fixing typing via an exception feels all wrong.
22. Extending abstract method documentation.
23. Unit test for cross-validation Lightning changes to the runner.
24. Adding cross-validation to HelloContainer.
25. Correcting HelloContainer xval splits + comments.
26. Unit test for xval work in HelloContainer, but the test is not passing MyPy yet.
27. mypy fixes on unit test.
28. Updating changelog.
29. Testing that the validation sets add up correctly.
30. Finishing documentation.
31. Adding comments.
32. Refactoring HelloDataset to remove clumsy init method parameters.
33. Homegrown splitting -> sklearn.model_selection.KFold (see the sketch after this list).
34. Dropping notimplemented override.
35. Restoring override in correct place.
36. Refactoring get_parameter_search_hyperdrive_config, as per Anton's comment.

Follow-up fixes squashed in: moving more xval methods into LightningContainer; import fix; restoring get_hyperdrive_config for non-Lightning models; restoring the required sampler method too; reverting a cosmetic change; restoring a unit test.
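Commits #24, #29 and #33 above describe moving HelloContainer's cross-validation splitting to sklearn.model_selection.KFold and checking that the validation folds add up to the full dataset. The following is a minimal sketch of that idea, not the actual HelloDataset code: the helper name `get_xval_split`, the dataset shape, and the fixed random seed are illustrative assumptions.

```python
import numpy as np
from sklearn.model_selection import KFold


def get_xval_split(data: np.ndarray,
                   number_of_cross_validation_splits: int,
                   cross_validation_split_index: int) -> tuple:
    """Return the (train, val) rows of `data` for one fold of a KFold split.
    Hypothetical helper for illustration only."""
    folds = list(KFold(n_splits=number_of_cross_validation_splits,
                       shuffle=True, random_state=42).split(data))
    train_idx, val_idx = folds[cross_validation_split_index]
    return data[train_idx], data[val_idx]


if __name__ == "__main__":
    data = np.arange(100).reshape(50, 2)
    n_splits = 5
    # The validation folds should be disjoint and together cover the whole dataset.
    val_rows = np.concatenate(
        [get_xval_split(data, n_splits, i)[1] for i in range(n_splits)])
    assert sorted(map(tuple, val_rows)) == sorted(map(tuple, data))
    print("Validation folds add up to the full dataset.")
```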
Force-pushed from 491fe76 to 6686a1b.
When complete, this will close #476.
- Run PyCharm's code cleanup tools on your Python files. (I'm using VSCode.)
- Add Added/Changed/Removed/... entries to the changelog in the "Upcoming" section, and if needed a motivation for why that change was required.