AutoYAHP Part 1: Cleanup the Algorithms for AutoYAHP #1056
Merged: ravi-mosaicml merged 44 commits into mosaicml:dev from ravi-mosaicml:remove_hparams_from_tests on May 25, 2022
Conversation
- Treat all Python warnings as errors in tests. Existing tests that threw warnings were either fixed (yay!) or annotated with `@pytest.mark.filterwarnings`.
- In BlurPool, raise an error if both `replace_maxpools` and `replace_convs` are False, since that makes the algorithm a no-op.
- Made the default optimizer warning in the trainer less verbose.
- Converted the BERT yaml to specify duration in terms of samples, fixing an issue where the warmup period, combined with the max duration, was a no-op.
- Moved `TestMetricSetter` to `tests/common` and renamed it `MetricSetterCallback`. Tests should not import from other test files (`tests/common` is OK).
- Removed `TestWandBLogger`, since the trainer tests cover the same behavior.
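A minimal sketch of the warnings-as-errors behavior (the helper function here is hypothetical): pytest's `filterwarnings = error` ini option promotes any stray warning to a test failure, which the standard `warnings` module can emulate directly:

```python
import warnings

def noisy_helper():
    """Hypothetical helper that still emits a deprecation warning."""
    warnings.warn("this API is deprecated", DeprecationWarning)
    return 42

# Equivalent of pytest's `filterwarnings = error` ini setting:
# promote every warning to an exception so tests cannot pass silently.
with warnings.catch_warnings():
    warnings.simplefilter("error")
    try:
        noisy_helper()
        surfaced = False
    except DeprecationWarning:
        surfaced = True

print(surfaced)  # the warning now fails loudly instead of being ignored
```

A test that legitimately warns can then opt out for just that test with `@pytest.mark.filterwarnings("ignore::DeprecationWarning")`.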
Fix progressive resizing
This PR refactors the algorithms and tests as required by AutoYAHP. It does not depend on AutoYAHP itself (a future PR will remove the underlying hparam classes).

- Refactored algorithm tests to not depend on hparams.
- Reformatted the factorize and selective backprop docstrings so they are parsed correctly by auto-yahp.
- Refactored `algorithm_settings.py` to not depend on hparams and to return a list of `pytest.param` objects for a `pytest.mark.parametrize`. This makes it more reusable, since it now includes the markers required for each algorithm.
- Moved `TestTrainerAlgorithms` into `tests/algorithms/test_algorithms_train.py`, since it tests the individual algorithms, not the trainer, and thus belongs in `tests/algorithms`.
- Added helper methods to scan a module for subclass implementations, check that the registry contains an entry, and test that a class is constructable from YAML.
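The `pytest.param` pattern can be sketched as follows (algorithm names and markers here are illustrative, not the actual contents of `algorithm_settings.py`): each entry bundles an algorithm with the markers it requires, so any parametrized test inherits the right skips automatically.

```python
import pytest

# Hypothetical sketch of the algorithm_settings pattern: return
# pytest.param objects that carry per-algorithm markers, so every
# parametrized test picks them up without repeating the conditions.
def get_algorithm_parametrization():
    return [
        pytest.param("blurpool", id="blurpool"),
        pytest.param(
            "selective_backprop",
            marks=pytest.mark.skipif(True, reason="illustrative conditional skip"),
            id="selective_backprop",
        ),
    ]

@pytest.mark.parametrize("alg_name", get_algorithm_parametrization())
def test_algorithm_trains(alg_name):
    # A real test would construct the algorithm and run a short fit.
    assert isinstance(alg_name, str)
```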
Silencing deepspeed deprecation warnings
Bump pyright
hanlint approved these changes on May 20, 2022:
LGTM, one comment on the algorithm_settings pattern, but that is not blocking and could be cleaned up later.
ravi-mosaicml added a commit that referenced this pull request on May 26, 2022:
Similar to #1056, this PR cleans up the callbacks and loggers for AutoYAHP. It does not depend on AutoYAHP itself (a future PR will remove the underlying hparam classes).

- Refactored callback and logger tests to not depend on hparams.
- Reformatted the docstrings so they are parsed correctly by auto-yahp.
- Added `callback_settings.py`, similar to `algorithm_settings.py`, to return a list of `pytest.param` objects for parameterization across callback tests. These param objects include appropriate markers (e.g. conditional skipping for wandb and mlperf; requiring that the memory monitor runs on GPU; ...).
- Moved `TestTrainerAssets` into `tests/callbacks/test_callbacks.py`, since it tests the individual callbacks and loggers, not the trainer, and thus belongs in `tests/callbacks`.
- Cleaned up the `MemoryMonitor` warnings when CUDA is not available. It now warns when the model is not on CUDA, to catch the edge case of CPU training when GPUs are available.
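The conditional-skip markers mentioned above can be sketched like this (callback names are hypothetical; the skip condition checks whether an optional dependency is installed):

```python
import importlib.util
import pytest

# Hypothetical sketch of the callback_settings pattern: each callback
# case carries the markers it needs, e.g. skipping when an optional
# dependency such as wandb is not installed in the environment.
def get_callback_parametrization():
    has_wandb = importlib.util.find_spec("wandb") is not None
    return [
        pytest.param(
            "wandb_logger",
            marks=pytest.mark.skipif(not has_wandb, reason="wandb not installed"),
            id="wandb_logger",
        ),
        pytest.param("file_logger", id="file_logger"),
    ]
```

Centralizing the markers this way means a new callback test only parametrizes over `get_callback_parametrization()` and never restates the dependency or GPU requirements itself.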