onnxruntime/orttraining/orttraining/python/training/optim
Justin Chu d79515041c
[Better Engineering] Bump ruff to 0.0.278 and fix new lint errors (#16789)
Stack from [ghstack](https://github.com/ezyang/ghstack) (oldest at
bottom):
* __->__ #16789

Bump ruff to 0.0.278 and fix new lint errors. I added `noqa` comments to all
existing RUF012 violations, which require mutable class variables to be
annotated with `ClassVar`, as well as to all PERF issues.
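As a minimal sketch of what RUF012 flags (class names here are hypothetical, not from this repo), the rule can be handled either by suppressing it with `noqa`, as this PR does, or by adding the `ClassVar` annotation the rule asks for:

```python
from typing import ClassVar


class Suppressed:
    # RUF012 would flag this mutable class attribute; the noqa comment
    # silences the rule, matching the approach taken in this PR.
    defaults = {"lr": 0.001}  # noqa: RUF012


class Annotated:
    # The fix RUF012 actually asks for: annotate the mutable class
    # attribute with ClassVar so it is explicitly a class-level value.
    defaults: ClassVar[dict] = {"lr": 0.001}
```

Both forms behave identically at runtime; the annotation only changes what type checkers and the linter see.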

Signed-off-by: Justin Chu <justinchu@microsoft.com>
2023-07-21 12:53:41 -07:00
| File | Last commit | Date |
| --- | --- | --- |
| __init__.py | Adopt linrtunner as the linting tool - take 2 (#15085) | 2023-03-24 15:29:03 -07:00 |
| _apex_amp_modifier.py | Enable pylint and numpy rules (#15218) | 2023-03-27 20:37:53 -07:00 |
| _ds_modifier.py | support latest deepspeed version for optim (#15682) | 2023-04-25 20:12:23 -07:00 |
| _megatron_modifier.py | [Better Engineering] Bump ruff to 0.0.278 and fix new lint errors (#16789) | 2023-07-21 12:53:41 -07:00 |
| _modifier.py | Replace call to deprecated torch.norm (#16758) | 2023-07-20 19:52:19 -07:00 |
| _modifier_registry.py | Adopt linrtunner as the linting tool - take 2 (#15085) | 2023-03-24 15:29:03 -07:00 |
| _multi_tensor_apply.py | Adopt linrtunner as the linting tool - take 2 (#15085) | 2023-03-24 15:29:03 -07:00 |
| config.py | [Better Engineering] Bump ruff to 0.0.278 and fix new lint errors (#16789) | 2023-07-21 12:53:41 -07:00 |
| fp16_optimizer.py | Adopt linrtunner as the linting tool - take 2 (#15085) | 2023-03-24 15:29:03 -07:00 |
| fused_adam.py | Adding this set_to_none flag to zero_grad to have signature parity with pytorch Adam (#16375) | 2023-06-19 17:27:41 -07:00 |
| lr_scheduler.py | Adopt linrtunner as the linting tool - take 2 (#15085) | 2023-03-24 15:29:03 -07:00 |