pytorch/torch/optim
F-G Fernandez 7243264c61 fix: Allowed optimizers with more than 2 betas (#84486)
Hello there 👋

As discussed in #84485, this PR makes the LR schedulers in PyTorch more flexible about the optimizers they wrap. Currently, the scheduler code is incompatible with optimizers whose `betas` hyperparameter has a number of entries other than 2. This PR fixes that with minimal modifications.
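For illustration, here is a minimal sketch of the failure mode and of a fix in the spirit of this PR. The three-entry `betas` tuple is a hypothetical example, and the snippet paraphrases rather than reproduces the actual `lr_scheduler.py` change:

```python
# Schedulers such as OneCycleLR cycle momentum by rewriting the first
# beta in each param group. Assume a hypothetical optimizer whose
# `betas` has three entries (not an actual torch.optim class).
betas = (0.9, 0.99, 0.999)
new_momentum = 0.85

# Before: unpacking assumed exactly two betas, so a 3-beta optimizer
# raised "ValueError: too many values to unpack".
try:
    _, beta2 = betas
    betas = (new_momentum, beta2)
except ValueError as e:
    print(f"two-beta assumption fails: {e}")

# After: replace only the first beta and keep the rest, so any
# number of betas works.
betas = (new_momentum, *betas[1:])
print(betas)  # (0.85, 0.99, 0.999)
```

The key design point is that the scheduler only ever needs to touch the first beta (the momentum-like term), so splatting the remaining entries through unchanged keeps the behavior identical for standard two-beta optimizers while no longer constraining the tuple's length.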

Fixes #84485

Any feedback is welcome!

Pull Request resolved: https://github.com/pytorch/pytorch/pull/84486
Approved by: https://github.com/Lezcano, https://github.com/soulitzer
2022-09-06 19:24:10 +00:00
_multi_tensor
__init__.py
__init__.pyi
_functional.py
adadelta.py
adadelta.pyi
adagrad.py
adagrad.pyi
adam.py Make sure that we can load old optimizer checkpoint (#83588) 2022-08-17 15:08:05 +00:00
adam.pyi
adamax.py
adamax.pyi
adamw.py
adamw.pyi
asgd.py [optim] asgd : handle complex params as independent real params (#84472) 2022-09-06 16:58:42 +00:00
asgd.pyi
lbfgs.py
lbfgs.pyi
lr_scheduler.py fix: Allowed optimizers with more than 2 betas (#84486) 2022-09-06 19:24:10 +00:00
lr_scheduler.pyi
nadam.py
nadam.pyi
optimizer.py
optimizer.pyi
radam.py
radam.pyi
rmsprop.py [optim] rmsprop: handle complex params as independent real params (#83860) 2022-08-22 21:55:01 +00:00
rmsprop.pyi
rprop.py [optim] rprop: handle complex params as independent real params (#83858) 2022-08-23 08:39:35 +00:00
rprop.pyi
sgd.py Make sure that we can load old optimizer checkpoint (#83588) 2022-08-17 15:08:05 +00:00
sgd.pyi
sparse_adam.py
sparse_adam.pyi
swa_utils.py
swa_utils.pyi