pytorch/torch/optim
Masaki Kozuki 5f26df0345 resubmit: "resubmit: [mta] APEX style Fused Adam (#81705) (#85507)" (#85739)
Move the pow implementations in `aten/src/ATen/native/cuda/PowKernel.cu` (lines 21-66 at commit 849b08f14b) into a new header file and let FusedAdam reuse them, to hopefully tame MSVC.

cc @ngimel @ptrblck
Pull Request resolved: https://github.com/pytorch/pytorch/pull/85739
Approved by: https://github.com/ngimel
2022-09-29 16:58:59 +00:00
_multi_tensor
__init__.py
__init__.pyi
_functional.py
adadelta.py
adadelta.pyi
adagrad.py
adagrad.pyi
adam.py resubmit: "resubmit: [mta] APEX style Fused Adam (#81705) (#85507)" (#85739) 2022-09-29 16:58:59 +00:00
adam.pyi
adamax.py
adamax.pyi
adamw.py
adamw.pyi
asgd.py
asgd.pyi
lbfgs.py
lbfgs.pyi
lr_scheduler.py CyclicLR memory leak fix (#85462) 2022-09-27 17:41:58 +00:00
lr_scheduler.pyi CyclicLR memory leak fix (#85462) 2022-09-27 17:41:58 +00:00
nadam.py
nadam.pyi
optimizer.py [Profiler] tracking Optimizer (part 2 of Record Optimizer) (#84920) 2022-09-28 02:48:07 +00:00
optimizer.pyi
radam.py
radam.pyi
rmsprop.py
rmsprop.pyi
rprop.py
rprop.pyi
sgd.py
sgd.pyi
sparse_adam.py
sparse_adam.pyi
swa_utils.py
swa_utils.pyi
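The APEX-style fused Adam landed by the commit above is reached from `adam.py` through the optimizer's `fused` constructor argument. A minimal usage sketch, guarding on CUDA availability since the fused path historically required CUDA tensors (the model and shapes here are arbitrary examples):

```python
import torch

# Toy model; fused Adam needs its parameters on a supported device,
# so fall back to the default implementation when CUDA is absent.
device = "cuda" if torch.cuda.is_available() else "cpu"
model = torch.nn.Linear(4, 2).to(device)

# fused=True selects the single-kernel APEX-style implementation;
# None keeps the default (non-fused) path on unsupported setups.
opt = torch.optim.Adam(
    model.parameters(),
    lr=1e-3,
    fused=True if device == "cuda" else None,
)

x = torch.randn(8, 4, device=device)
loss = model(x).pow(2).mean()
loss.backward()
opt.step()
opt.zero_grad()
```

Compared with the `foreach` (multi-tensor) path, the fused variant additionally folds the optimizer math into one kernel launch per step, which is the point of the `_multi_tensor`/MTA work referenced in the commit.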