pytorch/test/cpp
Jane Xu b90496eef5 [nn] zero_grad() set_to_none default True (#92731)
Attempts to fix #92656

BC-breaking! This changes the default of zero_grad in optim and in nn so that gradients are set to None instead of zeroed tensors. We are changing the default because it has proven perf wins and existing code has typically not regressed under this change.
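The behavior difference can be illustrated with a minimal pure-Python sketch; FakeParam and this standalone zero_grad are hypothetical stand-ins for illustration only, not the torch API:

```python
# Hypothetical sketch of the two zero_grad modes. FakeParam and this
# free-standing zero_grad are illustrative stand-ins, not torch classes.

class FakeParam:
    """Stand-in for a parameter whose .grad a backward pass populated."""
    def __init__(self):
        self.grad = [1.0, 2.0]

def zero_grad(params, set_to_none=True):
    # set_to_none=True (the new default): drop the grad buffers entirely,
    # freeing memory and letting the next backward allocate fresh ones.
    # set_to_none=False (the old behavior): zero the grads in place.
    for p in params:
        if p.grad is not None:
            if set_to_none:
                p.grad = None
            else:
                p.grad = [0.0] * len(p.grad)

params = [FakeParam()]
zero_grad(params)                     # new default: grad becomes None
assert params[0].grad is None

params = [FakeParam()]
zero_grad(params, set_to_none=False)  # old behavior: grad zeroed in place
assert params[0].grad == [0.0, 0.0]
```

One consequence for callers: code that unconditionally reads param.grad after zero_grad() must now handle None, which is the source of the BC break noted above.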

Pull Request resolved: https://github.com/pytorch/pytorch/pull/92731
Approved by: https://github.com/ngimel
2023-01-26 01:04:28 +00:00
api [nn] zero_grad() set_to_none default True (#92731) 2023-01-26 01:04:28 +00:00
c10d Allow Process Group to support multiple backends (#88330) (#90997) 2022-12-16 23:15:00 +00:00
common
dist_autograd set -Wsuggest-override for builds (#89852) 2022-12-19 22:08:47 +00:00
jit Implement SymBool (#92149) 2023-01-21 02:21:56 +00:00
lazy Revert "Remove deprecated torch.symeig (#70988)" 2023-01-24 19:03:40 +00:00
lite_interpreter_runtime [Vulkan + Profiler] Add Timestamp Adjustment Algorithm (#90672) 2022-12-19 20:01:07 +00:00
monitor
profiler [Profiler] Fix SOFT_ASSERT test to not raise on debug builds (#91464) 2022-12-30 05:31:03 +00:00
rpc
tensorexpr set -Wsuggest-override for builds (#89852) 2022-12-19 22:08:47 +00:00
__init__.py