pytorch/torch/distributed/algorithms/ddp_comm_hooks
Vishwa Raj Singh 9d7a0869f0 Make DDP Quantization hooks backend Agnostic (#138816)
The current DDP quantization hook code uses the .cuda() API to move tensors and parameters onto the device, which restricts the DDP quantization hooks to the CUDA backend.
This change makes the code backend agnostic by moving tensors/parameters based on **tensor.device**.

Pull Request resolved: https://github.com/pytorch/pytorch/pull/138816
Approved by: https://github.com/kwen2501
2024-10-29 15:02:45 +00:00
__init__.py
ddp_zero_hook.py
debugging_hooks.py
default_hooks.py
mixed_precision_hooks.py
optimizer_overlap_hooks.py
post_localSGD_hook.py
powerSGD_hook.py
quantization_hooks.py