This diff introduces a change to the `all_reduce_coalesced` function in `distributed_c10d.py`: the function now accepts a single tensor as well as a list of tensors, allowing more flexibility in how it is called. This is syntactic sugar so that the compiler can emit calls to `all_reduce_coalesced` without first converting the input to a list.

Differential Revision: [D51433236](https://our.internmc.facebook.com/intern/diff/D51433236/)

Pull Request resolved: https://github.com/pytorch/pytorch/pull/115650

Approved by: https://github.com/wconstab

ghstack dependencies: #115523, #115302, #115648, #115649
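A rough sketch of what the change enables (not code from the PR itself; the backend choice, tensor values, and process-group setup are illustrative assumptions):

```python
import torch
import torch.distributed as dist

# Assumes a process group has already been initialized, e.g.:
#   dist.init_process_group("gloo", rank=rank, world_size=world_size)

t = torch.ones(4)

# Before this change, the input had to be a list of tensors:
dist.all_reduce_coalesced([t])

# After this change, a bare tensor is also accepted, behaving as a
# one-element list, so callers (and the compiler) can skip the wrapping:
dist.all_reduce_coalesced(t)
```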