pytorch/docs/source/fsdp.rst
Andrew Gu 9d9267c6f7 [FSDP()][3/N] Refactor public APIs (#87917)
- This PR defines a new `api.py` meant to hold the public API for FSDP (minus `FullyShardedDataParallel` itself). This is needed because several of the `_<...>_utils.py` files rely on the public API, and we cannot import from `torch.distributed.fsdp.fully_sharded_data_parallel` without a circular import (see the first sketch after this list). Calling the file `api.py` follows the convention used by `ShardedTensor`.
- This PR cleans up the wording in the `BackwardPrefetch`, `ShardingStrategy`, `MixedPrecision`, and `CPUOffload` docstrings.
- This PR adds the aforementioned classes to `fsdp.rst` to have them rendered in public docs.
- To abide by the public bindings contract (`test_public_bindings.py`), the aforementioned classes are removed from `fully_sharded_data_parallel.py`'s `__all__`. This is technically BC-breaking for anyone using `from torch.distributed.fsdp.fully_sharded_data_parallel import *`; however, that does not happen in any of our own internal or external code (see the second sketch after this list).
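A minimal sketch of the resulting layout (the `ShardingStrategy` stub here is illustrative, not the full definition):

```python
# torch/distributed/fsdp/api.py -- owns the public config classes and
# imports nothing from fully_sharded_data_parallel.py, so the internal
# _<...>_utils.py files can depend on it without an import cycle.
from enum import Enum, auto

class ShardingStrategy(Enum):
    FULL_SHARD = auto()
    SHARD_GRAD_OP = auto()
    NO_SHARD = auto()

# torch/distributed/fsdp/fully_sharded_data_parallel.py -- depends on
# api.py in one direction only, so there is no circular import.
from torch.distributed.fsdp.api import ShardingStrategy
```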
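The BC caveat from the last bullet, illustrated (this follows from Python's rule that a star-import binds only the names listed in `__all__` when it is defined):

```python
# Before this PR: the config classes were listed in
# fully_sharded_data_parallel.py's __all__, so a star-import bound them.
from torch.distributed.fsdp.fully_sharded_data_parallel import *  # ShardingStrategy bound

# After this PR: __all__ omits them, so the star-import no longer binds
# the names; import them from the package instead.
from torch.distributed.fsdp import ShardingStrategy
```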
Pull Request resolved: https://github.com/pytorch/pytorch/pull/87917
Approved by: https://github.com/mrshenli
2022-10-31 16:45:21 +00:00

FullyShardedDataParallel
========================

.. automodule:: torch.distributed.fsdp

.. autoclass:: torch.distributed.fsdp.FullyShardedDataParallel
    :members:

.. autoclass:: torch.distributed.fsdp.BackwardPrefetch
    :members:

.. autoclass:: torch.distributed.fsdp.ShardingStrategy
    :members:

.. autoclass:: torch.distributed.fsdp.MixedPrecision
    :members:

.. autoclass:: torch.distributed.fsdp.CPUOffload
    :members:
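
The classes above are all importable from ``torch.distributed.fsdp``. Below is
a minimal usage sketch (assuming a default process group has already been
initialized and a CUDA device is available):

.. code-block:: python

    import torch
    import torch.nn as nn
    from torch.distributed.fsdp import (
        BackwardPrefetch,
        CPUOffload,
        FullyShardedDataParallel as FSDP,
        MixedPrecision,
        ShardingStrategy,
    )

    # Wrap a toy module; each keyword argument corresponds to one of the
    # configuration classes documented above.
    model = FSDP(
        nn.Linear(8, 8).cuda(),
        sharding_strategy=ShardingStrategy.FULL_SHARD,
        cpu_offload=CPUOffload(offload_params=False),
        backward_prefetch=BackwardPrefetch.BACKWARD_PRE,
        mixed_precision=MixedPrecision(param_dtype=torch.float16),
    )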