pytorch/docs/source/notes
cpatru 6d896cb545 Update faq.rst so OOM section mentions checkpoint (#62709)
Summary:
This FAQ has a section on CUDA OOMs that consists largely of don'ts, which limits the modeling options it offers. Deep networks can exhaust memory during training because activations are cached for the backward pass.
This is a known problem with a known solution: trade compute for memory via checkpointing.
The FAQ should mention it.
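As a minimal sketch of the technique the commit message refers to, `torch.utils.checkpoint.checkpoint_sequential` drops the activations inside each checkpointed segment and recomputes them during the backward pass, trading extra compute for lower peak memory. The model and shapes below are illustrative, not taken from the FAQ:

```python
# Sketch: trading compute for memory with activation checkpointing.
# Interior activations of each segment are not stored; they are
# recomputed during backward, reducing peak memory use.
import torch
import torch.nn as nn
from torch.utils.checkpoint import checkpoint_sequential

model = nn.Sequential(
    nn.Linear(128, 256), nn.ReLU(),
    nn.Linear(256, 256), nn.ReLU(),
    nn.Linear(256, 128),
)

# Input must require grad so the recomputation path is differentiable.
x = torch.randn(32, 128, requires_grad=True)

# Split the sequential model into 2 segments; only the segment
# boundaries keep their activations.
out = checkpoint_sequential(model, 2, x)
out.sum().backward()
```

For non-sequential models, the lower-level `torch.utils.checkpoint.checkpoint(fn, *args)` wraps an arbitrary function the same way.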

Pull Request resolved: https://github.com/pytorch/pytorch/pull/62709

Reviewed By: nairbv

Differential Revision: D30103326

Pulled By: ezyang

fbshipit-source-id: 3a8b465a7fbe19aae88f83cc50fe82ebafcb56c9
2021-08-05 07:40:08 -07:00
amp_examples.rst
autograd.rst clarify default value of requires_grad for tensors (#61038) 2021-07-12 12:57:37 -07:00
broadcasting.rst
cpu_threading_runtimes.svg
cpu_threading_torchscript_inference.rst
cpu_threading_torchscript_inference.svg
cuda.rst Add docstrings for save_on_cpu hooks (#62410) 2021-08-03 17:53:45 -07:00
ddp.rst
extending.rst
faq.rst Update faq.rst so OOM section mentions checkpoint (#62709) 2021-08-05 07:40:08 -07:00
gradcheck.rst
hip.rst Add note on torch.distributed backends on ROCm (#58975) 2021-07-10 03:51:19 -07:00
large_scale_deployments.rst
modules.rst
multiprocessing.rst
randomness.rst
serialization.rst
windows.rst