pytorch/docs/source/notes
Codrin Popa d401732baa Added roundup_bypass_threshold_mb knobs to the PyTorch Caching Allocator (#85940)
Summary:
Added an additional roundup knob (``roundup_bypass_threshold_mb``) that bypasses rounding of the requested allocation size for allocation requests larger than the threshold value (in MB). This can help reduce the memory footprint when making large allocations that are expected to be persistent or long-lived.
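To make the behavior concrete, here is an illustrative Python sketch of the idea, not the actual C++ allocator code: the caching allocator normally rounds a request up to a block granularity, and a bypass threshold lets very large requests skip that coarse rounding. The granularity, alignment, and function names below are simplified assumptions for illustration only.

```python
def round_size(requested_bytes, bypass_threshold_mb=None,
               granularity=2 * 1024 * 1024):
    """Round a request up to the allocator's block granularity, unless it
    exceeds the bypass threshold, in which case only apply minimal
    alignment (sketch; sizes are assumptions, not PyTorch's real values)."""
    ALIGNMENT = 512  # minimal alignment, assumed for illustration
    if (bypass_threshold_mb is not None
            and requested_bytes > bypass_threshold_mb * 1024 * 1024):
        # Bypass coarse rounding: align only, don't round up to a big block.
        return (requested_bytes + ALIGNMENT - 1) // ALIGNMENT * ALIGNMENT
    # Default path: round up to the next multiple of the block granularity.
    return (requested_bytes + granularity - 1) // granularity * granularity

# Without the knob, a (100 MiB + 1 byte) request rounds up to the next
# 2 MiB block; with a 64 MB bypass threshold it is only 512-byte aligned,
# wasting far less memory on a large, long-lived allocation.
print(round_size(100 * 1024 * 1024 + 1))                           # 106954752
print(round_size(100 * 1024 * 1024 + 1, bypass_threshold_mb=64))   # 104858112
```

Per the PR, the knob is exposed alongside the other caching-allocator options documented in ``cuda.rst`` (the ``PYTORCH_CUDA_ALLOC_CONF`` environment variable); consult that page for the exact syntax, since the sketch above only models the rounding decision.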

Differential Revision: D39868104

Pull Request resolved: https://github.com/pytorch/pytorch/pull/85940
Approved by: https://github.com/zdevito
2022-10-03 16:56:22 +00:00
amp_examples.rst
autograd.rst Fix typo under docs directory and RELEASE.md (#85896) 2022-09-29 21:41:59 +00:00
broadcasting.rst
cpu_threading_runtimes.svg
cpu_threading_torchscript_inference.rst
cpu_threading_torchscript_inference.svg
cuda.rst Added roundup_bypass_threshold_mb knobs to the PyTorch Caching Allocator (#85940) 2022-10-03 16:56:22 +00:00
ddp.rst
extending.rst
faq.rst
gradcheck.rst
hip.rst
large_scale_deployments.rst
modules.rst
mps.rst
multiprocessing.rst
numerical_accuracy.rst [CUBLAS][TF32][CUDNN] Update numerical_accuracy.rst (#79537) 2022-09-07 18:30:26 +00:00
randomness.rst
serialization.rst
windows.rst