Jane Xu a348975e00 Add opteinsum backend to give users control (#86219)
This achieves the same thing as https://github.com/pytorch/pytorch/pull/85908, but using backends instead of kwargs (which unfortunately break torchscript). This does mean we let go of numpy compatibility, BUT the win here is that users can control how opt_einsum handles their einsums!

The backend allows for... well, you should just read the docs:
```
.. attribute::  torch.backends.opteinsum.enabled

    A :class:`bool` that controls whether opt_einsum is enabled (``True`` by default). If so,
    torch.einsum will use opt_einsum (https://optimized-einsum.readthedocs.io/en/stable/path_finding.html)
    to calculate an optimal contraction path for faster performance.

.. attribute::  torch.backends.opteinsum.strategy

    A :class:`str` that specifies which strategy to try when ``torch.backends.opteinsum.enabled`` is ``True``.
    By default, torch.einsum will try the "auto" strategy, but the "greedy" and "optimal" strategies are
    also supported. Note that the "optimal" strategy is factorial in the number of inputs as it tries all
    possible paths. See more details in opt_einsum's docs
    (https://optimized-einsum.readthedocs.io/en/stable/path_finding.html).
```
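
For a concrete feel, here is a minimal sketch of how these knobs might be used. The module and attribute names and the valid strategy strings come from the docs above; the rest (shapes, the equation) is purely illustrative:

```python
import torch

# Three operands, so the contraction order actually matters.
a = torch.randn(3, 5)
b = torch.randn(5, 7)
c = torch.randn(7, 2)

# Path optimization is on by default, using the "auto" strategy.
print(torch.backends.opteinsum.enabled)  # True

# Exhaustively search all contraction orders (factorial in the
# number of inputs, so only viable for a handful of operands).
torch.backends.opteinsum.strategy = "optimal"
out = torch.einsum("ij,jk,kl->il", a, b, c)

# Opt out of path optimization entirely.
torch.backends.opteinsum.enabled = False
out = torch.einsum("ij,jk,kl->il", a, b, c)
```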

In trying (and failing) to land #85908, I discovered that jit script does NOT actually pull from Python's version of einsum (because it cannot support variadic args or kwargs). Thus I learned that jitted einsum does not subscribe to the new opt_einsum path calculation. Overall this is fine since jit script is being deprecated, but where is the best place to document this?
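
To illustrate the caveat, a hedged sketch (the claim that scripted einsum bypasses the Python wrapper is from the note above; I have not traced the dispatch path here): a scripted function calling torch.einsum compiles against the aten operator rather than the Python torch.einsum, so the backend settings above would not affect it.

```python
import torch

@torch.jit.script
def scripted_matmul(a: torch.Tensor, b: torch.Tensor) -> torch.Tensor:
    # Inside script, this resolves to the aten einsum operator, not the
    # Python torch.einsum wrapper, so per the note above it does not pick
    # up the opt_einsum path calculation.
    return torch.einsum("ij,jk->ik", a, b)

print(scripted_matmul(torch.randn(2, 3), torch.randn(3, 4)).shape)  # torch.Size([2, 4])
```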

## Test plan:
- added tests to CI
- locally tested that trying to set the strategy to something invalid errors properly (see the sketch after this list)
- locally tested that the tests pass even when opt-einsum is not installed
- locally tested that setting the strategy when opt-einsum is not installed also errors properly
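
A rough sketch of the invalid-strategy check (hypothetical: the exact exception type and message are my assumptions; the test plan only establishes that an error is raised):

```python
import torch

try:
    # Only "auto", "greedy", and "optimal" are documented as supported.
    torch.backends.opteinsum.strategy = "banana"
except Exception as e:  # the exact exception type is an assumption
    print(f"errored as expected: {e}")
```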
Pull Request resolved: https://github.com/pytorch/pytorch/pull/86219
Approved by: https://github.com/soulitzer, https://github.com/malfet
2022-10-05 06:33:25 +00:00