pytorch/docs/source
Edward Z. Yang 5c6f5439b7 Implement SymBool (#92149)
We have known for a while that we should in principle support SymBool as a separate concept from SymInt and SymFloat (in particular, every distinct numeric type should get its own API). However, recent work with unbacked SymInts in, e.g., https://github.com/pytorch/pytorch/pull/90985 has made this a priority to implement. The essential problem is that our logic for computing the contiguity of tensors branches on the passed-in input sizes, and this causes us to require guards when constructing tensors from unbacked SymInts. Morally, this should not be a big deal, because we only really care about the regular (non-channels-last) contiguity of the tensor, which should be guaranteed since most people aren't calling `empty_strided` on the tensor. However, because we store a bool (not a SymBool, which prior to this PR didn't exist) on TensorImpl, we are forced to *immediately* compute these values, even if they end up not being used at all; the sketch below illustrates why the computation forces guards. In particular, even when a user allocates a contiguous tensor, we still must compute channels-last contiguity (as some contiguous tensors are also channels-last contiguous, but others are not).
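
To make the problem concrete, here is a rough Python paraphrase of the branchy contiguity computation (a sketch, not the actual C++ in TensorImpl): every comparison against a size or stride must produce a concrete bool, i.e., a guard, when the inputs are unbacked SymInts.

```python
def compute_contiguous(sizes, strides):
    # Rough paraphrase of the branchy logic: each `if` that inspects a
    # size or stride must produce a concrete bool, which means a guard
    # whenever sizes/strides are unbacked SymInts.
    expected_stride = 1
    for size, stride in zip(reversed(sizes), reversed(strides)):
        if size == 1:                  # guard
            continue                   # size-1 dims don't constrain strides
        if stride != expected_stride:  # guard
            return False
        expected_stride *= size
    return True

assert compute_contiguous([2, 3, 4], [12, 4, 1])
assert not compute_contiguous([2, 3, 4], [24, 4, 1])
```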

This PR implements SymBool, and makes TensorImpl use SymBool to store the contiguity information in ExtraMeta. There are a number of knock-on effects, which I discuss below.

* I introduce a new C++ type SymBool, analogous to SymInt and SymFloat. This type supports logical and, logical or, and logical negation. I support the bitwise operators on this class (but not the conventional logical operators) to make it clear that logical operations on SymBool are NOT short-circuiting (see the first sketch after this list). I also, for now, do NOT support implicit conversion of SymBool to bool (which would create a guard). This does not matter too much in practice, as in this PR I did not modify the equality operations (e.g., `==` on SymInt) to return SymBool, so all preexisting implicit guards did not need to be changed. I also introduced symbolic comparison functions `sym_eq`, etc. on SymInt to make it possible to create SymBool. The current implementation of the comparison functions unfortunately makes it easy to accidentally introduce guards when you do not mean to (as both `s0 == s1` and `s0.sym_eq(s1)` are valid spellings of the equality operation); in the short term, I intend to prevent excess guarding in this situation by unit testing; in the long term, making the equality operators return SymBool is probably the correct fix.
* ~~I modify TensorImpl to store SymBool for the `is_contiguous` fields and friends on `ExtraMeta`. In practice, this essentially meant reverting most of the changes from https://github.com/pytorch/pytorch/pull/85936 . In particular, the fields on ExtraMeta are no longer strongly typed; at the time I was particularly concerned about the giant lambda I was using as the setter getting a desynchronized argument order, but now that I have individual setters for each field, the only "big list" of boolean arguments is in the constructor of ExtraMeta, which seems like an acceptable risk. The semantics of TensorImpl are now that we guard only when you actually attempt to access the contiguity of the tensor via, e.g., `is_contiguous`. By and large, the contiguity calculations in the implementations now need to be duplicated (as the boolean version can short-circuit, but the SymBool version cannot); you should carefully review the new duplicate implementations. I typically use the `identity` template to disambiguate which version of the function I need, and rely on overloading to allow for implementation sharing. The changes to the `compute_` functions are particularly interesting; for most of the functions, I preserved their original non-symbolic implementation, and then introduced a new symbolic implementation that is branch-free (making use of our new SymBool operations). However, `compute_non_overlapping_and_dense` is special; see the next bullet.~~ This appears to cause performance problems, so I am leaving this to an update PR (the second sketch after this list shows the branch-free shape of the deferred computation).
* (Update: the Python-side pieces for this are still in this PR, but they are not wired up until later PRs.) While the contiguity calculations are relatively easy to write in a branch-free way, `compute_non_overlapping_and_dense` is not: it involves a sort on the strides. While in principle we could still make it go through by using a data-oblivious sorting network, this seems like too much complication for a field that is likely never used (because typically, it will be obvious that a tensor is non-overlapping and dense, because the tensor is contiguous). So we take a different approach: instead of trying to trace through the logic of this computation, we introduce a new opaque operator, IsNonOverlappingAndDenseIndicator, which represents all of the compute that would have been done here. This function returns the integer 0 if `is_non_overlapping_and_dense` would have returned `False`, and the integer 1 otherwise; it returns an integer rather than a boolean for technical reasons (Sympy does not easily allow defining custom functions that return booleans). The function itself only knows how to evaluate itself if all of its arguments are integers; otherwise it is left unevaluated (see the third sketch after this list). This means we can always guard on it (as `size_hint` will always be able to evaluate through it), but otherwise its insides are left a black box. We typically do NOT expect this custom function to show up in actual boolean expressions, because we will typically short-circuit it due to the tensor being contiguous. It's possible we should apply this treatment to all of the other `compute_` operations; more investigation is necessary. As a technical note, because this operator takes a pair of lists of SymInts, we need to support converting `ArrayRef<SymNode>` to Python, and I also unpack the pair of lists into a single flat list, because I don't know if Sympy operations can validly take lists of Sympy expressions as inputs. See, for example, `_make_node_sizes_strides`.
* On the Python side, we also introduce a SymBool class, and update SymNode to track bool as a valid pytype. There is some subtlety here: bool is a subclass of int, so one has to be careful about `isinstance` checks (in fact, in most cases I replaced `isinstance(x, int)` with `type(x) is int` for expressly this reason). Additionally, unlike C++, I do NOT define bitwise inverse on SymBool, because it does not do the correct thing when run on booleans, e.g., `~True` is `-2`. (For that matter, it doesn't do the right thing in C++ either, but at least in principle the compiler can warn you about it with `-Wbool-operation`, and so the rule is simple in C++: only use these operations if the types are statically known to be SymBool.) Alas, logical negation is not overridable in Python, so we have to introduce `sym_not`, which must be used in place of `not` whenever a SymBool can turn up. To avoid confusion with `__not__`, which might imply that `operator.__not__` is acceptable to use (it isn't), our magic method is called `__sym_not__`. The other bitwise operators, `&` and `|`, do the right thing with booleans and are acceptable to use (see the fourth sketch after this list).
* There is some annoyance working with booleans in Sympy. Unlike int and float, booleans live in their own algebra, and they support fewer operations than regular numbers. In particular, `sympy.expand` does not work on them. To get around this, I introduce `safe_expand`, which only calls expand on operations that are known to be expandable (see the final sketch below).
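
First sketch (for the C++ SymBool bullet): the short-circuiting issue is easiest to demonstrate in Python with a minimal stand-in class, not torch's actual implementation. `&` and `|` dispatch to magic methods with both operands already evaluated, while `and`/`or` force a `__bool__` conversion, which for a real SymBool is exactly the implicit guard we want to avoid.

```python
class FakeSymBool:
    """Stand-in for a symbolic bool; `expr` is just a printable expression."""
    def __init__(self, expr):
        self.expr = expr

    def __and__(self, other):
        # Both operands already exist here: no short-circuiting is possible.
        return FakeSymBool(f"({self.expr}) & ({other.expr})")

    def __or__(self, other):
        return FakeSymBool(f"({self.expr}) | ({other.expr})")

    def __bool__(self):
        # `and`, `or`, and `if` all route through here; for a real SymBool
        # this is where an implicit guard would be introduced.
        raise RuntimeError("implicit SymBool -> bool conversion (guard)")

a = FakeSymBool("s0 == s1")
b = FakeSymBool("s2 == s3")
print((a & b).expr)   # stays symbolic: (s0 == s1) & (s2 == s3)
# a and b             # would raise: `and` calls bool(a) to short-circuit
```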
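
Second sketch (for the deferred branch-free contiguity computation): shown here with plain Python ints and bools; in the actual deferred change, the sizes and strides would be SymInts and the accumulator a SymBool combined with the non-short-circuiting `&` and `|`.

```python
def compute_contiguous_branchless(sizes, strides):
    # No early returns: the result is accumulated with non-short-circuiting
    # operations, so unbacked SymInts would never force a guard here.
    result = True
    expected_stride = 1
    for size, stride in zip(reversed(sizes), reversed(strides)):
        # A dim is fine if it has size 1 or its stride matches expectations.
        result = result & ((size == 1) | (stride == expected_stride))
        expected_stride = expected_stride * size  # times 1 is a no-op for size-1 dims
    return result

# Same answers as the branchy version earlier in this description:
assert compute_contiguous_branchless([2, 3, 4], [12, 4, 1])
assert not compute_contiguous_branchless([2, 3, 4], [24, 4, 1])
```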
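
Third sketch (for the opaque indicator): a simplified, hypothetical version of such a Sympy function. `eager_check` here is a placeholder for the real non-overlapping-and-dense computation, and the real operator's evaluation rules may differ.

```python
import sympy

def eager_check(sizes, strides):
    # Placeholder for the real computation, which sorts dims by stride.
    dims = sorted(zip(strides, sizes))
    expected = 1
    for stride, size in dims:
        if size != 1:
            if stride != expected:
                return False
            expected *= size
    return True

class IsNonOverlappingAndDenseIndicator(sympy.Function):
    is_integer = True

    @classmethod
    def eval(cls, *args):
        # Evaluate only when every argument is a concrete integer;
        # otherwise return None so the expression stays an opaque black box.
        if all(isinstance(a, sympy.Integer) for a in args):
            dim = len(args) // 2
            sizes = [int(a) for a in args[:dim]]     # flat list: sizes first,
            strides = [int(a) for a in args[dim:]]   # then strides
            return sympy.Integer(1 if eager_check(sizes, strides) else 0)

s0 = sympy.Symbol("s0", positive=True, integer=True)
expr = IsNonOverlappingAndDenseIndicator(s0, 3, 3, 1)
print(expr)              # stays unevaluated while s0 is symbolic
print(expr.subs(s0, 2))  # 1: fully concrete arguments, so it evaluates
```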
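
Fourth sketch (for the Python SymBool subtleties): the `isinstance` and `~` pitfalls, demonstrated on plain Python values with no torch required.

```python
# bool is a subclass of int, so isinstance cannot tell them apart:
assert isinstance(True, int)   # passes: a bool "is" an int
assert type(True) is bool      # exact type checks distinguish them
assert type(True) is not int

# Bitwise inverse does the wrong thing on bools, which is why the
# Python SymBool deliberately does not define it:
assert ~True == -2             # not False!
assert ~False == -1

# & and | do behave correctly on bools, so SymBool can safely overload them:
assert (True & False) is False
assert (True | False) is True
```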
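
Final sketch: one way `safe_expand` could be written, assuming the expandable operations are restricted to the ordinary arithmetic nodes (the real predicate may differ).

```python
import sympy

def safe_expand(r):
    # Boolean-algebra nodes (And, Or, Not, relations) do not support
    # sympy.expand, so only expand expressions known to be expandable.
    if isinstance(r, (sympy.Add, sympy.Mul, sympy.Pow)):
        return sympy.expand(r)
    return r

x, y = sympy.symbols("x y")
print(safe_expand((x + y) ** 2))             # x**2 + 2*x*y + y**2
print(safe_expand(sympy.And(x > 0, y > 0)))  # passed through untouched
```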

TODO: this PR appears to greatly regress performance of symbolic reasoning. In particular, `python test/functorch/test_aotdispatch.py -k max_pool2d` performs really poorly with these changes. Need to investigate.

Signed-off-by: Edward Z. Yang <ezyang@meta.com>
Pull Request resolved: https://github.com/pytorch/pytorch/pull/92149
Approved by: https://github.com/albanD, https://github.com/Skylion007
2023-01-21 02:21:56 +00:00
| Name | Last commit | Date |
|------|-------------|------|
| _static/ | Move Dynamo docs back to core (#89769) | 2022-11-29 04:38:53 +00:00 |
| _templates/ | | |
| community/ | Update Persons of Interest (#90069) | 2022-12-02 23:06:57 +00:00 |
| dynamo/ | Fix/modernize dynamo docs (#92572) | 2023-01-19 16:15:31 +00:00 |
| elastic/ | | |
| notes/ | [autograd.Function] setup_context always appears on the Function (#92312) | 2023-01-18 02:55:42 +00:00 |
| rpc/ | | |
| scripts/ | Doc for Canonical Aten and Prims IR (#90644) | 2022-12-13 21:30:47 +00:00 |
| _dynamo.rst | Add torch._dynamo to docs (#89510) | 2022-11-23 16:33:13 +00:00 |
| amp.rst | | |
| autograd.rst | Expose autograd.graph.Node as an abstract base class (#91475) | 2023-01-18 00:20:13 +00:00 |
| backends.rst | [cuBLAS] Add an option to disable reduced precision reductions for BF16 GEMM (#89172) | 2022-12-21 18:58:28 +00:00 |
| benchmark_utils.rst | | |
| bottleneck.rst | | |
| checkpoint.rst | | |
| complex_numbers.rst | | |
| conf.py | Implement SymBool (#92149) | 2023-01-21 02:21:56 +00:00 |
| config_mod.rst | | |
| cpp_extension.rst | | |
| cpp_index.rst | | |
| cuda._sanitizer.rst | Fix typos under docs directory (#88033) | 2022-10-31 19:31:56 +00:00 |
| cuda.rst | Add Pluggable CUDA allocator backend (#86786) | 2022-11-23 17:54:36 +00:00 |
| cudnn_persistent_rnn.rst | | |
| cudnn_rnn_determinism.rst | | |
| data.rst | [DataLoader] Removing DataLoader2 related code (#88848) | 2022-11-11 22:27:01 +00:00 |
| ddp_comm_hooks.rst | | |
| deploy.rst | | |
| distributed.algorithms.join.rst | | |
| distributed.checkpoint.rst | [PT-D][Checkpointing] Move distributed checkpointing from torch.distributed._shard.checkpoint to torch.distributed.checkpoint (#88698) | 2022-11-16 21:06:38 +00:00 |
| distributed.elastic.rst | | |
| distributed.optim.rst | | |
| distributed.rst | [Doc][Distributed] Add missing functions to distributed.rst (#89905) | 2022-12-04 07:22:54 +00:00 |
| distributed.tensor.parallel.rst | Move tensor_parallel out to distributed.tensor folder (#89878) | 2022-11-30 22:13:10 +00:00 |
| distributions.rst | | |
| dlpack.rst | | |
| docutils.conf | | |
| fft.rst | | |
| fsdp.rst | [FSDP()][3/N] Refactor public APIs (#87917) | 2022-10-31 16:45:21 +00:00 |
| func.api.rst | [functorch] move batch_norm_replacement to torch.func (#91412) | 2023-01-12 19:15:41 +00:00 |
| func.batch_norm.rst | [functorch] move batch_norm_replacement to torch.func (#91412) | 2023-01-12 19:15:41 +00:00 |
| func.migrating.rst | [torch.func] Add migration guide from functorch (#91811) | 2023-01-17 22:14:42 +00:00 |
| func.rst | [torch.func] Add migration guide from functorch (#91811) | 2023-01-17 22:14:42 +00:00 |
| func.ux_limitations.rst | [torch.func] Add docs (#91319) | 2022-12-30 02:51:18 +00:00 |
| func.whirlwind_tour.rst | [torch.func] Add docs (#91319) | 2022-12-30 02:51:18 +00:00 |
| futures.rst | | |
| fx.rst | prepare removal of deprecated functionality in torch.testing (#87969) | 2022-11-02 14:04:48 +00:00 |
| hub.rst | | |
| index.rst | [torch.func] Setup torch.func, populate it with all transforms (#91016) | 2022-12-20 00:00:52 +00:00 |
| ir.rst | Doc for Canonical Aten and Prims IR (#90644) | 2022-12-13 21:30:47 +00:00 |
| jit.rst | | |
| jit_builtin_functions.rst | | |
| jit_language_reference.rst | | |
| jit_language_reference_v2.rst | | |
| jit_python_reference.rst | | |
| jit_unsupported.rst | | |
| jit_utils.rst | | |
| library.rst | | |
| linalg.rst | Add a note on the stability of linalg functions. (#88313) | 2022-11-07 22:44:23 +00:00 |
| masked.rst | Update masked.rst (#89758) | 2022-11-28 17:55:43 +00:00 |
| math-quantizer-equation.png | | |
| mobile_optimizer.rst | [Reland] Clean Up MobileOptimizerType Rewrite Flags Public API and Documentation (#92081) | 2023-01-14 17:06:00 +00:00 |
| model_zoo.rst | | |
| monitor.rst | | |
| multiprocessing.rst | | |
| name_inference.rst | Softmax added to tensor, torch and docs (#91292) | 2022-12-28 15:06:24 +00:00 |
| named_tensor.rst | | |
| nested.rst | Add a NestedTensor Readme (#91472) | 2023-01-06 14:44:55 +00:00 |
| nn.functional.rst | | |
| nn.init.rst | | |
| nn.rst | | |
| onnx.rst | [ONNX] Documentation for torch.onnx.find_mismatch (#90728) | 2023-01-11 23:58:57 +00:00 |
| onnx_diagnostics.rst | [ONNX] Document ONNX diagnostics (#88371) | 2022-11-16 19:21:46 +00:00 |
| onnx_supported_aten_ops.rst | | |
| optim.rst | Update optim.rst (#91195) | 2022-12-20 23:22:25 +00:00 |
| package.rst | | |
| pipeline.rst | | |
| profiler.rst | | |
| quantization-accuracy-debugging.rst | Fix typo under docs directory (#87583) | 2022-10-24 23:52:44 +00:00 |
| quantization-backend-configuration.rst | update quantization doc: add x86 backend as default backend of server inference (#86794) | 2022-12-02 02:10:25 +00:00 |
| quantization-support.rst | [Quant][docs] Move parts of BackendConfig tutorial (#91999) | 2023-01-13 05:59:22 +00:00 |
| quantization.rst | update quantization doc: add x86 backend as default backend of server inference (#86794) | 2022-12-02 02:10:25 +00:00 |
| random.rst | | |
| rpc.rst | | |
| signal.rst | Nuttall window (#90103) | 2022-12-16 09:05:53 +00:00 |
| sparse.rst | Add check-sparse-tensor-invariants flag to Context - 2nd try. (#92094) | 2023-01-13 14:50:33 +00:00 |
| special.rst | | |
| storage.rst | Deprecate TypedStorage, its derived classes, and all of their public methods (#85303) | 2022-11-08 18:11:01 +00:00 |
| tensor_attributes.rst | Reland "Add torch.utils.device_mode" (#91796) | 2023-01-09 20:57:12 +00:00 |
| tensor_view.rst | | |
| tensorboard.rst | | |
| tensors.rst | Rename Tensor._storage to Tensor.untyped_storage and update docs (#91414) | 2022-12-28 19:21:34 +00:00 |
| testing.rst | document torch.testing.assert_allclose (#89526) | 2022-12-01 11:22:50 +00:00 |
| torch.ao.ns._numeric_suite.rst | | |
| torch.ao.ns._numeric_suite_fx.rst | | |
| torch.overrides.rst | | |
| torch.rst | Implement SymBool (#92149) | 2023-01-21 02:21:56 +00:00 |
| type_info.rst | | |