Mirror of https://github.com/saymrwulf/pytorch.git (synced 2026-05-14 20:57:59 +00:00)
Currently, whenever the sizes or strides of a `TensorImpl` are modified, we eagerly recompute the numel and memory-format flags. This is fine for static shapes, as it's all fast C++ code, but for symbolic shapes it runs slow Python code. This changes the `SymbolicShapeMeta` object to compute the derived quantities lazily, at the first request. This has the added benefit that we can now pass assumptions in `empty_tensor_restride`, which removes the need to compute some contiguity flags at all.

Pull Request resolved: https://github.com/pytorch/pytorch/pull/112785
Approved by: https://github.com/ezyang
ghstack dependencies: #112689, #112890
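The lazy-computation idea can be sketched in plain Python. This is an illustrative stand-in, not the actual `SymbolicShapeMeta` implementation: the class name `LazyShapeMeta` and its methods are hypothetical, and `functools.cached_property` is used to defer each derived quantity until its first access.

```python
from functools import cached_property
from math import prod

class LazyShapeMeta:
    """Hypothetical sketch: derived quantities (numel, contiguity) are
    computed lazily on first access instead of eagerly on every
    size/stride update."""

    def __init__(self, sizes, strides):
        self._sizes = tuple(sizes)
        self._strides = tuple(strides)

    @cached_property
    def numel(self):
        # Computed (and cached) only when first requested.
        return prod(self._sizes)

    @cached_property
    def is_contiguous(self):
        # Row-major contiguity check, evaluated lazily.
        expected = 1
        for size, stride in reversed(list(zip(self._sizes, self._strides))):
            if size != 1 and stride != expected:
                return False
            expected *= size
        return True

meta = LazyShapeMeta((2, 3, 4), (12, 4, 1))
```

For symbolic shapes, the win is that constructing the object stays cheap: the (potentially slow) derived computations only run if some consumer actually asks for `numel` or a contiguity flag.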