pytorch/torch
Sherlock Huang ac5a94789f Refactor lift_subgraph_as_module as a fx.passes.util function (#80292)
lift_subgraph_as_module can be shared between fuser_utils.py and splitter_utils.py
Pull Request resolved: https://github.com/pytorch/pytorch/pull/80292
Approved by: https://github.com/jjsjann123, https://github.com/842974287
2022-06-29 22:35:39 +00:00
_C [torch] Add more functions to __init__.pyi.in for torch._C for Node and Value (#79654) 2022-06-28 23:57:09 +00:00
_C_flatbuffer
_decomp Revert "Revert "formatted _decomp folder with black"" 2022-06-22 20:47:52 +00:00
_lazy
_masked masked logsumexp/logaddexp 2022-06-11 05:46:36 +00:00
_prims [primTorch] support one tensor and two scalars in _prims.where (#80146) 2022-06-29 19:58:31 +00:00
_refs [primtorch] add reference for clamp_min/clamp_max (#79821) 2022-06-29 14:12:23 +00:00
_subclasses [FakeTensor] Use the device of the meta tensor for fallback kernel (#80193) 2022-06-24 20:00:07 +00:00
amp Add __all__ to torch.nn.quantized, fx.passes, ao.nn and amp submodules (#80376) 2022-06-27 21:36:27 +00:00
ao [quant] Implement APoT fake quantization (#79845) 2022-06-28 18:15:26 +00:00
autograd use is_same_size in autograd init (#79553) 2022-06-15 19:49:42 +00:00
backends Deprecate torch.lu 2022-06-07 22:50:14 +00:00
contrib
cpu
csrc Add ComplexDouble scalar creation bindings to nvFuser's Python API (#80522) 2022-06-29 21:12:13 +00:00
cuda
distributed Revert "Add __all__ for torch.distributed and fx modules (#80460)" 2022-06-29 16:20:55 +00:00
distributions More stable computation of KL between two Bernoulli distributions (#79944) 2022-06-27 21:31:45 +00:00
fft
futures
fx Refactor lift_subgraph_as_module as a fx.passes.util function (#80292) 2022-06-29 22:35:39 +00:00
jit Use generators with all/any in torch/optim (#78142) 2022-06-24 17:23:45 +00:00
legacy
lib turn on -Werror=unused-variable in our Bazel CPU build 2022-06-11 02:46:34 +00:00
linalg Simplify and optimize linalg.solve 2022-06-11 04:06:40 +00:00
monitor
multiprocessing
nested
nn Bugfix/weakref (#80139) 2022-06-28 14:51:42 +00:00
onnx [ONNX] Fix potentially unbound variables (#79789) 2022-06-29 17:01:49 +00:00
optim Don't error if _warned_capturable_if_run_uncaptured not set (#80345) 2022-06-29 03:46:22 +00:00
package Add __all__ to various submodules in torch.fx, distributions, distributed, package (#80367) 2022-06-27 21:27:30 +00:00
profiler Add __all__ for torch.nn.modules, torch.distributed.elastic, torch.nn.utils submodules (#80240) 2022-06-27 17:11:12 +00:00
quantization fx quant: refactor qconfig setting out of find_matches 2022-06-17 18:52:00 +00:00
sparse
special torch.special.scaled_modified_bessel_k0 (#78900) 2022-06-29 14:53:37 +00:00
testing Add ComplexDouble scalar creation bindings to nvFuser's Python API (#80522) 2022-06-29 21:12:13 +00:00
utils [DataLoader] Close open streams in DataPipe on best effort basis (#78952) 2022-06-29 20:11:23 +00:00
__config__.py
__future__.py
__init__.py Updates TF32 docs (#79401) 2022-06-13 21:02:00 +00:00
_appdirs.py
_classes.py
_deploy.py
_jit_internal.py Reland "[pytorch][PR] Support dataclasses in TorchScript" take 2 (#74353) (#76771) 2022-06-07 21:44:55 +00:00
_linalg_utils.py
_lobpcg.py
_lowrank.py
_meta_registrations.py Ensure torch._refs registrations also get triggered on import torch (#80270) 2022-06-26 02:23:03 +00:00
_namedtensor_internals.py
_ops.py fix submodule imports by importing functions directly (#79368) 2022-06-22 08:01:23 +00:00
_python_dispatcher.py
_six.py
_sources.py Reland "[pytorch][PR] Support dataclasses in TorchScript" take 2 (#74353) (#76771) 2022-06-07 21:44:55 +00:00
_storage_docs.py
_tensor.py Add option for allowing non-fake inputs, add deepcopy impl 2022-06-17 19:36:26 +00:00
_tensor_docs.py Fix Tensor.scatter_add_ doc (#80223) 2022-06-27 19:57:53 +00:00
_tensor_str.py Move IPU tensors to the CPU for printing. (#79287) 2022-06-20 16:49:51 +00:00
_torch_docs.py MAINT: Harmonize argsort params with array_api (#75162) 2022-06-09 12:32:01 +00:00
_utils.py hook XPU device in _get_available_device_type (#76167) 2022-06-14 04:34:21 +00:00
_utils_internal.py
_VF.py
_vmap_internals.py
abi-check.cpp
CMakeLists.txt Make Wunused-local-typedef a hard error (#77918) 2022-06-09 18:14:01 +00:00
custom_class.h
custom_class_detail.h
deploy.h
extension.h
functional.py Deprecate torch.lu 2022-06-07 22:50:14 +00:00
hub.py
library.h Autogen Tags enum, and allow specifying tags while defining an op 2022-06-11 00:29:32 +00:00
library.py error when registering meta kernels to composite ops in core 2022-06-21 02:17:13 +00:00
overrides.py torch.special.scaled_modified_bessel_k0 (#78900) 2022-06-29 14:53:37 +00:00
py.typed
quasirandom.py
random.py
README.txt
return_types.py Simplify and optimize linalg.solve 2022-06-11 04:06:40 +00:00
script.h
serialization.py Fix code that triggers BytesWarning (#79868) 2022-06-21 01:12:21 +00:00
storage.py Add full support for serialization of MPS Tensors (#79465) 2022-06-14 17:54:30 +00:00
torch_version.py Move Tensor.grad back into C++ 2022-06-10 13:44:45 +00:00
types.py

Note [TH abstraction violation]
~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~

TH/THC provide some hpp headers, which are proper C++ headers rather than
C headers.  Although these headers are installed alongside the public ones,
they are *internal implementation detail* headers, whose contents should
largely not be used by external clients.

Ideally, we would not install these headers at all; instead, you should
use public functions (in headers like `THTensor.h`, NOT `THTensor.hpp`)
to manipulate these structs.  However, there are a few places
in torch/csrc where we violate this abstraction.  They are marked with
a pointer to this note.  Each of those sites will have to be refactored
when we refactor the guts of THTensor and related structures.
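
To make the boundary concrete, here is a hedged sketch (the accessor and
member names below follow the old TH naming conventions but are
illustrative rather than authoritative; the exact API varied across
versions):

  #include <TH/THTensor.h>    // public C API: struct is treated as opaque
  #include <TH/THTensor.hpp>  // internal C++ header: real struct definition

  // Respects the abstraction: manipulates the tensor only through the
  // public C API functions declared in THTensor.h.
  int64_t ndim_ok(THFloatTensor* t) {
    return THFloatTensor_nDimension(t);
  }

  // Violates the abstraction: depends on the internal C++ layout exposed
  // by THTensor.hpp, so it must be fixed up whenever the guts of THTensor
  // change.  The sites in torch/csrc that do this carry a pointer back to
  // this note so they can be found during that refactor.
  int64_t ndim_bad(THFloatTensor* t) {
    return t->dim();
  }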