pytorch/torch

Latest commit: 40e2aadf47 Create __init__.py (#78629) by Nikita Shulga
To make `torch.utils.jit` a proper package; otherwise it will not be added to the wheel.

Pull Request resolved: https://github.com/pytorch/pytorch/pull/78629
Approved by: https://github.com/seemethere, https://github.com/xuzhao9, https://github.com/davidberard98
2022-06-03 18:14:21 +00:00
_C Add Caching of Conversion to Fake/Meta tensors in FakeTensorMode 2022-06-03 13:56:00 +00:00
_C_flatbuffer
_decomp Ported t decomp to become a ref (#78686) 2022-06-03 01:16:20 +00:00
_lazy
_masked
_prims Cleanup impl_nvfuser for unary ops (#78670) 2022-06-02 16:17:47 +00:00
_refs Ported t decomp to become a ref (#78686) 2022-06-03 01:16:20 +00:00
_subclasses Add Caching of Conversion to Fake/Meta tensors in FakeTensorMode 2022-06-03 13:56:00 +00:00
amp
ao [quant] follow up fixes for prepare_fx/prepare_qat_fx calls in classyvision (#105) (#78660) 2022-06-03 01:08:45 +00:00
autograd [forward ad] forbid non-float non-complex tangent and primal 2022-05-31 20:58:19 +00:00
backends [coreml] Introducing Quantization (#78108) 2022-06-01 17:10:17 +00:00
contrib
cpu
csrc Disable TracerWarnings on NNC opinfo tests 2022-06-03 18:11:12 +00:00
cuda Resolve TODO after Python 2 for custom_fwd (#78592) 2022-06-01 05:17:41 +00:00
distributed [FSDP] Allow different optim_input orders across ranks 2022-06-03 11:47:24 +00:00
distributions
fft
futures
fx
jit
legacy
lib
linalg
monitor
multiprocessing
nested
nn Avoid CPU Sync in SyncBatchNorm When Capturing CUDA Graphs 2022-06-03 04:32:57 +00:00
onnx [ONNX] Variable length argument support for quantized_args (#78775) 2022-06-03 01:31:19 +00:00
optim
package
profiler Add __all__ definition in torch.profiler to fix Pylance type check er… (#78553) 2022-06-02 16:48:36 +00:00
quantization
sparse
special Bessel functions (#78451) 2022-06-02 14:06:20 +00:00
testing Disable TracerWarnings on NNC opinfo tests 2022-06-03 18:11:12 +00:00
utils Create __init__.py (#78629) 2022-06-03 18:14:21 +00:00
__config__.py
__future__.py
__init__.py
_appdirs.py
_classes.py
_deploy.py
_jit_internal.py
_linalg_utils.py
_lobpcg.py
_lowrank.py
_meta_registrations.py repeat_interleaves meta function 2022-06-02 21:24:46 +00:00
_namedtensor_internals.py
_ops.py Revert "Autogen Tags enum, and allow specifying tags while defining an op" 2022-06-03 01:53:53 +00:00
_python_dispatcher.py
_six.py
_sources.py
_storage_docs.py
_tensor.py Support saving Bfloat16 tensors for XLA/HPU (#77534) 2022-06-01 14:19:09 +00:00
_tensor_docs.py to_padded_tensor doc v0 (#78657) 2022-06-03 14:27:31 +00:00
_tensor_str.py
_torch_docs.py Updating torch.log example 2022-06-03 00:57:35 +00:00
_utils.py [DOCS] Add docstring to _get_async_or_non_blocking in _utils.py (#78036) 2022-06-01 16:19:43 +00:00
_utils_internal.py
_VF.py
_vmap_internals.py
abi-check.cpp
CMakeLists.txt
custom_class.h
custom_class_detail.h
deploy.h
extension.h
functional.py
hub.py
library.h Revert "Autogen Tags enum, and allow specifying tags while defining an op" 2022-06-03 01:53:53 +00:00
library.py Add a check to ensure input func to Library.impl is callable 2022-06-02 16:55:39 +00:00
overrides.py Bessel functions (#78451) 2022-06-02 14:06:20 +00:00
py.typed
quasirandom.py
random.py
README.txt
return_types.py
script.h
serialization.py
storage.py Fix _free_weak_ref error (#78575) 2022-06-01 00:07:48 +00:00
torch_version.py
types.py

Note [TH abstraction violation]
~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~

TH/THC provide some `.hpp` headers, which are proper C++ headers rather
than C headers.  These headers serve double duty as *internal
implementation detail* headers, whose contents should largely not be
used by external clients.

Ideally, we would not install these headers at all; instead, clients should
use the public functions (declared in headers like `THTensor.h`, NOT
`THTensor.hpp`) to manipulate these structs.  However, there are a few
places in torch/csrc where we violate this abstraction.  They are marked
with a pointer to this note.  Each of those sites will have to be
refactored when we refactor the guts of THTensor and related structures.