pytorch/torch

Note [TH abstraction violation]
~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~

TH/THC provide some hpp headers, which are proper C++ headers rather than
C headers.  These headers serve double duty as *internal implementation
detail* headers, whose contents should largely not be used by external
clients.

Ideally, we would not install these headers at all; instead, external
clients should use the public functions (declared in headers like
`THTensor.h`, NOT `THTensor.hpp`) to manipulate these structs.  However,
there are a few places in torch/csrc where we violate this abstraction.
They are marked with a pointer to this note.  Each of those sites will
have to be refactored when we refactor the guts of THTensor and related
structures.