pytorch/torch
2022-07-09 00:54:42 +00:00
_C [cuDNN V8 API] (reopen 2) Allow the number of kernels profiled under torch.backends.cudnn.benchmark = True to be limited (#78299) 2022-07-07 23:25:23 +00:00
_C_flatbuffer
_decomp Revert "Make kl_div a composite function. (#80334)" 2022-07-06 17:51:06 +00:00
_lazy python bindings for create_metric_report (#79679) 2022-07-06 20:06:17 +00:00
_masked
_prims [primTorch] Elementwise unary ops vi (#79526) 2022-07-08 15:17:45 +00:00
_refs Register unregistered refs and add a test to check registration (#80497) 2022-07-08 16:29:52 +00:00
_subclasses fix overload ambiguity with functional ops; fix _foreach op grouping (#80556) 2022-07-06 12:45:11 +00:00
amp
ao [Quant][fx][bc-breaking] Do not move models to CPU in convert (#80555) 2022-07-08 19:23:57 +00:00
autograd
backends [cuDNN V8 API] (reopen 2) Allow the number of kernels profiled under torch.backends.cudnn.benchmark = True to be limited (#78299) 2022-07-07 23:25:23 +00:00
contrib
cpu
csrc Reland: Enable dim=None for torch.sum (#79881) 2022-07-09 00:54:42 +00:00
cuda
distributed Add __all__ to torch.distributed, futures, fx, nn, package, benchmark submodules (#80520) 2022-07-08 14:31:24 +00:00
distributions
fft
futures Add __all__ to torch.distributed, futures, fx, nn, package, benchmark submodules (#80520) 2022-07-08 14:31:24 +00:00
fx Prims+NvFuser Backend Prototype (#80591) 2022-07-08 19:53:03 +00:00
jit Reland: Enable dim=None for torch.sum (#79881) 2022-07-09 00:54:42 +00:00
legacy
lib
linalg Revert "[Array API] Add linalg.vecdot (#70542)" 2022-07-08 22:56:51 +00:00
monitor
multiprocessing
nested
nn Add __all__ to torch.distributed, futures, fx, nn, package, benchmark submodules (#80520) 2022-07-08 14:31:24 +00:00
onnx [ONNX] Add quantization support to _avg_pool opset 9 and clean up (#79793) 2022-07-08 23:14:01 +00:00
optim Revert "Adding maximize to ASGD (#80323)" 2022-07-08 13:35:31 +00:00
package Add __all__ to torch.distributed, futures, fx, nn, package, benchmark submodules (#80520) 2022-07-08 14:31:24 +00:00
profiler [Profiler] Add Pattern that detects extra cuda copy (#80572) 2022-07-07 20:22:42 +00:00
quantization
sparse Add spdiags sparse matrix initialization (#78439) 2022-07-01 01:11:54 +00:00
special
testing Reland: Enable dim=None for torch.sum (#79881) 2022-07-09 00:54:42 +00:00
utils [DataLoader] Locking lower ranks seed recipients (#81071) 2022-07-08 18:53:45 +00:00
__config__.py
__future__.py
__init__.py
_appdirs.py
_classes.py
_deploy.py
_jit_internal.py
_linalg_utils.py
_lobpcg.py
_lowrank.py
_meta_registrations.py Add support for multiple inputs to out_wrapper and strict dtype checking (#80601) 2022-07-05 12:31:21 +00:00
_namedtensor_internals.py
_ops.py
_python_dispatcher.py
_six.py
_sources.py
_storage_docs.py
_tensor.py Remove split functional wrapper (#74727) 2022-07-08 19:21:22 +00:00
_tensor_docs.py Remove split functional wrapper (#74727) 2022-07-08 19:21:22 +00:00
_tensor_str.py
_torch_docs.py Reland: Enable dim=None for torch.sum (#79881) 2022-07-09 00:54:42 +00:00
_utils.py
_utils_internal.py
_VF.py
_vmap_internals.py
abi-check.cpp
CMakeLists.txt Revert "[Profiler] Include ActivityType from Kineto (#80750)" 2022-07-08 05:16:56 +00:00
custom_class.h
custom_class_detail.h
deploy.h
extension.h
functional.py Remove split functional wrapper (#74727) 2022-07-08 19:21:22 +00:00
hub.py
library.h
library.py Add doc string for Library.impl (#81047) 2022-07-08 18:18:14 +00:00
overrides.py Revert "[Array API] Add linalg.vecdot (#70542)" 2022-07-08 22:56:51 +00:00
py.typed
quasirandom.py
random.py
README.txt
return_types.py
script.h
serialization.py
storage.py Fix Module.share_memory error (#80843) 2022-07-05 15:17:36 +00:00
torch_version.py Add __all__ to torch.distributed, futures, fx, nn, package, benchmark submodules (#80520) 2022-07-08 14:31:24 +00:00
types.py

Note [TH abstraction violation]
~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~

TH/THC provide some hpp headers, which are proper C++ headers rather than
C headers.  These headers serve double duty as *internal implementation
detail* headers, whose contents should not be relied upon by external
clients.

Ideally, we would not install these headers at all; instead, you should
use public functions (in headers like `THTensor.h`, NOT `THTensor.hpp`)
to manipulate these structs.  However, there are a few places
in torch/csrc where we violate this abstraction.  They are marked with
a pointer to this note.  Each of those sites will have to be refactored
when we refactor the guts of THTensor and related structures.