pytorch/torch
_C Add test operator in upgrader entry (#69427) 2021-12-15 00:40:05 -08:00
_masked Strided masked var. (#68738) 2021-12-01 19:19:37 -08:00
ao [quant][graphmode][fx] Add qat module mapping support in backend_config_dict (#70287) 2021-12-30 23:30:34 -08:00
autograd Do not use ZeroTensor for inplace ops (#69998) 2021-12-23 15:52:34 -08:00
backends [Linalg] Add a runtime switch to let pytorch prefer a backend impl in linalg functions on GPU (#67980) 2021-12-03 19:06:30 -08:00
contrib
cpu
csrc [AutoAccept][Codemod][FBSourceClangFormatLinter] Daily arc lint --take CLANGFORMAT 2022-01-02 17:30:39 -08:00
cuda Add nvidia-smi memory and utilization as native Python API (#69104) 2021-12-08 10:33:23 -08:00
distributed fix typo in the docs of multiprocessing (#70448) 2021-12-28 09:58:47 -08:00
distributions [Reinstate] Wishart distribution (#70377) 2021-12-30 11:41:46 -08:00
fft
for_onnx
futures
fx [fx2trt] Add version check for ProfilingVerbosity bulider config (#70286) 2021-12-30 19:59:25 -08:00
jit [Operator Versioning][Edge] Codegen upgrader_mobile.cpp (#69194) 2021-12-16 10:29:35 -08:00
legacy
lib
linalg torch.linalg routines return torch.linalg.LinAlgError when a numerical error in the computation is found. (#68571) 2021-12-23 10:53:26 -08:00
multiprocessing make ProcessException pickleable (#70118) 2021-12-30 09:09:55 -08:00
nn Added antialias flag to interpolate (CPU only, bicubic) (#68819) 2021-12-29 14:04:43 -08:00
onnx [ONNX] Add BFloat16 type support when export to ONNX (#66788) 2021-12-14 12:23:32 -08:00
optim fix typo in adam docs (#70387) 2021-12-28 07:35:39 -08:00
package
profiler Add low level torch.profiler.kineto_profile base class (#63302) 2021-12-14 14:47:43 -08:00
quantization [quant][fx][graphmode][be] Change the type for output of convert to be torch.nn.Module (#69959) 2021-12-29 20:33:32 -08:00
sparse
special
testing add BFloat16 support for AdaptiveAvgPool2d on CPU (#56902) 2021-12-30 11:58:37 -08:00
utils [DataPipe] removing unbatch_level from .groupby (#70249) 2021-12-22 07:13:12 -08:00
__config__.py
__future__.py
__init__.py expose return_types in Python (#66614) 2021-12-06 09:05:29 -08:00
_appdirs.py
_classes.py
_deploy.py
_jit_internal.py Back out "Make TorchScript Preserve Fully Qualified Class Name for Python Exceptions" 2021-12-27 09:11:46 -08:00
_linalg_utils.py
_lobpcg.py
_lowrank.py
_namedtensor_internals.py
_ops.py
_python_dispatcher.py
_six.py
_sources.py
_storage_docs.py
_tensor.py [quant] Remove warning for quantized Tensor in __dir__ (#69265) 2021-12-02 10:30:36 -08:00
_tensor_docs.py Porting index_add to structured kernels, add an out variant (#65993) 2021-12-14 11:57:13 -08:00
_tensor_str.py added set_printoptions examples (#68324) 2021-12-14 07:40:52 -08:00
_torch_docs.py Porting index_add to structured kernels, add an out variant (#65993) 2021-12-14 11:57:13 -08:00
_utils.py
_utils_internal.py
_VF.py
_vmap_internals.py
abi-check.cpp
autocast_mode.py
CMakeLists.txt Codegen: Generate seperate headers per operator (#68247) 2021-12-14 06:40:08 -08:00
custom_class.h
custom_class_detail.h
deploy.h
extension.h
functional.py
hub.py
library.h
overrides.py Remove backward ops for mkldnn convolution (#70467) 2021-12-30 14:29:22 -08:00
py.typed
quasirandom.py
random.py
README.txt
return_types.py expose return_types in Python (#66614) 2021-12-06 09:05:29 -08:00
script.h
serialization.py
storage.py
torch_version.py
types.py

Note [TH abstraction violation]
~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~

TH/THC provide some `.hpp` headers, which are proper C++ headers rather
than C headers.  These headers serve double duty as *internal
implementation detail* headers: they get installed alongside the public
headers, but their contents should largely not be used by external
clients.

Ideally, we would not install these headers at all; instead, you should
use public functions (in headers like `THTensor.h`, NOT `THTensor.hpp`)
to manipulate these structs.  However, there are a few places
in torch/csrc where we violate this abstraction.  They are marked with
a pointer to this note.  Each of those sites will have to be refactored
when we refactor the guts of THTensor and related structures.