pytorch/torch
Latest commit: b9d516138b by Scott Wolchok, 2022-06-28 21:07:54 +00:00
[PyTorch] Add test_modules test for TransformerEncoderLayer fast path (#78268)
Extend the existing TransformerEncoderLayer test to cover the fast path.
Differential Revision: D36564009 (https://our.internmc.facebook.com/intern/diff/D36564009/)
Pull Request resolved: https://github.com/pytorch/pytorch/pull/78268
Approved by: https://github.com/zrphercule
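The "fast path" here is the fused inference implementation that
TransformerEncoderLayer.forward can dispatch to in place of the ordinary
module-by-module computation.  Below is a minimal sketch of a configuration
that is typically eligible for it; the gating conditions and the sizes used
are assumptions for illustration, not taken from the PR:

    import torch
    import torch.nn as nn

    # Sketch: nn.TransformerEncoderLayer may dispatch to a fused native
    # kernel at inference time. The gating checked in forward() varies by
    # release; eval mode, no autograd, and batch_first inputs are the
    # typical requirements. All sizes below are arbitrary.
    layer = nn.TransformerEncoderLayer(d_model=64, nhead=4, batch_first=True)
    layer.eval()

    x = torch.randn(2, 16, 64)  # (batch, seq, d_model)
    with torch.no_grad():
        y = layer(x)  # may take the fast path; otherwise falls back
    assert y.shape == x.shape

If any condition fails (training mode, gradients enabled, an unsupported
flag), forward silently falls back to the regular implementation, which is
why a test covering the fast path has to set these conditions up explicitly.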
_C add heuristic and idle time computation 2022-06-21 21:02:22 +00:00
_C_flatbuffer
_decomp Revert "Revert "formatted _decomp folder with black"" 2022-06-22 20:47:52 +00:00
_lazy
_masked
_prims Add Div reference (#77936) 2022-06-27 14:46:17 +00:00
_refs Add torch.nn.functional.threshold ref (#79808) 2022-06-27 20:30:42 +00:00
_subclasses [FakeTensor] Use the device of the meta tensor for fallback kernel (#80193) 2022-06-24 20:00:07 +00:00
amp Add __all__ to torch.nn.quantized, fx.passes, ao.nn and amp submodules (#80376) 2022-06-27 21:36:27 +00:00
ao [quant] Implement APoT fake quantization (#79845) 2022-06-28 18:15:26 +00:00
autograd
backends
contrib
cpu
csrc ProcessGroupWrapper log full rank fingerprint mismatches (#79901) 2022-06-28 18:30:38 +00:00
cuda
distributed Corrected comments in fsdp (#80456) 2022-06-28 18:46:05 +00:00
distributions More stable computation of KL between two Bernoulli distributions (#79944) 2022-06-27 21:31:45 +00:00
fft
futures
fx Made Proxy Tensor Mode also trace overloads (#80403) 2022-06-28 04:31:43 +00:00
jit Use generators with all/any in torch/optim (#78142) 2022-06-24 17:23:45 +00:00
legacy
lib
linalg
monitor
multiprocessing
nested
nn Bugfix/weakref (#80139) 2022-06-28 14:51:42 +00:00
onnx [ONNX] Fix hardshrink and softshrink output's shape (#79695) 2022-06-28 20:00:10 +00:00
optim Add __all__ for torch.optim and torch.nn.modules modules (#80237) 2022-06-24 21:34:10 +00:00
package Add __all__ to various submodules in torch.fx, distributions, distributed, package (#80367) 2022-06-27 21:27:30 +00:00
profiler Add __all__ for torch.nn.modules, torch.distributed.elastic, torch.nn.utils submodules (#80240) 2022-06-27 17:11:12 +00:00
quantization
sparse
special Revert "torch.special.gamma (#78904)" 2022-06-28 00:54:22 +00:00
testing [PyTorch] Add test_modules test for TransformerEncoderLayer fast path (#78268) 2022-06-28 21:07:54 +00:00
utils [DataPipe] Count number of successful yields for IterDataPipe (#79657) 2022-06-28 17:30:33 +00:00
__config__.py
__future__.py
__init__.py
_appdirs.py
_classes.py
_deploy.py
_jit_internal.py
_linalg_utils.py
_lobpcg.py
_lowrank.py
_meta_registrations.py Ensure torch._refs registrations also get triggered on import torch (#80270) 2022-06-26 02:23:03 +00:00
_namedtensor_internals.py
_ops.py fix submodule imports by importing functions directly (#79368) 2022-06-22 08:01:23 +00:00
_python_dispatcher.py
_six.py
_sources.py
_storage_docs.py
_tensor.py
_tensor_docs.py Fix Tensor.scatter_add_ doc (#80223) 2022-06-27 19:57:53 +00:00
_tensor_str.py
_torch_docs.py
_utils.py
_utils_internal.py
_VF.py
_vmap_internals.py
abi-check.cpp
CMakeLists.txt
custom_class.h
custom_class_detail.h
deploy.h
extension.h
functional.py
hub.py
library.h
library.py error when registering meta kernels to composite ops in core 2022-06-21 02:17:13 +00:00
overrides.py Revert "torch.special.gamma (#78904)" 2022-06-28 00:54:22 +00:00
py.typed
quasirandom.py
random.py
README.txt
return_types.py
script.h
serialization.py Fix code that triggers BytesWarning (#79868) 2022-06-21 01:12:21 +00:00
storage.py
torch_version.py
types.py

Note [TH abstraction violation]
~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~

TH/THC provide some hpp headers, which are proper C++ headers rather than
C headers.  These headers also serve as *internal implementation detail*
headers, whose contents should largely not be used by external clients.

Ideally, we would not install these headers at all; instead, you should
use public functions (in headers like `THTensor.h`, NOT `THTensor.hpp`)
to manipulate these structs.  However, there are a few places
in torch/csrc where we violate this abstraction.  They are marked with
a pointer to this note.  Each of those sites will have to be refactored
when we refactor the guts of THTensor and related structures.