
Note [TH abstraction violation]
~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~

TH/THC provide some hpp headers, which are proper C++ headers rather than
C headers.  These headers serve double duty as *internal implementation
detail* headers, whose contents should largely not be used by external
clients.

Ideally, we would not install these headers at all; instead, clients should
use the public functions (declared in headers like `THTensor.h`, NOT
`THTensor.hpp`) to manipulate the underlying structs.  However, there are a
few places in torch/csrc where we violate this abstraction.  Each such site
is marked with a pointer to this note, and each will have to be refactored
when we refactor the guts of THTensor and related structures.