pytorch/torch
Nikita Shulga 96e3b3ac72 [BE] Cleanup CMake flag suppressions (#97584)
Use `append_cxx_flag_if_supported` to determine whether or not `-Werror` is supported
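A minimal sketch of that pattern (the real helper lives in PyTorch's CMake utilities; the exact signature and implementation shown here are an assumption, not the upstream code):

```cmake
# Hedged sketch: probe the compiler for -Werror support and append the
# flag only if the probe succeeds, instead of guarding on compiler identity.
include(CheckCXXCompilerFlag)

function(append_cxx_flag_if_supported flag outputvar)
  # Derive a cache-variable name from the flag, e.g. -Werror -> HAS__WERROR.
  string(TOUPPER "HAS${flag}" _flag_name)
  string(REGEX REPLACE "[=-]" "_" _flag_name "${_flag_name}")
  check_cxx_compiler_flag("${flag}" ${_flag_name})
  if(${_flag_name})
    string(APPEND ${outputvar} " ${flag}")
    set(${outputvar} "${${outputvar}}" PARENT_SCOPE)
  endif()
endfunction()

append_cxx_flag_if_supported("-Werror" CMAKE_CXX_FLAGS)
```

Because the probe asks the compiler itself, no `if(CLANG)`/`if(GCC)` guard is needed around the call.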
Do not suppress deprecation warnings if glog is not used/installed; as the check is currently written, it suppresses deprecation warnings even when `glog` is not installed.
Similarly, do not suppress deprecations on macOS simply because we are compiling with protobuf.
Fix deprecation warnings in:
 - MPS by replacing `MTLResourceOptionCPUCacheModeDefault`->`MTLResourceCPUCacheModeDefaultCache`
 - In GTests by replacing `TYPED_TEST_CASE`->`TYPED_TEST_SUITE`
 - In `codegen/onednn/interface.cpp`, by passing `Stack` by reference rather than by pointer.

Do not guard calls to `append_cxx_flag_if_supported` with `if(CLANG)` or `if(GCC)`.
Fix some deprecated calls in `Metal`; hide the more complex exceptions under `C10_CLANG_DIAGNOSTIC_IGNORE`.

Pull Request resolved: https://github.com/pytorch/pytorch/pull/97584
Approved by: https://github.com/kit1980
2023-03-27 18:46:09 +00:00
_awaits
_C Allow new_full's fill_value argument type to be complex (#91345) 2023-03-21 12:34:00 +00:00
_C_flatbuffer
_decomp Improve size mismatch error messaging referencing mat/vec sizes (#96863) 2023-03-17 21:07:48 +00:00
_dispatch
_dynamo Dynamo size dim kwargs (#97450) 2023-03-27 15:36:46 +00:00
_export [aot autograd] merge all outputs of funtionalization analysis into single metadata (#95991) 2023-03-08 16:22:54 +00:00
_functorch Add missing aot_autograd_arg_pos_to_source (#97487) 2023-03-24 05:17:59 +00:00
_inductor Resubmit _int_mm (#96685) 2023-03-27 16:14:07 +00:00
_lazy
_logging Improve TORCH_LOGS settings error msg (#97264) 2023-03-22 13:26:53 +00:00
_prims [prims] Fix schema of minimum_value for a primitive operation (#97327) 2023-03-22 20:01:33 +00:00
_prims_common [inductor] support non-tensor ops with dynamic shapes (#97519) 2023-03-26 00:38:50 +00:00
_refs Allow new_full's fill_value argument type to be complex (#91345) 2023-03-21 12:34:00 +00:00
_subclasses Don't run fallback if symbolic sizes in fake tensor (#97148) 2023-03-21 02:23:44 +00:00
amp Error only if autocast actually enabled (#96097) 2023-03-21 03:13:13 +00:00
ao Init quantization backend config for inductor (#96476) 2023-03-22 07:56:56 +00:00
autograd Remove unnecessary retain_grad call from gradcheck (#96923) 2023-03-27 13:38:28 +00:00
backends DOC: Various typo fixes (#97095) 2023-03-20 20:46:04 +00:00
contrib
cpu
csrc [BE] Cleanup CMake flag suppressions (#97584) 2023-03-27 18:46:09 +00:00
cuda GradScaler recomputes optimizer_state["found_inf_per_device"] before optimizer.step (#97415) 2023-03-24 17:36:47 +00:00
distributed Enable full train_step tracing and customizable dist graph expansion (#97416) 2023-03-25 09:24:21 +00:00
distributions Fix gumbel cdf (#91698) 2023-03-07 23:04:47 +00:00
fft
func
futures
fx Rename PyOperator to HigherOrderOperator (#97493) 2023-03-24 05:04:02 +00:00
jit [JIT] Partially support ForwardRef type annotations for NamedTuple attributes (#96933) 2023-03-22 15:20:38 +00:00
legacy
lib
linalg
masked
monitor
mps
multiprocessing Revert "FIX make sure we import the correct object from multiprocessing (#81862)" 2023-03-22 17:22:47 +00:00
nested
nn Enable full train_step tracing and customizable dist graph expansion (#97416) 2023-03-25 09:24:21 +00:00
onnx [ONNX] Support converting fx graph with symbolic shape to ONNX (#96350) 2023-03-24 15:47:55 +00:00
optim Change 1D Tensor of 1 element to 0D Tensor (#96994) 2023-03-21 18:24:19 +00:00
package Bump black version to 23.1.0 (#96578) 2023-03-15 06:27:59 +00:00
profiler Fix potential naming clash when writing traces with tensorboard_trace_handler (#97392) 2023-03-23 16:53:11 +00:00
quantization
signal
sparse bsr_dense_mm Triton kernel: fix out kwarg (#96648) 2023-03-14 18:01:22 +00:00
special
testing Rewrite NCCL watchdog to more reliably throw timeout (#97066) 2023-03-25 04:30:20 +00:00
utils [cpp_extension.py] fix bogus _check_cuda_version (#97602) 2023-03-27 15:15:57 +00:00
__config__.py
__future__.py
__init__.py Default to aot_eager for torch.compile on MPS (#96980) 2023-03-25 14:21:39 +00:00
_appdirs.py
_classes.py
_deploy.py
_guards.py Extend aot autograd dedup guards to params, stop using positions (#96774) 2023-03-21 05:59:33 +00:00
_jit_internal.py [JIT] Partially support ForwardRef type annotations for NamedTuple attributes (#96933) 2023-03-22 15:20:38 +00:00
_linalg_utils.py
_lobpcg.py Bump black version to 23.1.0 (#96578) 2023-03-15 06:27:59 +00:00
_lowrank.py
_meta_registrations.py Resubmit _int_mm (#96685) 2023-03-27 16:14:07 +00:00
_namedtensor_internals.py
_ops.py Rename PyOperator to HigherOrderOperator (#97493) 2023-03-24 05:04:02 +00:00
_python_dispatcher.py
_sources.py
_storage_docs.py
_tensor.py fix device type bug for custom device (#97213) 2023-03-27 18:36:47 +00:00
_tensor_docs.py Add as_strided_ to tensor docs (#97300) 2023-03-22 19:08:30 +00:00
_tensor_str.py
_torch_docs.py
_utils.py Bump black version to 23.1.0 (#96578) 2023-03-15 06:27:59 +00:00
_utils_internal.py
_VF.py
_vmap_internals.py
_weights_only_unpickler.py
abi-check.cpp
CMakeLists.txt [BE] Cleanup CMake flag suppressions (#97584) 2023-03-27 18:46:09 +00:00
custom_class.h
custom_class_detail.h
extension.h
functional.py Require DOCTEST_SHOW environ to run plt.show (#96522) 2023-03-10 21:47:20 +00:00
hub.py
library.h Fix dispatching issue of the new device type. (#97273) 2023-03-21 23:23:06 +00:00
library.py
overrides.py Refactor NT offsets metadata to be a Tensor (#96909) 2023-03-21 18:51:35 +00:00
py.typed
quasirandom.py
random.py
README.txt
return_types.py
script.h
serialization.py Fix usages of contextmanager without finally (#96170) 2023-03-08 20:59:27 +00:00
storage.py Only warn once for TypedStorage deprecation (#97379) 2023-03-23 05:40:23 +00:00
torch_version.py
types.py Allow new_full's fill_value argument type to be complex (#91345) 2023-03-21 12:34:00 +00:00

Note [TH abstraction violation]
~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~

TH/THC provide some hpp headers, which are proper C++ headers rather than
C headers.  These headers serve double duty as *internal implementation
detail* headers, whose contents should largely not be used by external
clients.

Ideally, we would not install these headers at all; instead, you should
use public functions (in headers like `THTensor.h`, NOT `THTensor.hpp`)
to manipulate these structs.  However, there are a few places
in torch/csrc where we violate this abstraction.  They are marked with
a pointer to this note.  Each of those sites will have to be refactored
when we refactor the guts of THTensor and related structures.