pytorch/torch
Davide Italiano 8a2000fd42 [MPS] Implement support for zeta (both eager and inductor). (#146465)
A test was failing in inductor (`test_pointwise_zeta`) -- and I realized the operation was also missing from eager.
Implemented for both, leveraging the kernel. Happy to split this into two PRs (one for eager, one for inductor) if folks prefer.

Pull Request resolved: https://github.com/pytorch/pytorch/pull/146465
Approved by: https://github.com/malfet
2025-02-05 13:55:50 +00:00
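The operator this PR wires up is the Hurwitz zeta function, exposed as `torch.special.zeta(x, q)`. A minimal sketch of exercising it (the device-selection fallback is illustrative; on machines without an MPS device the same call runs on CPU):

```python
import torch

# Pick the MPS backend when available; otherwise fall back to CPU.
device = "mps" if torch.backends.mps.is_available() else "cpu"

x = torch.tensor([2.0, 4.0], device=device)
q = torch.tensor([1.0, 1.0], device=device)

# Hurwitz zeta; with q = 1 this reduces to the Riemann zeta function:
# zeta(2, 1) = pi^2 / 6 ~= 1.6449, zeta(4, 1) = pi^4 / 90 ~= 1.0823
out = torch.special.zeta(x, q)
```

Before this change, the MPS path raised a not-implemented error for both the eager op and the inductor-compiled graph.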
_awaits
_C update _unsafe_set_version_counter to accept lists of tensors (#137921) 2025-02-04 04:51:11 +00:00
_C_flatbuffer
_custom_op
_decomp
_dispatch
_dynamo [ca] no longer require is_traceable annotations for c++ autograd functions (#146229) 2025-02-05 08:49:17 +00:00
_export Revert "move and fix logic to update unbacked bindings (#146115)" 2025-02-05 04:51:39 +00:00
_functorch [BE][Ez]: ISC001 Auto concatenate implicit one line strings (#146408) 2025-02-04 19:07:04 +00:00
_higher_order_ops Barebones flat_apply HOP (#146060) 2025-02-01 16:17:48 +00:00
_inductor [MPS] Implement support for zeta (both eager and inductor). (#146465) 2025-02-05 13:55:50 +00:00
_lazy
_library [BE][Ez]: ISC001 Auto concatenate implicit one line strings (#146408) 2025-02-04 19:07:04 +00:00
_logging use DTRACE_ENV_VAR as the trace logs directory of set (#146412) 2025-02-04 20:54:28 +00:00
_numpy
_prims
_prims_common [dynamo] Disable compiling on elementwise_type_promotion_wrapper (#146219) 2025-02-03 18:02:48 +00:00
_refs fix incorrect literal strings / accidental tuples (#146037) 2025-02-03 15:08:11 +00:00
_strobelight
_subclasses Fix aten.to when input is a tensor constant (#146220) 2025-02-01 11:07:33 +00:00
_vendor
accelerator
amp [autocast][pytorch] Support autocast for MTIA (#145627) 2025-01-25 03:24:59 +00:00
ao [BE]: Enable ruff SLOT checks (#146276) 2025-02-04 19:18:23 +00:00
autograd update _unsafe_set_version_counter to accept lists of tensors (#137921) 2025-02-04 04:51:11 +00:00
backends Revert "[CUDA][cuBLAS] Add fp16 accumulate option to cuBLAS/cuBLASLt (#144441)" 2025-01-31 17:43:09 +00:00
compiler [Doc] Add period at the end of the sentence (#145384) 2025-01-22 19:56:31 +00:00
contrib
cpu [CPUInductor] Fix SVE256 detection (#146207) 2025-02-01 18:51:34 +00:00
csrc [ca] no longer require is_traceable annotations for c++ autograd functions (#146229) 2025-02-05 08:49:17 +00:00
cuda [inductor triton] Disable incorrect TF32 usage on CUDA capability < 8 (#145684) 2025-01-28 22:01:08 +00:00
distributed [BE]: Enable ruff SLOT checks (#146276) 2025-02-04 19:18:23 +00:00
distributions torch.distributions: replace numbers.Number with torch.types.Number. (#145086) 2025-01-27 20:24:55 +00:00
export [export] Fix draft-export logging (#146106) 2025-02-05 05:49:22 +00:00
fft
func
futures
fx add support for capturing provenance of unary operations (#146413) 2025-02-05 08:31:38 +00:00
jit
legacy
lib
linalg
masked
monitor add WaitCounter type interface and get rid of type errors (#146175) 2025-02-01 23:24:52 +00:00
mps
mtia [S481486] Move MTIA dynamic library loading from __init__.py to a separate module (#145322) 2025-01-22 23:39:43 +00:00
multiprocessing
nested Support remaining *_like factory functions for NJT (#144889) 2025-01-27 21:33:51 +00:00
nn [BE]: Enable ruff SLOT checks (#146276) 2025-02-04 19:18:23 +00:00
onnx [BE][Ez]: ISC001 Auto concatenate implicit one line strings (#146408) 2025-02-04 19:07:04 +00:00
optim [BE][Ez]: ISC001 Auto concatenate implicit one line strings (#146408) 2025-02-04 19:07:04 +00:00
package [BE][Ez]: ISC001 Auto concatenate implicit one line strings (#146408) 2025-02-04 19:07:04 +00:00
profiler execution trace export supports gzip format (#146179) 2025-02-01 01:25:25 +00:00
quantization
signal
sparse
special
testing Make regex error catching compatible with Python 3.12+. (#145945) 2025-02-05 00:57:36 +00:00
utils [inductor] Refactor op handlers part 5 (#146257) 2025-02-04 23:36:25 +00:00
xpu
__config__.py
__future__.py
__init__.py Torch device backend autoload fix (#145611) 2025-01-31 19:27:42 +00:00
_appdirs.py
_classes.py
_compile.py
_custom_ops.py
_deploy.py
_environment.py
_guards.py
_jit_internal.py PEP585: Missed conversions (#145342) 2025-01-29 05:24:36 +00:00
_linalg_utils.py
_lobpcg.py
_lowrank.py
_meta_registrations.py nonzero_static with symint size (#146006) 2025-01-30 23:42:42 +00:00
_namedtensor_internals.py
_ops.py [Dynamo][Trace PyDispatcher] Remove disable from HigherOrderOperator.__call__ (#146270) 2025-02-03 21:47:54 +00:00
_python_dispatcher.py
_size_docs.py
_sources.py
_storage_docs.py
_streambase.py
_tensor.py [pytorch] raise exception when calling dim order on sparse tensor (#145888) 2025-01-29 06:15:44 +00:00
_tensor_docs.py
_tensor_str.py [BE][Ez]: ISC001 Auto concatenate implicit one line strings (#146408) 2025-02-04 19:07:04 +00:00
_thread_safe_fork.py
_torch_docs.py Add overloads to diagonal docs (#144214) 2025-01-31 15:53:59 +00:00
_utils.py [BE]: Enable ruff SLOT checks (#146276) 2025-02-04 19:18:23 +00:00
_utils_internal.py
_VF.py
_vmap_internals.py
_weights_only_unpickler.py
abi-check.cpp
CMakeLists.txt
custom_class.h
custom_class_detail.h
extension.h
functional.py Revert "Advance past fc window for stft center (#145437)" 2025-01-30 23:14:16 +00:00
hub.py
library.h Remove trivial dispatch_key_allowlist_check function (#146169) 2025-01-31 19:59:40 +00:00
library.py [Custom Ops] Fix f-strings in custom ops error message (#145673) 2025-01-27 19:22:43 +00:00
overrides.py Revert "Add generator parameter to rand*_like functions (#136780)" 2025-01-24 19:00:21 +00:00
py.typed
quasirandom.py
random.py
README.txt
return_types.py
script.h
serialization.py Add option to serialization config to reduce random reads from get_record_offset when loading with mmap=True (#143880) 2025-01-31 17:09:20 +00:00
storage.py
torch_version.py [BE]: Enable ruff SLOT checks (#146276) 2025-02-04 19:18:23 +00:00
types.py Improve typing in torch/types.py (#145237) 2025-01-28 05:29:12 +00:00
version.py.tpl

Note [TH abstraction violation]
~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~

TH/THC provide some hpp headers, which are proper C++ headers rather than
C headers.  These headers serve double duty as *internal implementation
detail* headers, whose contents should largely not be used by external
clients.

Ideally, we would not install these headers at all; instead, you should
use public functions (in headers like `THTensor.h`, NOT `THTensor.hpp`)
to manipulate these structs.  However, there are a few places
in torch/csrc where we violate this abstraction.  They are marked with
a pointer to this note.  Each of those sites will have to be refactored
when we refactor the guts of THTensor and related structures.