pytorch/torch
Latest commit 292af3cc89 by Aaron Gokaslan: [BE][Ez]: ISC001 Auto concatenate implicit one line strings (#146408)
Apply the ruff rule for implicit string concatenation; the autofix merges string literals that are of the same type and appear on the same line. Such lines were most likely broken up by autoformatters in the past. All fixes were generated automatically by ISC001's autofix.

Pull Request resolved: https://github.com/pytorch/pytorch/pull/146408
Approved by: https://github.com/justinchuby, https://github.com/janeyx99
2025-02-04 19:07:04 +00:00
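The effect of the ISC001 autofix can be illustrated with a minimal, hypothetical example (not taken from the PyTorch codebase): Python implicitly concatenates adjacent string literals, and when both literals sit on the same line the split serves no purpose, so the rule merges them into one literal.

```python
# Implicit concatenation of two same-type literals on one line, typically
# left behind when an autoformatter re-joined a previously wrapped string:
message = "Tensors must be on the same " "device"

# ISC001's autofix rewrites the above into a single literal:
fixed_message = "Tensors must be on the same device"

# The runtime value is identical; only the source text changes.
assert message == fixed_message
```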
_awaits
_C update _unsafe_set_version_counter to accept lists of tensors (#137921) 2025-02-04 04:51:11 +00:00
_C_flatbuffer
_custom_op PEP585 update - mostly toplevels (#145178) 2025-01-22 02:21:14 +00:00
_decomp PEP585 update - torch/_C torch/_decomp torch/_lazy torch/_library torch/_numpy torch/_prims torch/_refs torch/_strobelight (#145102) 2025-01-18 20:47:12 +00:00
_dispatch PEP585 update - mostly toplevels (#145178) 2025-01-22 02:21:14 +00:00
_dynamo [BE][Ez]: ISC001 Auto concatenate implicit one line strings (#146408) 2025-02-04 19:07:04 +00:00
_export [export] Fix requires_grad deserialization (#146351) 2025-02-04 08:02:38 +00:00
_functorch [BE][Ez]: ISC001 Auto concatenate implicit one line strings (#146408) 2025-02-04 19:07:04 +00:00
_higher_order_ops Barebones flat_apply HOP (#146060) 2025-02-01 16:17:48 +00:00
_inductor [hop][inductor] track the dependency on unbacked symbols correctly with constant_args for hops (#143456) 2025-02-04 18:47:34 +00:00
_lazy PEP585 update - torch/_C torch/_decomp torch/_lazy torch/_library torch/_numpy torch/_prims torch/_refs torch/_strobelight (#145102) 2025-01-18 20:47:12 +00:00
_library [BE][Ez]: ISC001 Auto concatenate implicit one line strings (#146408) 2025-02-04 19:07:04 +00:00
_logging Integrate sympy expression provenance logging with structured logs (#145848) 2025-02-04 01:21:37 +00:00
_numpy PEP585 update - torch/_C torch/_decomp torch/_lazy torch/_library torch/_numpy torch/_prims torch/_refs torch/_strobelight (#145102) 2025-01-18 20:47:12 +00:00
_prims PEP585 update - torch/_C torch/_decomp torch/_lazy torch/_library torch/_numpy torch/_prims torch/_refs torch/_strobelight (#145102) 2025-01-18 20:47:12 +00:00
_prims_common [dynamo] Disable compiling on elementwise_type_promotion_wrapper (#146219) 2025-02-03 18:02:48 +00:00
_refs fix incorrect literal strings / accidental tuples (#146037) 2025-02-03 15:08:11 +00:00
_strobelight PEP585 update - torch/_C torch/_decomp torch/_lazy torch/_library torch/_numpy torch/_prims torch/_refs torch/_strobelight (#145102) 2025-01-18 20:47:12 +00:00
_subclasses Fix aten.to when input is a tensor constant (#146220) 2025-02-01 11:07:33 +00:00
_vendor
accelerator
amp [autocast][pytorch] Support autocast for MTIA (#145627) 2025-01-25 03:24:59 +00:00
ao Resolve affine quantization namespace collision with torchao (#145941) 2025-01-31 21:29:47 +00:00
autograd update _unsafe_set_version_counter to accept lists of tensors (#137921) 2025-02-04 04:51:11 +00:00
backends Revert "[CUDA][cuBLAS] Add fp16 accumulate option to cuBLAS/cuBLASLt (#144441)" 2025-01-31 17:43:09 +00:00
compiler [Doc] Add period at the end of the sentence (#145384) 2025-01-22 19:56:31 +00:00
contrib PEP585 update - mostly toplevels (#145178) 2025-01-22 02:21:14 +00:00
cpu [CPUInductor] Fix SVE256 detection (#146207) 2025-02-01 18:51:34 +00:00
csrc Revert "[aoti] Assign proxy call args by name, and support default values. (#146263)" 2025-02-04 12:57:55 +00:00
cuda [inductor triton] Disable incorrect TF32 usage on CUDA capability < 8 (#145684) 2025-01-28 22:01:08 +00:00
distributed [BE][Ez]: ISC001 Auto concatenate implicit one line strings (#146408) 2025-02-04 19:07:04 +00:00
distributions torch.distributions: replace numbers.Number with torch.types.Number. (#145086) 2025-01-27 20:24:55 +00:00
export [export] Additionally save pytree namedtuple field names (#145956) 2025-02-04 04:42:30 +00:00
fft
func
futures PEP585 update - mostly toplevels (#145178) 2025-01-22 02:21:14 +00:00
fx [BE][Ez]: ISC001 Auto concatenate implicit one line strings (#146408) 2025-02-04 19:07:04 +00:00
jit PEP585 update - mostly toplevels (#145178) 2025-01-22 02:21:14 +00:00
legacy
lib
linalg
masked PEP585 update - torch/_higher_order_ops torch/_subclasses torch/backends torch/compiler torch/cuda torch/masked torch/mtia torch/nested (#145202) 2025-01-20 22:37:26 +00:00
monitor add WaitCounter type interface and get rid of type errors (#146175) 2025-02-01 23:24:52 +00:00
mps [MPS] Support includes in metal objects (#145087) 2025-01-18 05:35:22 +00:00
mtia [S481486] Move MTIA dynamic library loading from __init__.py to a separate module (#145322) 2025-01-22 23:39:43 +00:00
multiprocessing
nested Support remaining *_like factory functions for NJT (#144889) 2025-01-27 21:33:51 +00:00
nn [BE][Ez]: ISC001 Auto concatenate implicit one line strings (#146408) 2025-02-04 19:07:04 +00:00
onnx [BE][Ez]: ISC001 Auto concatenate implicit one line strings (#146408) 2025-02-04 19:07:04 +00:00
optim [BE][Ez]: ISC001 Auto concatenate implicit one line strings (#146408) 2025-02-04 19:07:04 +00:00
package [BE][Ez]: ISC001 Auto concatenate implicit one line strings (#146408) 2025-02-04 19:07:04 +00:00
profiler execution trace export supports gzip format (#146179) 2025-02-01 01:25:25 +00:00
quantization
signal PEP585 update - mostly toplevels (#145178) 2025-01-22 02:21:14 +00:00
sparse PEP585 update - torch/nn torch/optim torch/package torch/profiler torch/serialization torch/sparse torch/xpu (#145175) 2025-01-21 16:57:27 +00:00
special
testing Barebones flat_apply HOP (#146060) 2025-02-01 16:17:48 +00:00
utils [BE][Ez]: ISC001 Auto concatenate implicit one line strings (#146408) 2025-02-04 19:07:04 +00:00
xpu PEP585 update - torch/nn torch/optim torch/package torch/profiler torch/serialization torch/sparse torch/xpu (#145175) 2025-01-21 16:57:27 +00:00
__config__.py
__future__.py
__init__.py Torch device backend autoload fix (#145611) 2025-01-31 19:27:42 +00:00
_appdirs.py
_classes.py
_compile.py
_custom_ops.py
_deploy.py
_environment.py
_guards.py PEP585 update - mostly toplevels (#145178) 2025-01-22 02:21:14 +00:00
_jit_internal.py PEP585: Missed conversions (#145342) 2025-01-29 05:24:36 +00:00
_linalg_utils.py PEP585 update - mostly toplevels (#145178) 2025-01-22 02:21:14 +00:00
_lobpcg.py PEP585 update - mostly toplevels (#145178) 2025-01-22 02:21:14 +00:00
_lowrank.py PEP585 update - mostly toplevels (#145178) 2025-01-22 02:21:14 +00:00
_meta_registrations.py nonzero_static with symint size (#146006) 2025-01-30 23:42:42 +00:00
_namedtensor_internals.py
_ops.py [Dynamo][Trace PyDispatcher] Remove disable from HigherOrderOperator.__call__ (#146270) 2025-02-03 21:47:54 +00:00
_python_dispatcher.py
_size_docs.py
_sources.py PEP585 update - mostly toplevels (#145178) 2025-01-22 02:21:14 +00:00
_storage_docs.py
_streambase.py
_tensor.py [pytorch] raise exception when calling dim order on sparse tensor (#145888) 2025-01-29 06:15:44 +00:00
_tensor_docs.py
_tensor_str.py [BE][Ez]: ISC001 Auto concatenate implicit one line strings (#146408) 2025-02-04 19:07:04 +00:00
_thread_safe_fork.py
_torch_docs.py Add overloads to diagonal docs (#144214) 2025-01-31 15:53:59 +00:00
_utils.py [utils] add try_import method for importing optional modules (#145528) 2025-01-25 00:14:07 +00:00
_utils_internal.py PEP585 update - mostly toplevels (#145178) 2025-01-22 02:21:14 +00:00
_VF.py
_vmap_internals.py PEP585 update - mostly toplevels (#145178) 2025-01-22 02:21:14 +00:00
_weights_only_unpickler.py PEP585 update - mostly toplevels (#145178) 2025-01-22 02:21:14 +00:00
abi-check.cpp
CMakeLists.txt
custom_class.h
custom_class_detail.h
extension.h
functional.py Revert "Advance past fc window for stft center (#145437)" 2025-01-30 23:14:16 +00:00
hub.py PEP585 update - mostly toplevels (#145178) 2025-01-22 02:21:14 +00:00
library.h Remove trivial dispatch_key_allowlist_check function (#146169) 2025-01-31 19:59:40 +00:00
library.py [Custom Ops] Fix f-strings in custom ops error message (#145673) 2025-01-27 19:22:43 +00:00
overrides.py Revert "Add generator parameter to rand*_like functions (#136780)" 2025-01-24 19:00:21 +00:00
py.typed
quasirandom.py
random.py PEP585 update - mostly toplevels (#145178) 2025-01-22 02:21:14 +00:00
README.txt
return_types.py
script.h
serialization.py Add option to serialization config to reduce random reads from get_record_offset when loading with mmap=True (#143880) 2025-01-31 17:09:20 +00:00
storage.py PEP585 update - mostly toplevels (#145178) 2025-01-22 02:21:14 +00:00
torch_version.py PEP585 update - mostly toplevels (#145178) 2025-01-22 02:21:14 +00:00
types.py Improve typing in torch/types.py (#145237) 2025-01-28 05:29:12 +00:00
version.py.tpl

Note [TH abstraction violation]
~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~

TH/THC provide some .hpp headers, which are proper C++ headers rather than
C headers.  These headers serve double duty: they are installed alongside
the public headers, but their contents are *internal implementation
details* that should largely not be used by external clients.

Ideally, we would not install these headers at all; instead, you should
use public functions (in headers like `THTensor.h`, NOT `THTensor.hpp`)
to manipulate these structs.  However, there are a few places
in torch/csrc where we violate this abstraction.  They are marked with
a pointer to this note.  Each of those sites will have to be refactored
when we refactor the guts of THTensor and related structures.