csrc

The csrc directory contains all of the code concerned with integration with Python. This is in contrast to lib, which contains the Torch libraries that are Python agnostic. csrc depends on lib, but not vice versa.

There are a number of utilities for easing integration with Python which are worth knowing about; we briefly describe them here. But first, the most important gotchas:

  • DO NOT forget to take out the GIL with pybind11::gil_scoped_acquire before calling the Python API or bringing a THPObjectPtr into scope.

  • Make sure you include Python.h first in your header files, before any system headers; otherwise, you will get a "_XOPEN_SOURCE" redefined error. If you pay attention to warnings, you will see where you need to do this. A minimal example is sketched below.
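For instance, a header in this directory would typically begin like this (a minimal sketch; the header name and the particular system headers are just for illustration):

```
// my_header.h (hypothetical): Python headers must come before any system header.
#pragma once

// torch/csrc/python_headers.h wraps Python.h; including it first avoids the
// "_XOPEN_SOURCE" redefinition error mentioned above.
#include <torch/csrc/python_headers.h>

// System and library headers come afterwards.
#include <string>
#include <vector>
```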

Notes

Note [Storage is not nullptr]

Historically, Torch supported nullptr storage, as a minor optimization to avoid having to allocate a storage object when it would be empty. However, this turned out to be a confusing special case to deal with, so by and large, PyTorch now assumes that storage is never nullptr.

One case where this assumption matters is tracking the CUDA device a tensor is stored on: this information lives solely in the storage, so if the storage were nullptr, we would lose it.

Although storage is never nullptr, the data field of c10::StorageImpl may be nullptr. This mostly occurs when we want to pre-allocate an output tensor struct, but then have it be resized and filled with data by some operator: there's no point in allocating data for it in this case!
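As a rough sketch of that pattern (using the public out= variant of an ATen operator; the function name is just for illustration):

```
#include <ATen/ATen.h>

// The output tensor is created with zero elements, so its storage carries no
// (or an empty) data allocation; the out= variant of the operator then resizes
// it and fills in the actual data.
at::Tensor add_into_preallocated(const at::Tensor& a, const at::Tensor& b) {
  at::Tensor out = at::empty({0}, a.options());
  at::add_out(out, a, b);  // resizes `out` and writes the result
  return out;
}
```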

Files

Exceptions.h

Frequently when working with the Python API, you may call a function which returns an error. In this case, we want to return directly to the Python interpreter, so that this exception can be propagated accordingly; however, because the Python API is C-based, what will actually happen is that it returns control to whatever C++ code called it. Similarly, if we raise a C++ exception, prior to returning to the Python interpreter we must set the Python error flags, so that it turns into a Python exception.

Moreover, when you use the macros below, any warnings generated in C++ are converted into Python warnings that can be caught by the user.

Exceptions.h defines helpers for two main cases:

  • For code where you write the Python binding by hand, use HANDLE_TH_ERRORS, END_HANDLE_TH_ERRORS, and the exception class python_error. You call them like this:
```
// Entry point from Python interpreter
PyObject* run(PyObject* arg) {
  HANDLE_TH_ERRORS
  ...
  if (!x) throw python_error();
  // From c10/Exception.h
  TORCH_CHECK(cond, "cond was false here");
  TORCH_WARN("Warning message");
  ...
  END_HANDLE_TH_ERRORS
}
```

The HANDLE_TH_ERRORS macro will catch all exceptions and convert them into an appropriate Python signal. python_error is a special exception which doesn't contain any info; instead it says, "An error occurred in the Python API; if you return to the interpreter, Python will raise that exception, and nothing else needs to be done."

  • For code that you bind using pybind, HANDLE_TH_ERRORS and END_HANDLE_TH_ERRORS_PYBIND can be used. They work jointly with pybind's error handling to raise PyTorch errors and warnings natively and let pybind handle other errors. They can be used as:
```
// Function given to the pybind binding
at::Tensor foo(at::Tensor x) {
  HANDLE_TH_ERRORS
  ...
  if (!x) throw python_error();
  // pybind native error
  if (!x) throw py::value_error();
  // From c10/Exception.h
  TORCH_CHECK(cond, "cond was false here");
  TORCH_WARN("Warning message");
  ...
  END_HANDLE_TH_ERRORS_PYBIND
}
```
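As a usage sketch, such a function could be registered with pybind like this (the module name and docstring are hypothetical; the real bindings in this directory are registered through torch's own module initialization code):

```
#include <torch/csrc/Exceptions.h>
#include <pybind11/pybind11.h>

// Registers the `foo` function from the example above. pybind translates
// py::value_error itself, while END_HANDLE_TH_ERRORS_PYBIND re-raises
// TORCH_CHECK/TORCH_WARN and python_error in their native Python form.
PYBIND11_MODULE(my_extension, m) {  // "my_extension" is an illustrative name
  m.def("foo", &foo, "foo with torch-style error handling");
}
```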

GIL

Whenever you make any calls to the Python API, you must have taken out the Python GIL, as none of these calls are thread safe. pybind11::gil_scoped_acquire is a RAII struct which handles taking and releasing the GIL. Use it like this:

```
void iWantToUsePython() {
  pybind11::gil_scoped_acquire gil;
  ...
}
```

In general, the compiler will NOT warn you if you use Python functionality without taking out the GIL, so DO NOT FORGET this call.
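For example, a function that may be reached from a thread which does not already hold the GIL might look like this (a hypothetical sketch; only the pybind11 and CPython calls shown are assumed):

```
#include <torch/csrc/python_headers.h>
#include <torch/csrc/Exceptions.h>
#include <pybind11/pybind11.h>

// Take the GIL before touching any Py* API; gil_scoped_acquire is RAII, so the
// GIL is released again at scope exit, including when an exception is thrown.
void notify_python_callback(PyObject* callback) {
  pybind11::gil_scoped_acquire gil;
  PyObject* result = PyObject_CallObject(callback, /*args=*/nullptr);
  if (!result) throw python_error();  // see Exceptions.h above
  Py_DECREF(result);
}
```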

utils/object_ptr.h

THPPointer is a smart pointer class analogous to std::shared_ptr, but which is overloaded to handle the reference counting schemes of various objects which are not based on shared_ptr. The most important overloads are listed below, with a short usage sketch after the list:

  • PyObject (so important we've aliased it as THPObjectPtr), which hooks into Python reference counting. (By the way, that means you MUST take out the GIL before bringing one of these into scope!)

  • The various TH tensor and storage types (e.g., THTensor), which hook into TH's reference counting. (TH's reference counting IS thread safe, no locks necessary.)
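A short sketch of THPObjectPtr in action (the surrounding function is hypothetical; note that the GIL must be held for the whole scope, since the destructor drops a Python reference):

```
#include <torch/csrc/python_headers.h>
#include <torch/csrc/utils/object_ptr.h>
#include <torch/csrc/Exceptions.h>

// THPObjectPtr owns the new references returned by the CPython calls below and
// decrements them automatically at scope exit, so there is no Py_DECREF to forget.
void lookupPi() {
  THPObjectPtr math(PyImport_ImportModule("math"));
  if (!math) throw python_error();
  THPObjectPtr pi(PyObject_GetAttrString(math.get(), "pi"));
  if (!pi) throw python_error();
  // `pi` and `math` release their references when they go out of scope.
}
```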