Autograd

Autograd is a hotspot for PyTorch performance, so most of the heavy lifting is implemented in C++. This means we have to do some shuffling of data between Python and C++; in general, we want data to be in a form that is convenient to manipulate from C++.

Our general model is that for any key data type that autograd manipulates, there are two implementations: a C++ type and a Python object type. For example, consider variables in autograd: we have both Variable in variable.h (the C++ type) and THPVariable in python_variable.h (the Python type). (By the way, THP stands for TorcH Python, not to be confused with THPP, TorcH C++.) Variable contains the payload of a variable, while THPVariable just contains a shared_ptr reference to Variable, as well as references to other Python objects which the Python runtime needs to know about. A lot of data accessor implementations in python_variable.cpp simply reach through to the underlying Variable and return the appropriate value.
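The ownership relationship described above can be sketched in a few lines. This is a hypothetical, heavily simplified model, not the real PyTorch declarations: Variable here stands in for the C++ payload, THPVariableSketch for the Python wrapper (the real THPVariable is a CPython PyObject struct), and the accessor name is made up for illustration.

```cpp
#include <memory>

// C++ side: owns the actual payload of the variable.
struct Variable {
    int version = 0;
    bool requires_grad = false;
};

// "Python object" side: in the real code this is a PyObject struct
// (THPVariable); here we model only the ownership relationship.
struct THPVariableSketch {
    // The wrapper does not copy the payload; it shares ownership of it.
    std::shared_ptr<Variable> cdata;
};

// Accessors on the Python type typically just reach through to the
// underlying C++ object and return the appropriate value.
inline bool thpvariable_requires_grad(const THPVariableSketch& self) {
    return self.cdata->requires_grad;
}
```

Because the wrapper holds a shared_ptr rather than a copy, mutations made from C++ are immediately visible through the Python-side accessors, and vice versa.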

The most complicated application of this principle is Function, which also supports users implementing custom behavior in Python. We have the following classes:

  • Node in function.h, the C++ type.
  • THPFunction in python_function.h, the Python object type. In python_function.cpp, you can see the boilerplate that tells the Python interpreter about this object.
  • PyNode in python_function.h, a subclass of Node which forwards apply to a Python THPFunction. (NOT a Python object, despite its name!)

Outside of PyNode, the C++ objects largely avoid referencing Python objects. There are a few exceptions: pyobj in Variable, PyNode itself (whose whole point is to let C++ call into Python), and pyobj in Node, which ensures uniqueness of the associated Python wrapper (if one exists).
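The pyobj caching that guarantees wrapper uniqueness can be sketched like this. All names here are hypothetical stand-ins: PyObjectStub models a PyObject*, and wrap models the check-cache-then-allocate logic (the real code additionally manages Python reference counts and handles wrapper resurrection).

```cpp
// Stand-in for PyObject*; the refcount mimics CPython reference counting.
struct PyObjectStub {
    int refcount = 1;
};

// Sketch of a C++ node that caches a pointer to its Python wrapper.
struct NodeWithPyObj {
    PyObjectStub* pyobj = nullptr;  // cached wrapper; null until first wrap
};

// Returns the existing wrapper if one was already created, otherwise
// allocates one and caches it. This is what makes the wrapper unique:
// wrapping the same C++ object twice yields the same Python object.
PyObjectStub* wrap(NodeWithPyObj& node) {
    if (node.pyobj == nullptr) {
        node.pyobj = new PyObjectStub();
    } else {
        node.pyobj->refcount++;  // real code would Py_INCREF here
    }
    return node.pyobj;
}
```

Without this cache, each round trip from C++ back into Python would mint a fresh wrapper, breaking Python-side identity checks (`a is b`) and duplicating any state hung off the wrapper.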