Mirror of https://github.com/saymrwulf/pytorch.git, synced 2026-05-15 21:00:47 +00:00.
Summary: This PR enables autodiff to use the forward/backward graph compiled from python code, instead of using symbolic gradients (which modify the original graph directly). We put the map in a separate .h file for now, pending the merge of native_functions.yaml and derivatives.yaml; ideally this should eventually go into native_functions.yaml. This PR should be enough to unblock us for now: we can start writing gradients for aten functions in python.

Differential Revision: D13494635

Pulled By: ailzhang

fbshipit-source-id: f8d51a15243ac46afd09d930c573ccdfcd9fdaaf
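To make the idea concrete, here is a minimal sketch of what a python-defined forward/backward pair for an op might look like. All names and signatures below are hypothetical and for illustration only; the actual graphs this PR consumes are compiled by the JIT from python code, not called as plain functions like this.

```python
# Hypothetical sketch of a forward/backward pair written in python,
# illustrating the split that autodiff consumes: forward produces the
# output plus saved context, backward maps output grads to input grads.

def mul_forward(a, b):
    # Forward computes the output and saves the values backward needs.
    return a * b, (a, b)

def mul_backward(grad_out, saved):
    # Product rule: d(a*b)/da = b and d(a*b)/db = a.
    a, b = saved
    return grad_out * b, grad_out * a

out, saved = mul_forward(3.0, 4.0)
grad_a, grad_b = mul_backward(1.0, saved)
# out is 12.0; grad_a is 4.0 and grad_b is 3.0
```

The key design point mirrored here is that the backward graph only sees the saved context and the incoming gradient, so autodiff can splice it in without rewriting the original forward graph symbolically.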
| Name |
|---|
| .. |
| bottleneck |
| cpp |
| cpp_extensions |
| custom_operator |
| data |
| error_messages |
| expect |
| ffi/src |
| onnx |
| optim |
| test_module |
| common_cuda.py |
| common_methods_invocations.py |
| common_nn.py |
| common_utils.py |
| expecttest.py |
| run_test.py |
| test_autograd.py |
| test_c10d.py |
| test_cpp_extensions.py |
| test_cuda.py |
| test_cuda_primary_ctx.py |
| test_dataloader.py |
| test_distributed.py |
| test_distributions.py |
| test_expecttest.py |
| test_indexing.py |
| test_indexing_cuda.py |
| test_jit.py |
| test_multiprocessing.py |
| test_multiprocessing_spawn.py |
| test_nccl.py |
| test_nn.py |
| test_numba_integration.py |
| test_optim.py |
| test_sparse.py |
| test_thd_distributed.py |
| test_torch.py |
| test_type_info.py |
| test_utils.py |