#define TORCH_ASSERT_ONLY_METHOD_OPERATORS
// ${generated_comment}

// Python bindings for torch.* functions implemented through ATen.
//
// The functions are bound as static methods on a class
// torch._C._VariableFunctions which is also aliased as Variable._torch
// and also copied into 'torch' module.
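//
// A note for readers of the generated output (illustrative only; the copy into the
// `torch` module happens on the Python side, in torch/__init__.py, not in this
// file): a generated binding such as `add` ends up reachable both as
//
//   torch._C._VariableFunctions.add(x, y)   # static method on the binding class
//   torch.add(x, y)                          # same function, copied onto torch
//
// The set of bindings emitted below is determined by the code generator.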

#include <Python.h>

// Undefine the copysign macro so that at::copysign works as intended with MSVC
// https://github.com/python/cpython/blob/c60394c7fc9cc09b16e9675a3eeb5844b6d8523f/PC/pyconfig.h#L196
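// (Context: on MSVC, Python's pyconfig.h defines `copysign` as a macro expanding
// to `_copysign`, which would otherwise mangle the at::copysign calls generated
// below; undefining it here keeps the ATen name usable.)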
#ifdef _MSC_VER
#undef copysign
#endif // _MSC_VER

#include "torch/csrc/autograd/python_torch_functions.h"
#include "torch/csrc/autograd/python_variable.h"
#include "torch/csrc/autograd/utils/wrap_outputs.h"
#include "torch/csrc/Dtype.h"
#include "torch/csrc/DynamicTypes.h"
#include "torch/csrc/Exceptions.h"
#include "torch/csrc/utils/out_types.h"
#include "torch/csrc/utils/pybind.h"
#include "torch/csrc/utils/pycfunction_helpers.h"
#include "torch/csrc/utils/python_arg_parser.h"
#include "torch/csrc/utils/tensor_layouts.h"
#include "torch/csrc/utils/tensor_new.h"
#include "torch/csrc/utils/tensor_numpy.h"
#include "torch/csrc/jit/frontend/tracer.h"
#include "torch/csrc/autograd/generated/variable_factories.h"
#include "torch/csrc/utils/structseq.h"
#include "torch/csrc/utils/cuda_lazy_init.h"
#include "torch/csrc/autograd/python_return_types.h"

#include <ATen/core/Tensor.h>

#ifndef AT_PER_OPERATOR_HEADERS
#include <ATen/Functions.h>
#else
$ops_headers
#endif
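// Note: with AT_PER_OPERATOR_HEADERS defined, $ops_headers expands to only the
// per-operator headers this shard actually needs (e.g. <ATen/ops/add.h>) instead
// of the monolithic <ATen/Functions.h>; the exact list is emitted by the code
// generator.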

#include <functional>
#include <initializer_list>
#include <stdexcept>
#include <utility>

using at::Tensor;
using at::Device;
using at::Layout;
using at::Scalar;
using at::ScalarType;
using at::Backend;
using at::OptionalDeviceGuard;
using at::DeviceGuard;
using at::TensorOptions;
using at::IntArrayRef;
using at::Generator;
using at::TensorList;
using at::Dimname;
using at::DimnameList;
using at::ArrayRef;

using torch::utils::check_out_type_matches;
using namespace torch::autograd::utils;

// NOTE: See [Sharded File] comment in VariableType
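// In brief: the generated torch functions are split across several shards
// (python_torch_functions_0.cpp, python_torch_functions_1.cpp, ...) so that no
// single translation unit becomes too expensive to compile; this template is
// instantiated once per shard with ${shard_id} filled in. (Summary for context;
// the [Sharded File] note referenced above is the authoritative description.)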

namespace torch { namespace autograd {

// generated forward declarations start here
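// Each forward declaration below has the shape of a standard Python C-API entry
// point; roughly (illustrative only, actual names come from the generator):
//
//   static PyObject * THPVariable_add(PyObject* self_, PyObject* args, PyObject* kwargs);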

${py_forwards}

static PyMethodDef torch_functions_shard[] = {
  ${py_method_defs}
};
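// ${py_method_defs} expands to one PyMethodDef entry per binding in this shard,
// roughly of the form (illustrative only):
//
//   {"add", castPyCFunctionWithKeywords(THPVariable_add),
//    METH_VARARGS | METH_KEYWORDS | METH_STATIC, NULL},
//
// METH_STATIC is what makes these functions static methods on
// torch._C._VariableFunctions.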

void gatherTorchFunctions${shard_id}(std::vector<PyMethodDef> &torch_functions) {
  constexpr size_t num_functions = sizeof(torch_functions_shard) / sizeof(torch_functions_shard[0]);
  torch_functions.insert(
    torch_functions.end(),
    torch_functions_shard,
    torch_functions_shard + num_functions);
}
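// Each shard appends its table this way; a hand-written gatherTorchFunctions()
// (expected to live in python_torch_functions_manual.cpp) calls every
// gatherTorchFunctions_<N> and registers the combined vector on
// torch._C._VariableFunctions. (Description of the surrounding wiring for context;
// only the function above is defined by this template.)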

// generated methods start here

${py_methods}

}} // namespace torch::autograd