pytorch/docs/source
Michael Carilli 3b040c478a Make custom_fwd a no-op when not executed under autocast (#36171)
Summary:
Currently, a custom autograd function written with
```
@torch.cuda.amp.custom_fwd(cast_inputs=dtype)
def forward(ctx, *args):
    ...
```
casts incoming floating-point CUDA tensors to `dtype` unconditionally, regardless of whether the function executes in an autocast-enabled region.  I think I had the wrong idea there.  Autocast-disabled regions should give the user control of input types.  Also, `custom_fwd(cast_inputs=dtype)`-decorated functions should behave like the native fp32list/fp16list functions: the C++-side casting wrappers have no effect when autocast is disabled, and `custom_fwd`'s casting should behave the same way.

The present PR changes `custom_fwd` so that it casts only in autocast-enabled regions (it also updates `custom_fwd` to ignore fp64 inputs, matching the C++ wrappers).
Pull Request resolved: https://github.com/pytorch/pytorch/pull/36171

Differential Revision: D22179511

Pulled By: ngimel

fbshipit-source-id: 5a93d070179a43206066bce19da0a5a19ecaabbd
2020-06-23 10:23:02 -07:00
_static
_templates
_templates-stable
community
notes Make custom_fwd a no-op when not executed under autocast (#36171) 2020-06-23 10:23:02 -07:00
rpc
scripts
__config__.rst
amp.rst Make custom_fwd a no-op when not executed under autocast (#36171) 2020-06-23 10:23:02 -07:00
autograd.rst [RELAND2] Change AccumulateGrad to yield .grads that match weights' memory layout (#40358) 2020-06-22 17:13:21 -07:00
bottleneck.rst
checkpoint.rst
conf.py Meta tensors, but without code deduplication (#38490) 2020-06-22 09:18:33 -07:00
cpp_extension.rst correct some cpp extension code usages and documents (#39766) 2020-06-10 08:31:22 -07:00
cpp_index.rst
cuda.rst
cudnn_persistent_rnn.rst
data.rst
distributed.rst
distributions.rst
dlpack.rst
docutils.conf
futures.rst Adding torch.futures to API docs (#40051) 2020-06-17 17:55:48 -07:00
hub.rst
index.rst Adding torch.futures to API docs (#40051) 2020-06-17 17:55:48 -07:00
jit.rst
jit_builtin_functions.rst
jit_language_reference.rst
jit_python_reference.rst [JIT] Add support for with statements (#34705) 2020-06-18 16:57:18 -07:00
jit_unsupported.rst [JIT] Make torch.unique_consecutive compatible (#39339) 2020-06-02 14:54:29 -07:00
math-quantizer-equation.png
model_zoo.rst
multiprocessing.rst
name_inference.rst Add arcosh, arcsinh and arctanh to unary ops (#38388) 2020-06-04 11:40:55 -07:00
named_tensor.rst
nn.functional.rst
nn.init.rst
nn.rst
onnx.rst [ONNX] Update pytorch/onnx docs for new export API args (#39802) 2020-06-19 13:38:47 -07:00
optim.rst
packages.rst
quantization.rst quant docs: add and clean up ELU (#40377) 2020-06-23 09:02:43 -07:00
random.rst Remove duplicated entries in random.rst (#39725) 2020-06-10 16:51:15 -07:00
rpc.rst Improve RPC documents (#40296) 2020-06-19 15:34:49 -07:00
sparse.rst
storage.rst
tensor_attributes.rst
tensor_view.rst Add view_as_real, view_as_complex for complex tensors (#39099) 2020-06-22 15:15:27 -07:00
tensorboard.rst
tensors.rst Add view_as_real, view_as_complex for complex tensors (#39099) 2020-06-22 15:15:27 -07:00
torch.rst Add view_as_real, view_as_complex for complex tensors (#39099) 2020-06-22 15:15:27 -07:00
type_info.rst