Gao, Xiang bcec8cc3f9 Add amax/amin (#43092)
Summary:
Add max/min operators (`amax`/`amin`) that return only values.
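
For illustration, a minimal sketch of the difference: `torch.max` returns a `(values, indices)` named tuple when reducing over a dimension, while the new `torch.amax` returns the values alone.

```python
import torch

x = torch.tensor([[1., 3.],
                  [4., 2.]])

# torch.max over a dim returns a (values, indices) named tuple.
values, indices = torch.max(x, dim=0)

# torch.amax returns only the values: tensor([4., 3.])
only_values = torch.amax(x, dim=0)

assert torch.equal(values, only_values)
```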

## Some important decisions to discuss
| **Question**                          | **Current State** |
|---------------------------------------|-------------------|
| Expose torch.max_values to python?    | No                |
| Remove max_values and only keep amax? | Yes               |
| Should amax support named tensors?    | Not in this PR    |

## Numpy compatibility

Reference: https://numpy.org/doc/stable/reference/generated/numpy.amax.html

| Parameter | PyTorch Behavior |
|-----------|------------------|
| `axis`: None or int or tuple of ints, optional. Axis or axes along which to operate. By default, the flattened input is used. If this is a tuple of ints, the maximum is selected over multiple axes, instead of a single axis or all the axes as before. | Named `dim`; behavior is the same as `torch.sum` (https://github.com/pytorch/pytorch/issues/29137) |
| `out`: ndarray, optional. Alternative output array in which to place the result. Must be of the same shape and buffer length as the expected output. | Same |
| `keepdims`: bool, optional. If this is set to True, the axes which are reduced are left in the result as dimensions with size one. With this option, the result will broadcast correctly against the input array. | Implemented as `keepdim` |
| `initial`: scalar, optional. The minimum value of an output element. Must be present to allow computation on an empty slice. | Not implemented in this PR; better to implement for all reductions in the future. |
| `where`: array_like of bool, optional. Elements to compare for the maximum. | Not implemented in this PR; better to implement for all reductions in the future. |
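
A short sketch of the `dim`/`keepdim` semantics from the table above (the numpy comparison is purely illustrative):

```python
import torch
import numpy as np

x = torch.arange(24.).reshape(2, 3, 4)

# dim accepts a tuple of ints, reducing over several axes at once,
# just like torch.sum: shape (2, 3, 4) -> (4,).
print(torch.amax(x, dim=(0, 1)))

# keepdim=True keeps the reduced axes as size-one dimensions: (1, 1, 4).
print(torch.amax(x, dim=(0, 1), keepdim=True).shape)

# numpy's amax gives the same shape for the equivalent arguments.
print(np.amax(x.numpy(), axis=(0, 1), keepdims=True).shape)
```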

**Note from numpy:**
> NaN values are propagated, that is if at least one item is NaN, the corresponding max value will be NaN as well. To ignore NaN values (MATLAB behavior), please use nanmax.

PyTorch's `amax`/`amin` have the same behavior.
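
A small check of that NaN propagation with `torch.amax`:

```python
import torch

x = torch.tensor([[1., float('nan')],
                  [4., 2.]])

# The slice containing NaN reduces to NaN; the other reduces normally.
print(torch.amax(x, dim=0))  # tensor([4., nan])
```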

Pull Request resolved: https://github.com/pytorch/pytorch/pull/43092

Reviewed By: ngimel

Differential Revision: D23360705

Pulled By: mruberry

fbshipit-source-id: 5bdeb08a2465836764a5a6fc1a6cc370ae1ec09d
2020-08-28 12:51:03 -07:00
_static
_templates
_templates-stable
community Add MSFT Owners to the Windows Maintainership (#42280) 2020-07-30 08:22:13 -07:00
notes Correct the windows docs (#43479) 2020-08-25 13:41:24 -07:00
rpc
scripts Optimize SiLU (Swish) op in PyTorch (#42976) 2020-08-16 13:21:57 -07:00
__config__.rst
amp.rst Added index_put to promotelist (#41035) 2020-07-07 20:36:55 -07:00
autograd.rst Update docs feature classifications (#39966) 2020-06-24 15:35:59 -07:00
bottleneck.rst
checkpoint.rst
complex_numbers.rst Doc note for complex (#41252) 2020-07-16 08:53:27 -07:00
conf.py restore old documentation references (#39086) 2020-07-09 15:20:10 -07:00
cpp_extension.rst
cpp_index.rst
cuda.rst
cudnn_persistent_rnn.rst
cudnn_rnn_determinism.rst [doc] Add LSTM non-deterministic workaround (#40893) 2020-07-21 16:20:02 -07:00
data.rst add prefetch_factor for multiprocessing prefetching process (#41130) 2020-07-24 08:38:13 -07:00
distributed.rst Add a link in RPC doc page to point to PT Distributed overview (#41108) 2020-07-08 14:00:05 -07:00
distributions.rst
dlpack.rst
docutils.conf
fft.rst Adds fft namespace (#41911) 2020-08-06 00:20:50 -07:00
futures.rst
hub.rst
index.rst Adds torch.linalg namespace (#42664) 2020-08-07 10:18:30 -07:00
jit.rst Grammatical corrections (#43473) 2020-08-25 12:09:14 -07:00
jit_builtin_functions.rst
jit_language_reference.rst
jit_python_reference.rst
jit_unsupported.rst
linalg.rst Adds linalg.det alias, fixes outer alias, updates alias testing (#42802) 2020-08-11 21:48:31 -07:00
math-quantizer-equation.png
mobile_optimizer.rst optimize_for_mobile: bring packed params to root module (#42740) 2020-08-08 15:53:20 -07:00
model_zoo.rst
multiprocessing.rst
name_inference.rst
named_tensor.rst Easier english updated tech docs (#42016) 2020-07-24 14:36:17 -07:00
nn.functional.rst Added SiLU activation function (#41034) 2020-07-10 07:37:30 -07:00
nn.init.rst
nn.rst Add Unflatten Module (#41564) 2020-07-21 07:43:02 -07:00
onnx.rst
optim.rst DOC: fail to build if there are warnings (#41335) 2020-07-28 22:33:44 -07:00
quantization-support.rst DOC: split quantization.rst into smaller pieces (#41321) 2020-07-25 23:59:40 -07:00
quantization.rst DOC: split quantization.rst into smaller pieces (#41321) 2020-07-25 23:59:40 -07:00
random.rst
rpc.rst Minor RPC doc fixes (#43337) 2020-08-20 14:17:07 -07:00
sparse.rst Update docs feature classifications (#39966) 2020-06-24 15:35:59 -07:00
storage.rst
tensor_attributes.rst Update docs feature classifications (#39966) 2020-06-24 15:35:59 -07:00
tensor_view.rst Adds movedim method, fixes movedim docs, fixes view doc links (#43122) 2020-08-17 14:24:52 -07:00
tensorboard.rst
tensors.rst Add amax/amin (#43092) 2020-08-28 12:51:03 -07:00
torch.nn.intrinsic.qat.rst DOC: split quantization.rst into smaller pieces (#41321) 2020-07-25 23:59:40 -07:00
torch.nn.intrinsic.quantized.rst DOC: split quantization.rst into smaller pieces (#41321) 2020-07-25 23:59:40 -07:00
torch.nn.intrinsic.rst DOC: split quantization.rst into smaller pieces (#41321) 2020-07-25 23:59:40 -07:00
torch.nn.qat.rst DOC: split quantization.rst into smaller pieces (#41321) 2020-07-25 23:59:40 -07:00
torch.nn.quantized.dynamic.rst DOC: split quantization.rst into smaller pieces (#41321) 2020-07-25 23:59:40 -07:00
torch.nn.quantized.rst DOC: split quantization.rst into smaller pieces (#41321) 2020-07-25 23:59:40 -07:00
torch.quantization.rst DOC: split quantization.rst into smaller pieces (#41321) 2020-07-25 23:59:40 -07:00
torch.rst Add amax/amin (#43092) 2020-08-28 12:51:03 -07:00
type_info.rst DOC: split quantization.rst into smaller pieces (#41321) 2020-07-25 23:59:40 -07:00