pytorch/torch/nn (latest commit: 2024-11-15 23:26:23 +00:00)

attention/           Add some error messages for flexattention (#138891)                                                        2024-11-13 04:05:29 +00:00
backends/
intrinsic/
modules/             Document the parameter (hx) that RNN actually uses (#140575)                                               2024-11-14 14:45:17 +00:00
parallel/            Revert "Deprecate torch._utils.is_compiling() and torch._dynamo.external_utils.is_compiling() (#127690)"  2024-11-05 23:10:38 +00:00
qat/
quantizable/
quantized/
utils/               Add APIs to separate norm calculation and gradient scaling in nn.utils.clip_grad_norm_ (#139662)          2024-11-07 23:13:23 +00:00
__init__.py
_reduction.py
common_types.py
cpp.py
functional.py        Add Weighted Loss Functions to PyTorch: WMSE, WMAE, and Weighted Huber Loss (#132049)                      2024-10-31 21:59:43 +00:00
functional.pyi.in
grad.py
init.py
parameter.py         remove typo in UninitializedParameter docstring (#140197)                                                  2024-11-15 23:26:23 +00:00
parameter.pyi        [BE]: Update Typeguard to TypeIs for better type inference (#133814)                                       2024-10-26 15:07:13 +00:00
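The `utils/` entry above (#139662) concerns `nn.utils.clip_grad_norm_`, which the PR splits into separate norm-calculation and gradient-scaling APIs. As a minimal sketch of the combined behavior that long-standing function provides (the model shape and data below are illustrative, not taken from the listing):

```python
import torch

# Illustrative model and forward/backward pass (hypothetical example data).
model = torch.nn.Linear(4, 2)
out = model(torch.randn(8, 4))
out.pow(2).mean().backward()

# clip_grad_norm_ computes the total gradient norm across all parameters
# and rescales the gradients in place so the total norm is at most max_norm.
# It returns the total norm measured BEFORE clipping.
pre_clip_norm = torch.nn.utils.clip_grad_norm_(model.parameters(), max_norm=1.0)

# Recompute the total norm after clipping: it should not exceed max_norm
# (up to floating-point tolerance).
post_clip_norm = torch.linalg.vector_norm(
    torch.stack([torch.linalg.vector_norm(p.grad) for p in model.parameters()])
)
print(float(post_clip_norm))
```

Separating the two steps (as the PR's title describes) lets callers compute the norm once, e.g. for logging, and apply the scaling independently.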