Mirror of https://github.com/saymrwulf/pytorch.git (synced 2026-05-14 20:57:59 +00:00)
Summary: (Updated per review feedback) `torch.floor_divide` is currently a function that can operate on two tensors or a tensor and a scalar (scalar x scalar floor division is handled natively by Python, and the JIT has a builtin function for it). This PR updates it to:

- have an out variant: `floor_divide(x, y, out=z)`
- be a method on a tensor: `x.floor_divide(y)`
- have an in-place variant: `x.floor_divide_(y)`
- work with sparse tensors

Tests for these new behaviors are added to test_sparse.py and test_torch.py. In addition, this PR:

- cleans up the existing sparse division and true_division code and improves their error messages
- adds testing of sparse true_division to test_sparse.py
- extends the existing floor_divide testing in test_torch.py to run on CUDA, too, not just the CPU

Unfortunately, making floor_divide a method requires breaking backwards compatibility, and floor_divide has been added to the BC whitelist since this break is intentional. The BC issue is that the first parameter name of torch.floor_divide is changing from input to self. If you previously called torch.floor_divide with keyword arguments, e.g. torch.floor_divide(input=x, other=y), you will need to update to torch.floor_divide(self=x, other=y), or the more common torch.floor_divide(x, y).

The intent of this PR is to allow floor_divide to be substituted for division (torch.div, /) wherever division was previously used. In 1.6 we expect torch.div to perform true_division, and floor_divide is how users can continue to perform integer division with tensors.

There are two potential follow-up issues suggested by this PR:

- the test framework might benefit from additional tensor construction classes, like one to create dividends and divisors for multiple dtypes
- the test framework might benefit from a universal function test class. While methods have reasonable coverage as part of test_torch.py's TestTensorOp tests, function coverage is spotty. Universal functions are similar enough that it should be possible to generate tests for them.

Pull Request resolved: https://github.com/pytorch/pytorch/pull/34552
Differential Revision: D20509850
Pulled By: mruberry
fbshipit-source-id: 2cd3c828aad67191c77f2ed8470411e246f604f8
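The three new call patterns described above (function with out=, method, and in-place method) can be illustrated with a small plain-Python sketch. This is not PyTorch code: the `Tensorish` class and list-based elementwise logic are illustrative stand-ins so the calling conventions are clear without depending on torch. It sticks to nonnegative operands, where floor division matches Python's `//` operator.

```python
# Hedged sketch: plain-Python emulation of the new floor_divide call patterns.
# Tensorish is a hypothetical stand-in for a 1-D integer tensor.

class Tensorish:
    def __init__(self, data):
        self.data = list(data)

    def floor_divide(self, other):
        # method form: x.floor_divide(y) returns a new "tensor"
        return Tensorish(a // b for a, b in zip(self.data, other.data))

    def floor_divide_(self, other):
        # in-place form: x.floor_divide_(y) mutates x and returns it
        self.data = [a // b for a, b in zip(self.data, other.data)]
        return self

def floor_divide(x, y, out=None):
    # function form, with an optional out= destination
    result = x.floor_divide(y)
    if out is not None:
        out.data = result.data  # write into the provided output, like out=z
        return out
    return result

x = Tensorish([7, 9, 10])
y = Tensorish([2, 3, 4])

assert floor_divide(x, y).data == [3, 3, 2]                     # function form
z = Tensorish([0, 0, 0])
assert floor_divide(x, y, out=z) is z and z.data == [3, 3, 2]   # out variant
assert x.floor_divide(y).data == [3, 3, 2]                      # method form
assert x.floor_divide_(y) is x and x.data == [3, 3, 2]          # in-place form
```

Note the in-place variant follows the PyTorch trailing-underscore convention and, like the real in-place methods, mutates the receiver and returns it.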