| Name | Last commit message | Last commit date |
| --- | --- | --- |
| backward_compatibility | Remove `fbgemm_is_cpu_supported` in favor of `torch.backends.quantized.supported_qengines` (#26840) | 2019-09-27 13:45:15 -07:00 |
| bottleneck | | |
| cpp | C++ API parity: AdaptiveMaxPool2d | 2019-09-27 12:41:38 -07:00 |
| cpp_api_parity | Improve C++ maxpool and avgpool (#26521) | 2019-09-25 13:52:58 -07:00 |
| cpp_extensions | Delete backwards compatibility Backend overload for registerOp (#25914) | 2019-09-25 07:21:44 -07:00 |
| custom_operator | | |
| data | | |
| error_messages | | |
| expect | | |
| jit | | |
| onnx | export baddbmm (#26901) | 2019-09-26 22:53:21 -07:00 |
| optim | | |
| test_module | | |
| common_cuda.py | | |
| common_device_type.py | Lets generic tests use multiple devices (#26594) | 2019-09-25 10:16:22 -07:00 |
| common_distributed.py | Multiple fixes to test_c10d.py. (#25441) | 2019-08-30 18:22:58 -07:00 |
| common_methods_invocations.py | Fix nuclear norm with `requires_grad=True` (#26303) | 2019-09-26 12:08:25 -07:00 |
| common_nn.py | Allow batch size of 0 in Conv | 2019-09-23 14:47:29 -07:00 |
| common_quantization.py | Add more inplace arguments to quantization top level API (#26782) | 2019-09-26 00:07:07 -07:00 |
| common_quantized.py | Remove `fbgemm_is_cpu_supported` in favor of `torch.backends.quantized.supported_qengines` (#26840) | 2019-09-27 13:45:15 -07:00 |
| common_utils.py | Serialization for per channel qtensor (#26339) | 2019-09-23 13:28:11 -07:00 |
| expecttest.py | | |
| HowToWriteTestsUsingFileCheck.md | | |
| hypothesis_utils.py | Try to disable annoying hypothesis warnings again (#26853) | 2019-09-25 20:21:58 -07:00 |
| jit_utils.py | add CondValue to unify refinements and code emission (#26145) | 2019-09-23 14:24:18 -07:00 |
| run_test.py | Makes test_indexing.py device generic (#26634) | 2019-09-23 11:52:48 -07:00 |
| simulate_nccl_errors.py | | |
| test_autograd.py | enable double backward for non-cudnn LSTM and GRU (#26660) | 2019-09-25 17:38:18 -07:00 |
| test_c10d.py | Add bitwise distributed reduction ops (#26824) | 2019-09-26 08:09:49 -07:00 |
| test_c10d_spawn.py | Build torch.distributed with Gloo backend on macOS (#25260) | 2019-09-05 07:09:50 -07:00 |
| test_cpp_api_parity.py | Improve C++ maxpool and avgpool (#26521) | 2019-09-25 13:52:58 -07:00 |
| test_cpp_extensions.py | | |
| test_cuda.py | Moves more tests to TestTorchDeviceType (#26435) | 2019-09-19 01:49:34 -07:00 |
| test_cuda_primary_ctx.py | | |
| test_dataloader.py | Fix no auto batching bugs: cannot bulk load; not work with namedtuple (#26065) | 2019-09-16 07:22:31 -07:00 |
| test_dist_autograd.py | Attach 'send' autograd function to the autograd graph as part of RPC. (#24876) | 2019-09-01 23:54:01 -07:00 |
| test_distributed.py | Make scatter/gather arguments optional (#25575) | 2019-09-03 12:27:05 -07:00 |
| test_distributions.py | Enables `_do_cuda_non_default_stream` (#25989) | 2019-09-11 16:00:50 -07:00 |
| test_docs_coverage.py | expose `parse_schema` and `__eq__` function to python and add round trip tests (#23208) | 2019-09-06 15:50:56 -07:00 |
| test_expecttest.py | | |
| test_fake_quant.py | Serialization and range reduction support for Fake Quant/Observer (#26519) | 2019-09-27 10:09:39 -07:00 |
| test_function_schema.py | Add isBackwardCompatibleWith for Argument and FunctionSchema (#23409) | 2019-09-13 20:37:14 -07:00 |
| test_indexing.py | Makes test_indexing.py device generic (#26634) | 2019-09-23 11:52:48 -07:00 |
| test_jit.py | Remove `fbgemm_is_cpu_supported` in favor of `torch.backends.quantized.supported_qengines` (#26840) | 2019-09-27 13:45:15 -07:00 |
| test_jit_disabled.py | | |
| test_jit_fuser.py | autodiff changes to enable profiling | 2019-09-25 10:11:44 -07:00 |
| test_jit_py3.py | Make jit dicts ordered (#26465) | 2019-09-19 15:09:02 -07:00 |
| test_jit_string.py | | |
| test_logging.py | | |
| test_mkldnn.py | | |
| test_multiprocessing.py | | |
| test_multiprocessing_spawn.py | | |
| test_namedtensor.py | Named tensor support for: `index_fill_`, `index_fill`, `squeeze`, `median(Tensor)` (#26914) | 2019-09-27 12:28:49 -07:00 |
| test_namedtuple_return_api.py | | |
| test_nccl.py | | |
| test_nn.py | Remove `fbgemm_is_cpu_supported` in favor of `torch.backends.quantized.supported_qengines` (#26840) | 2019-09-27 13:45:15 -07:00 |
| test_numba_integration.py | | |
| test_optim.py | Resolve #25605 cyclic reference in `_LRScheduler` (#25776) | 2019-09-18 06:08:35 -07:00 |
| test_qat.py | Add `torch.backends.mkldnn.enabled` flag (#25459) | 2019-09-11 12:09:40 -07:00 |
| test_quantization.py | Remove `fbgemm_is_cpu_supported` in favor of `torch.backends.quantized.supported_qengines` (#26840) | 2019-09-27 13:45:15 -07:00 |
| test_quantized.py | Remove `fbgemm_is_cpu_supported` in favor of `torch.backends.quantized.supported_qengines` (#26840) | 2019-09-27 13:45:15 -07:00 |
| test_quantized_models.py | Remove `fbgemm_is_cpu_supported` in favor of `torch.backends.quantized.supported_qengines` (#26840) | 2019-09-27 13:45:15 -07:00 |
| test_quantized_nn_mods.py | Remove `fbgemm_is_cpu_supported` in favor of `torch.backends.quantized.supported_qengines` (#26840) | 2019-09-27 13:45:15 -07:00 |
| test_quantized_tensor.py | quantized_tensor tests (#26784) | 2019-09-25 10:33:30 -07:00 |
| test_quantizer.py | Remove `fbgemm_is_cpu_supported` in favor of `torch.backends.quantized.supported_qengines` (#26840) | 2019-09-27 13:45:15 -07:00 |
| test_rpc.py | Added test case for reinit (#26506) | 2019-09-24 16:39:33 -07:00 |
| test_sparse.py | Implement multiple dispatch (#26468) (#26501) | 2019-09-20 10:12:04 -07:00 |
| test_tensorboard.py | fix flaky test (#26395) | 2019-09-19 11:13:31 -07:00 |
| test_throughput_benchmark.py | | |
| test_torch.py | Remove `fbgemm_is_cpu_supported` in favor of `torch.backends.quantized.supported_qengines` (#26840) | 2019-09-27 13:45:15 -07:00 |
| test_type_hints.py | | |
| test_type_info.py | | |
| test_type_promotion.py | Add `torch.can_cast(from, to)` function (#26805) | 2019-09-27 08:40:34 -07:00 |
| test_utils.py | Hub improvements (#26723) | 2019-09-25 08:21:50 -07:00 |