Summary: Relax test deadlines for c2 tests. We run on loaded machines, and timings are unreliable.
Test Plan: Fixes existing tests
Reviewed By: mruberry
Differential Revision: D28690006
fbshipit-source-id: 457707e81a1ec92548c1f23ea7a0022fa0a3bfda
Summary: When run on very heavily loaded machines, some of these tests are timing out. It's not an issue with the test, it's an issue with the environment. I've removed the timeout so we at least keep unit test coverage.
Test Plan: N/A: Fix breakages
Reviewed By: ngimel
Differential Revision: D28492334
fbshipit-source-id: aed3ee371763161aab2d356f5623c7df053fda6f
Summary:
The `2to3` tool has a `future` fixer that removes these redundant `from __future__` imports; the `caffe2` directory has the most of them:
```
2to3 -f future -w caffe2
```
Pull Request resolved: https://github.com/pytorch/pytorch/pull/45033
Reviewed By: seemethere
Differential Revision: D23808648
Pulled By: bugra
fbshipit-source-id: 38971900f0fe43ab44a9168e57f2307580d36a38
Summary:
Pull Request resolved: https://github.com/pytorch/pytorch/pull/29167
As titled.
This fix is crucial because multi_channel splitting can create a history with no items (i.e., D == 0), which leads to flow failure.
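A minimal sketch (plain NumPy, not the caffe2 operator) of the degenerate case this fix guards against — a softmax over an empty feature dimension (D == 0) has nothing to normalize, so it should return an empty result instead of failing mid-flow:

```python
import numpy as np

def softmax_rows(X):
    """Row-wise softmax that tolerates an empty feature dimension."""
    if X.shape[1] == 0:
        # D == 0: no items to normalize over; return an empty result.
        return np.empty_like(X)
    E = np.exp(X - X.max(axis=1, keepdims=True))  # stabilized
    return E / E.sum(axis=1, keepdims=True)

assert softmax_rows(np.zeros((5, 0))).shape == (5, 0)
assert np.allclose(softmax_rows(np.array([[0.0, 0.0]])), [[0.5, 0.5]])
```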
Test Plan:
Unittest
flow test:
before fix: f148783160
after fix: f149082299
buck test mode/dev-nosan caffe2/caffe2/python/operator_test:softmax_ops_test
Reviewed By: xianjiec
Differential Revision: D18296081
fbshipit-source-id: e0bb2dc2c4e5b465e213f31e5c5ced3a7e1fd574
Summary: softmax_ops_test occasionally fails with gradient checks. Stabilize by setting the numpy random seed. Also reduce some dimensions for the large input test to make it run faster.
Reviewed By: harouwu
Differential Revision: D5292106
fbshipit-source-id: a21eec89e18d30ac7c5609dacf5d413e841841a6
Summary: Refactored SoftmaxWithLoss by removing the code for the spatial=1 mode, and created a new op, SpatialSoftmaxWithLoss, that implements the spatial mode.
Reviewed By: viswanathgs
Differential Revision: D5104120
fbshipit-source-id: 8ab999e32c916b2a39a670a7b2a3365401535f24
Summary: When only_loss=True is enabled, the softmax output buffer is shared with the gradient buffer (which is the same size). Added tests for this. GPU version only for now.
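A hedged NumPy sketch of the buffer-sharing idea (not the caffe2 kernel itself): for softmax + cross-entropy, the input gradient is (softmax − one_hot)/N, which has the same shape as the softmax output, so the output buffer can be overwritten in place rather than allocating a second N×D buffer.

```python
import numpy as np

def softmax_xent_grad_inplace(Y, labels):
    """Y holds the softmax output on entry and dL/dX on exit.

    Illustrative only: assumes averaged cross-entropy loss, so
    dL/dX = (Y - one_hot(labels)) / N, written into Y's own storage.
    """
    N = Y.shape[0]
    Y[np.arange(N), labels] -= 1.0  # subtract the one-hot target in place
    Y /= N                          # average over the batch
    return Y                        # same array, now holding the gradient
```

Sharing the buffer this way only works when the softmax probabilities are not needed after the backward pass — which is exactly the only_loss=True case.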
Reviewed By: salexspb
Differential Revision: D4843991
fbshipit-source-id: 834d2a1b357d784e4d64efe484f893442201ad6a
Summary: Added support for `axis` to the cudnn version of softmax, and added cudnn tests to softmax_ops_test.
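A hedged illustration of the `axis` semantics (my reading of caffe2's convention, in plain NumPy): dimensions before `axis` are coalesced into rows (N), the remaining dimensions into columns (D), and an ordinary 2D softmax runs over each row.

```python
import numpy as np

def softmax_with_axis(X, axis=1):
    """2D softmax over a view where dims [0, axis) form N and the rest form D."""
    N = int(np.prod(X.shape[:axis]))
    X2 = X.reshape(N, -1)
    X2 = X2 - X2.max(axis=1, keepdims=True)  # stabilize before exp
    E = np.exp(X2)
    Y = E / E.sum(axis=1, keepdims=True)
    return Y.reshape(X.shape)

X = np.random.randn(2, 3, 4)
Y = softmax_with_axis(X, axis=2)
assert np.allclose(Y.sum(axis=2), 1.0)  # each (i, j) slice sums to 1
```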
Reviewed By: urikz
Differential Revision: D4835409
fbshipit-source-id: 9150b969237e38daebff961fee3c36759f834ac4
Summary:
Following jamesr66a's brilliant observation, this diff fixes the non-CUDNN versions of Softmax. The op did not take into account that blocks can run in parallel and could therefore overwrite each other's values, in particular the "row max" that is important for numerical stability.
So in this diff:
1) SoftmaxOp now shares all its code with SoftmaxWithLoss, which had the better implementation.
+ Strengthened the test case and renamed the file.
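A short NumPy sketch of why the "row max" matters for numerical stability — without subtracting it, exp() overflows for large logits, which is the kind of corruption a clobbered row max produces:

```python
import numpy as np

def softmax_naive(x):
    # No max subtraction: exp() overflows for large inputs.
    e = np.exp(x)
    return e / e.sum()

def softmax_stable(x):
    # Subtracting the max shifts the largest exponent to exp(0) = 1.
    e = np.exp(x - x.max())
    return e / e.sum()

x = np.array([1000.0, 1001.0])
with np.errstate(over="ignore", invalid="ignore"):
    print(softmax_naive(x))   # [nan nan] -- exp overflowed to inf
print(softmax_stable(x))      # ~[0.2689 0.7311]
```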
Reviewed By: jamesr66a
Differential Revision: D4832929
fbshipit-source-id: 4a1bfa2106ceb65ec75f5b868323ee1e7a3457fb
Summary:
We did not parallelize over D, which can be very large, especially in RNN models. This speeds things up significantly: in a quick test with lstm_benchmark and nvprof, the total time of RowMaxKernel dropped from 1.2 s to 0.28 s.
+ added SoftmaxWithLoss to the lstm_benchmark
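A hedged sketch of the idea behind the speedup (NumPy stand-in, not the CUDA kernel): instead of one thread scanning all of D per row, the row max can be computed as a two-stage reduction over chunks of D, with the chunks processed in parallel.

```python
import numpy as np

def row_max_chunked(X, chunk=4):
    """Two-stage row max: partial maxima per chunk, then combine.

    Mimics a parallel reduction over D; `chunk` plays the role of the
    per-block tile size (an illustrative parameter, not from the diff).
    """
    N, D = X.shape
    # Stage 1: partial maxima, one per chunk of columns.
    parts = [X[:, i:i + chunk].max(axis=1) for i in range(0, D, chunk)]
    # Stage 2: reduce the partial maxima to the final row max.
    return np.max(np.stack(parts, axis=1), axis=1)

X = np.random.randn(3, 10)
assert np.allclose(row_max_chunked(X), X.max(axis=1))
```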
Reviewed By: jamesr66a
Differential Revision: D4800629
fbshipit-source-id: 3400ea1064b1eb2793bc403df2c1b68801d545e5
Summary:
Spatial Softmax allows specifying locations that are not counted toward the loss. If none of the locations were counted, this resulted in NaNs and headaches. This diff fixes that by handling these cases explicitly.
+ added an assertion for label blob dimension(0)
Created a new test as well.
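A minimal sketch of the failure mode (illustrative NumPy, with a hypothetical `masked_mean_loss` helper): the spatial loss averages over the counted locations, so when every location is ignored the count is 0 and a plain division yields NaN; the fix is to handle that case explicitly.

```python
import numpy as np

def masked_mean_loss(per_location_loss, counted_mask):
    """Average loss over counted locations; 0 when nothing is counted."""
    total = np.sum(per_location_loss * counted_mask)
    count = np.sum(counted_mask)
    if count == 0:
        return 0.0  # explicit handling instead of 0/0 -> NaN
    return total / count

losses = np.array([1.0, 2.0, 3.0])
print(masked_mean_loss(losses, np.array([1, 0, 1])))  # 2.0
print(masked_mean_loss(losses, np.array([0, 0, 0])))  # 0.0
```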
Differential Revision: D4442939
fbshipit-source-id: 8641bfad2a994e517ca3eda39345380a6ca1ba50
Summary:
Needed by oss.
This is done by running the following line:
```
find . -name "*_test.py" -exec sed -i '$ a \\nif __name__ == "__main__":\n import unittest\n unittest.main()' {} \;
```
Reviewed By: ajtulloch
Differential Revision: D4223848
fbshipit-source-id: ef4696e9701d45962134841165c53e76a2e19233