Commit graph

22 commits

Author SHA1 Message Date
Natalia Gimelshein
45aa54d83c relax test deadlines
Summary: Relax test deadlines for c2 tests. We run on loaded machines, and timings are unreliable.

Test Plan: Fixes existing tests

Reviewed By: mruberry

Differential Revision: D28690006

fbshipit-source-id: 457707e81a1ec92548c1f23ea7a0022fa0a3bfda
2021-05-25 15:02:52 -07:00
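The deadlines being relaxed here are most likely hypothesis's per-example `@settings(deadline=...)` checks. As a stdlib-only sketch of the idea (the `deadline` helper below is hypothetical, not caffe2 code):

```python
import functools
import time

def deadline(seconds=None):
    """Fail a test that runs longer than `seconds`; None disables the check.

    Hypothetical stand-in for hypothesis's settings(deadline=...).
    """
    def decorate(fn):
        @functools.wraps(fn)
        def wrapper(*args, **kwargs):
            start = time.monotonic()
            result = fn(*args, **kwargs)
            elapsed = time.monotonic() - start
            if seconds is not None and elapsed > seconds:
                raise AssertionError(
                    "%s exceeded %ss deadline (%.3fs)" % (fn.__name__, seconds, elapsed))
            return result
        return wrapper
    return decorate

@deadline(seconds=None)  # relaxed: never fails, however loaded the machine is
def slow_test():
    time.sleep(0.01)
    return True
```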
Taylor Robie
6989eb60e5 Remove timeouts for C2 tests
Summary: When run on very heavily loaded machines, some of these tests time out. It's not an issue with the test; it's an issue with the environment. I've removed the timeout so we at least keep unit test coverage.

Test Plan: N/A: Fix breakages

Reviewed By: ngimel

Differential Revision: D28492334

fbshipit-source-id: aed3ee371763161aab2d356f5623c7df053fda6f
2021-05-17 16:39:30 -07:00
Bugra Akyildiz
27c7158166 Remove __future__ imports for legacy Python2 support (#45033)
Summary:
There is a tool called `2to3` whose `future` fixer specifically removes these imports; the `caffe2` directory has the most redundant ones:

```2to3 -f future -w caffe2```

Pull Request resolved: https://github.com/pytorch/pytorch/pull/45033

Reviewed By: seemethere

Differential Revision: D23808648

Pulled By: bugra

fbshipit-source-id: 38971900f0fe43ab44a9168e57f2307580d36a38
2020-09-23 17:57:02 -07:00
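For context, these imports are no-ops on Python 3, which is why the `future` fixer can delete them wholesale without changing behavior; a minimal demonstration:

```python
# On Python 3, __future__ imports like these change nothing -- the
# behaviours they request are already the default -- so 2to3's `future`
# fixer can remove them without altering semantics.
from __future__ import absolute_import, division, print_function, unicode_literals

# True division is on by default in Python 3, with or without the import.
result = 3 / 2
```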
Christopher Whelan
5cd0f5e8ec [PyFI] Update hypothesis and switch from tp2 (#41645)
Summary:
Pull Request resolved: https://github.com/pytorch/pytorch/pull/41645

Pull Request resolved: https://github.com/facebookresearch/pytext/pull/1405

Test Plan: buck test

Reviewed By: thatch

Differential Revision: D20323893

fbshipit-source-id: 54665d589568c4198e96a27f0ed8e5b41df7b86b
2020-08-08 12:13:04 -07:00
Huan Gui
be757957ba Support softmax with D == 0 (#29167)
Summary:
Pull Request resolved: https://github.com/pytorch/pytorch/pull/29167

As titled.

This fix is crucial as multi_channel splitting would create history that has no items (i.e., D == 0), which leads to flow failure.

Test Plan:
Unittest

flow test:

before fix: f148783160

after fix: f149082299

buck test mode/dev-nosan caffe2/caffe2/python/operator_test:softmax_ops_test

Reviewed By: xianjiec

Differential Revision: D18296081

fbshipit-source-id: e0bb2dc2c4e5b465e213f31e5c5ced3a7e1fd574
2019-11-11 00:46:10 -08:00
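The degenerate case being fixed can be illustrated with a NumPy sketch (the actual change lives in the Caffe2 softmax operator, not in Python; `softmax_2d` is hypothetical):

```python
import numpy as np

def softmax_2d(x):
    """Row-wise softmax that tolerates an empty feature dimension (D == 0)."""
    if x.shape[1] == 0:
        # Nothing to normalize over: return an empty array of the same shape
        # instead of failing on the max/sum over an empty axis.
        return x.copy()
    shifted = x - x.max(axis=1, keepdims=True)  # subtract row max for stability
    e = np.exp(shifted)
    return e / e.sum(axis=1, keepdims=True)

empty = softmax_2d(np.zeros((4, 0)))            # D == 0: no NaNs, no crash
full = softmax_2d(np.array([[1.0, 2.0, 3.0]]))  # ordinary case
```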
rohithkrn
aa88c2c0b6 Unify gpu_support variable in python tests (#16748)
Summary:
Assign `has_gpu_support = has_cuda_support or has_hip_support` and make the corresponding changes in the Python tests.
Pull Request resolved: https://github.com/pytorch/pytorch/pull/16748

Differential Revision: D13983132

Pulled By: bddppq

fbshipit-source-id: ca496fd8c6ae3549b736bebd3ace7fa20a6dad7f
2019-02-07 00:29:51 -08:00
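The unification amounts to deriving one capability flag instead of checking CUDA and HIP separately in every test; a sketch (the flag values below are placeholders, since caffe2 detects them at import time, and `skip_reason` is hypothetical):

```python
# Placeholder detection results; in caffe2 these come from workspace probing.
has_cuda_support = False
has_hip_support = False

# The single flag tests consult instead of checking both backends themselves.
has_gpu_support = has_cuda_support or has_hip_support

def skip_reason():
    """Return a skip message when no GPU backend is available."""
    return None if has_gpu_support else "no GPU (CUDA or HIP) available"
```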
Ansha Yu
3b1a5a1b8a Refactor tests part 2 (#11811)
Summary:
Follow-up to the [first refactor](https://github.com/pytorch/pytorch/pull/11350). Increases test coverage.
Pull Request resolved: https://github.com/pytorch/pytorch/pull/11811

Reviewed By: houseroad

Differential Revision: D9923074

Pulled By: ajyu

fbshipit-source-id: 0f899bb9e9a75bf7ed939e06cc9b028daa7f6bd9
2018-09-19 10:09:28 -07:00
Yan Zhu
8364470e5c fix empty batch for softmax (#9075)
Summary:
Closes https://github.com/pytorch/pytorch/pull/9075

As titled.

Reviewed By: QueryConnectionException

Differential Revision: D8710616

fbshipit-source-id: ca505e1a733cc24db9e2ab83a5395c64fa8360c4
2018-07-01 16:40:14 -07:00
Orion Reblitz-Richardson
1d5780d42c Remove Apache headers from source.
* LICENSE file contains details, so removing from individual source files.
2018-03-27 13:10:18 -07:00
Yangqing Jia
8286ce1e3a Re-license to Apache
Summary: Closes https://github.com/caffe2/caffe2/pull/1260

Differential Revision: D5906739

Pulled By: Yangqing

fbshipit-source-id: e482ba9ba60b5337d9165f28f7ec68d4518a0902
2017-09-28 16:22:00 -07:00
Aapo Kyrola
667b8347a2 stabilize softmax_ops_test
Summary: softmax_ops_test occasionally fails with gradient checks. Stabilize by setting the numpy random seed. Also reduce some dimensions for the large input test to make it run faster.

Reviewed By: harouwu

Differential Revision: D5292106

fbshipit-source-id: a21eec89e18d30ac7c5609dacf5d413e841841a6
2017-06-22 13:50:32 -07:00
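Seeding makes the random inputs fed to the gradient checker identical on every run, which is what removes the flakiness; a minimal sketch (`make_test_input` is illustrative, not the test's actual helper):

```python
import numpy as np

def make_test_input(seed=0, shape=(2, 3)):
    """Deterministic random input: same seed, same array, every run."""
    np.random.seed(seed)
    return np.random.rand(*shape).astype(np.float32)

a = make_test_input()
b = make_test_input()  # identical to `a`, so gradient checks are repeatable
```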
Anmol Kalia
7f98dc28cb Refactored spatial softmax
Summary: Refactored SoftmaxWithLoss by removing the code for spatial=1 mode and created a new op SpatialSoftmaxWithLoss that has the spatial mode implemented.

Reviewed By: viswanathgs

Differential Revision: D5104120

fbshipit-source-id: 8ab999e32c916b2a39a670a7b2a3365401535f24
2017-05-26 14:50:43 -07:00
Yury Zemlyanskiy
3abd0cb623 Add axis argument to SoftmaxWithLoss
Summary: `axis` argument for SoftmaxWithLoss (it doesn't yet work for the spatial case).

Reviewed By: akyrola

Differential Revision: D5025797

fbshipit-source-id: 9e3cf39223af3f2c8bb357f8d9fe952b7349f913
2017-05-09 19:36:00 -07:00
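What an `axis` argument means for softmax can be sketched in NumPy (the real operator is C++/CUDA; this function is illustrative only):

```python
import numpy as np

def softmax(x, axis=1):
    """Softmax along an arbitrary axis, normalizing over that axis only."""
    shifted = x - x.max(axis=axis, keepdims=True)  # row max for stability
    e = np.exp(shifted)
    return e / e.sum(axis=axis, keepdims=True)

x = np.arange(24, dtype=np.float64).reshape(2, 3, 4)
p = softmax(x, axis=2)  # every slice along axis 2 sums to 1
```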
Aapo Kyrola
23183b9642 memory-saving only_loss argument for SoftmaxWithLoss
Summary: When only_loss=True is enabled, the softmax output buffer is shared with the gradient buffer (which is of the same size). Added tests for this. Only for the GPU version for now.

Reviewed By: salexspb

Differential Revision: D4843991

fbshipit-source-id: 834d2a1b357d784e4d64efe484f893442201ad6a
2017-04-06 13:04:31 -07:00
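The buffer sharing works because d(loss)/d(logit) = softmax − onehot has exactly the shape of the softmax output, so the gradient can overwrite it in place; a hypothetical NumPy illustration (not the caffe2 kernel):

```python
import numpy as np

def softmax_loss_grad_inplace(probs, labels):
    """Compute the averaged cross-entropy gradient inside the probs buffer.

    Writing softmax - onehot back into `probs` saves one NxD allocation,
    which is the memory saving only_loss=True exploits.
    """
    n = probs.shape[0]
    probs[np.arange(n), labels] -= 1.0  # subtract the one-hot labels in place
    probs /= n                          # average over the batch
    return probs                        # same array object: no new buffer

p = np.array([[0.7, 0.2, 0.1], [0.1, 0.8, 0.1]])
g = softmax_loss_grad_inplace(p, np.array([0, 1]))
```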
Aapo Kyrola
cf201ebac8 support axis for cudnn softmax
Summary: Added support for axis in the cudnn version of softmax, and added cudnn tests to softmax_ops_test.

Reviewed By: urikz

Differential Revision: D4835409

fbshipit-source-id: 9150b969237e38daebff961fee3c36759f834ac4
2017-04-05 14:06:03 -07:00
Aapo Kyrola
ecd3bda44e Fix Softmax for CUDA
Summary:
Following jamesr66a's brilliant observation, this diff fixes the non-CUDNN versions of Softmax. The op did not take into account that blocks can run in parallel, and thus could overwrite each other's values, particularly the "row max" that is important for numerical stability.

So in this diff:
1) SoftmaxOp now shares all the code with SoftmaxWithLoss, which had the better implementation

+ Strengthened the test case and renamed the file.

Reviewed By: jamesr66a

Differential Revision: D4832929

fbshipit-source-id: 4a1bfa2106ceb65ec75f5b868323ee1e7a3457fb
2017-04-05 10:07:54 -07:00
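The "row max" at stake is the per-row maximum subtracted before exponentiation; without it, large logits overflow `exp`. A NumPy sketch of the stable formulation the shared code uses (illustrative, not the CUDA kernel):

```python
import numpy as np

def softmax_stable(x):
    """Row-wise softmax that subtracts the per-row max before exp.

    This is the quantity the buggy kernel let parallel blocks clobber;
    computed correctly, even huge logits stay finite.
    """
    shifted = x - x.max(axis=1, keepdims=True)  # row max: all entries <= 0
    e = np.exp(shifted)                          # no overflow possible
    return e / e.sum(axis=1, keepdims=True)

big = np.array([[1000.0, 1001.0, 1002.0]])  # naive exp(1000) would overflow
p = softmax_stable(big)
```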
Aapo Kyrola
8421bf7c60 Faster softmaxWithLoss rowMaxKernel
Summary:
We did not parallelize over D, which can be very large, especially in RNN models. This speeds things up significantly: in my quick test with lstm_benchmark and nvprof, the total time of RowMaxKernel dropped from 1.2s to 0.28s.

+ added softmaxwithloss to the lstm_benchmark

Reviewed By: jamesr66a

Differential Revision: D4800629

fbshipit-source-id: 3400ea1064b1eb2793bc403df2c1b68801d545e5
2017-03-30 15:49:46 -07:00
Amy Zhang
5c007be804 add soft label functionality to softmax with loss op
Differential Revision: D4527240

fbshipit-source-id: 548bf943857adb8f198348cc5b17ec52dc65bd2e
2017-02-10 09:01:53 -08:00
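Soft labels replace a single integer class per row with a full probability distribution; the loss becomes cross-entropy against that distribution. A hypothetical NumPy sketch (names are illustrative, not the operator's API):

```python
import numpy as np

def soft_label_cross_entropy(probs, soft_labels, eps=1e-12):
    """Cross-entropy of predicted probs against per-row label distributions.

    With a one-hot row in `soft_labels` this reduces to the usual
    hard-label loss; fractional rows spread the target mass.
    """
    return float(-(soft_labels * np.log(probs + eps)).sum(axis=1).mean())

probs = np.array([[0.7, 0.2, 0.1]])
hard = soft_label_cross_entropy(probs, np.array([[1.0, 0.0, 0.0]]))  # ~ -ln(0.7)
soft = soft_label_cross_entropy(probs, np.array([[0.5, 0.5, 0.0]]))
```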
Aapo Kyrola
06398e9bfb softmax-with-loss: gracefully handle cases when total weight is 0
Summary:
Spatial Softmax allows specifying locations that are not counted toward the loss. If none of the locations are counted, this resulted in NaNs and headaches. This diff fixes that by explicitly handling these cases.

+ assertion for label blob dimension(0)

Created a new test as well.

Differential Revision: D4442939

fbshipit-source-id: 8641bfad2a994e517ca3eda39345380a6ca1ba50
2017-01-20 15:29:21 -08:00
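The NaN comes from dividing the weighted loss sum by a zero total weight; the fix is to special-case that divisor. A plain-Python sketch of the idea (the real fix is inside the Caffe2 op; `weighted_softmax_loss` is hypothetical):

```python
def weighted_softmax_loss(per_position_losses, weights):
    """Weighted-average loss that returns 0 instead of NaN when every
    position is excluded (total weight of zero)."""
    total_weight = sum(weights)
    if total_weight == 0:
        return 0.0  # no counted locations: define the loss as zero
    weighted = sum(l * w for l, w in zip(per_position_losses, weights))
    return weighted / total_weight

ok = weighted_softmax_loss([1.0, 2.0], [1.0, 1.0])     # ordinary average
empty = weighted_softmax_loss([1.0, 2.0], [0.0, 0.0])  # would be 0/0 before
```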
Yangqing Jia
5eb836880d Add unittest.main() lines to test scripts under python/operator_test
Summary:
Needed by oss.

This is done by running the following line:

  find . -name "*_test.py" -exec sed -i '$ a \\nif __name__ == "__main__":\n    import unittest\n    unittest.main()' {} \;

Reviewed By: ajtulloch

Differential Revision: D4223848

fbshipit-source-id: ef4696e9701d45962134841165c53e76a2e19233
2016-11-29 15:18:37 -08:00
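The block the sed one-liner appends makes each test file directly runnable with `python foo_test.py`; in context it looks like this (the test class here is a hypothetical stand-in for a file under python/operator_test):

```python
import unittest

class ExampleOperatorTest(unittest.TestCase):
    """Hypothetical stand-in for a caffe2 operator test."""
    def test_probs_sum_to_one(self):
        probs = [0.25, 0.25, 0.5]
        self.assertAlmostEqual(sum(probs), 1.0)

# The exact lines the find/sed command appends to every *_test.py:
if __name__ == "__main__":
    import unittest
    unittest.main()
```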
Yangqing Jia
589398950f fbsync at f5a877 2016-11-18 15:41:06 -08:00
Yangqing Jia
238ceab825 fbsync. TODO: check if build files need update. 2016-11-15 00:00:46 -08:00