Commit graph

2780 commits

Author SHA1 Message Date
Mike Ruberry
b6f4bb0a70 Revert D23236088: [pytorch][PR] [caffe2] adds Cancel to SafeDequeueBlobsOp and SafeEnqueueBlobsOp
Test Plan: revert-hammer

Differential Revision:
D23236088 (0ccc38b773)

Original commit changeset: daa90d9ee324

fbshipit-source-id: 933c7deab177250075683a9bea143ac37f16a598
2020-09-16 23:32:50 -07:00
Danny Huang
0ccc38b773 [caffe2] adds Cancel to SafeDequeueBlobsOp and SafeEnqueueBlobsOp (#44495)
Summary:
## Motivation

* Make C2 ops cancellable so that execution can exit safely.
* Some C2 operators are blocking and therefore non-cancellable. When an error
  occurs we need to be able to safely stop all net execution so the exception
  can be thrown to the caller.

* When an error occurs in a net, or the net is cancelled, running ops have
  their `Cancel` method called.

* This diff adds a `Cancel` method to `SafeEnqueueBlobsOp`
  and `SafeDequeueBlobsOp` that calls `queue->close()` to force all
  blocking ops to return.
* Adds a unit test that verifies the error propagation.
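The close-to-unblock pattern described above can be sketched in Python (a toy analogue for illustration only, not the actual C2 implementation; `SafeQueue` and its method names are hypothetical):

```python
import threading
from collections import deque

class SafeQueue:
    """Toy blocking queue whose close() unblocks waiting dequeues,
    mirroring how Cancel() calls queue->close() to let blocked ops return."""
    def __init__(self):
        self._items = deque()
        self._cond = threading.Condition()
        self._closed = False

    def enqueue(self, item):
        with self._cond:
            if self._closed:
                return False          # enqueue on a closed queue fails safely
            self._items.append(item)
            self._cond.notify()
            return True

    def dequeue(self):
        with self._cond:
            # block until an item arrives or the queue is closed
            while not self._items and not self._closed:
                self._cond.wait()
            if self._items:
                return True, self._items.popleft()
            return False, None        # closed: blocked callers return safely

    def close(self):
        # what a Cancel() call would trigger on the op's underlying queue
        with self._cond:
            self._closed = True
            self._cond.notify_all()
```

After `close()`, every blocked `dequeue()` returns `(False, None)` instead of waiting forever, which is exactly the property that lets net execution stop and surface the error.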

Pull Request resolved: https://github.com/pytorch/pytorch/pull/44495

Test Plan:
## Unit Test added to verify that queue ops propagate errors
```
buck test caffe2/caffe2/python:hypothesis_test
```

Reviewed By: dzhulgakov

Differential Revision: D23236088

Pulled By: dahsh

fbshipit-source-id: daa90d9ee32483fb51195e269a52cf5987bb0a5a
2020-09-16 18:17:34 -07:00
Xiang Gao
20ac736200 Remove py2 compatible future imports (#44735)
Summary: Pull Request resolved: https://github.com/pytorch/pytorch/pull/44735

Reviewed By: mruberry

Differential Revision: D23731306

Pulled By: ezyang

fbshipit-source-id: 0ba009a99e475ddbe22981be8ac636f8a1c8b02f
2020-09-16 12:55:57 -07:00
Nikita Shulga
1718b16d15 [Caffe2] gcs_cuda_only is trivial if CUDA not available (#44578)
Summary:
Make `gcs_cuda_only` and `gcs_gpu_only` return empty device lists if CUDA/GPU (CUDA or ROCm) is not available

Pull Request resolved: https://github.com/pytorch/pytorch/pull/44578

Reviewed By: walterddr

Differential Revision: D23664227

Pulled By: malfet

fbshipit-source-id: 176b5d964c0b02b8379777cd9a38698c11818690
2020-09-16 12:24:08 -07:00
Yan Xie
285ba0d068 Enable fp16 for UniformFill (#44540)
Summary:
Pull Request resolved: https://github.com/pytorch/pytorch/pull/44540

Support fp16 as the output type for UniformFill

Reviewed By: jianyuh

Differential Revision: D23558030

fbshipit-source-id: 53a5b2c92cfe78cd11f55e6ee498e1bd682fe4a1
2020-09-15 15:09:18 -07:00
Yan Xie
4ce6af35c4 Enable fp16 for CUDA SparseLengthsSum/Mean (#44089)
Summary:
Pull Request resolved: https://github.com/pytorch/pytorch/pull/44089

Add support for fp16 as the input type in the SparseLengthsSum/Mean caffe2 operators

Reviewed By: xianjiec

Differential Revision: D23436877

fbshipit-source-id: 02fbef2fde17d4b0abea9ca5d17a36aa989f98a0
2020-09-15 11:10:54 -07:00
Brandon Lin
ea55820606 [dper3] Export PackSegments and UnpackSegments to Pytorch
Summary: As title.

Test Plan:
```
buck test //caffe2/caffe2/python/operator_test/:torch_integration_test -- test_pack_segments
```

Reviewed By: yf225

Differential Revision: D23610495

fbshipit-source-id: bd8cb61f2284a08a54091a4f982f01fcf681f215
2020-09-11 09:29:24 -07:00
Gang Shen
058d7228ec Expose the interface of nesterov of SGD Optimizer from caffe2 to dper
Summary:
Expose the interface of `nesterov` of SGD Optimizer from caffe2 to dper.

The dper sgd optimizer (https://fburl.com/diffusion/chpobg0h) already refers to the NAG sgd optimizer in caffe2 (https://fburl.com/diffusion/uat2lnan), so we only need to add the `nesterov` parameter to the dper sgd optimizer.

Analysis of run results: N345540.

- train_ne increases as momentum (m) decreases.
- for m=0.95, 0.9: eval_ne is lower with NAG than production (no NAG, m = 0.95).
- for m=0.99: eval_ne with or without NAG is higher than production. It indicates larger variance in validation and overfit in training (lower train_ne).
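For reference, the difference between the two update rules being compared can be sketched as follows (a minimal sketch of the standard momentum/NAG formulations, not the caffe2 or dper code):

```python
def sgd_momentum_step(w, g, v, lr=0.1, m=0.95, nesterov=False):
    """One SGD step on weight w with gradient g, velocity v,
    learning rate lr, and momentum m."""
    v = m * v + g                 # update the velocity buffer
    if nesterov:
        step = g + m * v          # NAG: look ahead along the updated velocity
    else:
        step = v                  # classical (heavy-ball) momentum
    return w - lr * step, v
```

With `nesterov=True` the step incorporates the momentum look-ahead, which is what the exposed `nesterov` flag toggles.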

Test Plan:
1. Unit tests:
`buck test caffe2/caffe2/fb/dper/layer_models/tests/split_1:sparse_nn_test -- test_sgd_without_nesterov`
`buck test caffe2/caffe2/fb/dper/layer_models/tests/split_1:sparse_nn_test -- test_sgd_with_nesterov`

2. Build the dper front-end package: `flow-cli canary ads.dper3.workflows.sparse_nn.train --mode opt --entitlement ads_global --run-as-secure-group team_ads_ml_ranking`. The (refreshed) build result is here: https://www.internalfb.com/intern/buck/build/2a368b55-d94b-45c1-8617-2753fbce994b. Flow package version is ads_dper3.canary:856b545cc6b249c0bd328f845adeb0d2.

3. Build the dper back-end package: `flow-cli canary dper.workflows.dper3.train --mode opt --entitlement ads_global --run-as-secure-group team_ads_ml_ranking`. The (refreshed) build result is here: https://www.internalfb.com/intern/buck/build/70fa91cd-bf6e-4a08-8a4d-41e41a77fb52. Flow package version is aml.dper2.canary:84123a34be914dfe86b1ffd9925869de.

4. Compare prod with NAG-enabled runs:
a) refreshed prod run (m=0.95): f213877098; NAG-enabled run (m=0.95): f213887113
b) prod run (m=0.9): f214065288; NAG-enabled run (m=0.9): f214066319
c) prod run (m=0.99): f214065804; NAG-enabled run (m=0.99): f214066725
d) changed the data type of `nesterov` to `bool` and launched a validation run; NAG-enabled (m=0.95): f214500597

Reviewed By: ustctf

Differential Revision: D23152229

fbshipit-source-id: 61703ef6b4e72277f4c73171640fb8afc6d31f3c
2020-09-09 19:37:00 -07:00
Danny Huang
5ee31308e6 [caffe2] exposes Net cancellation through pybind state (#44043)
Summary:
Pull Request resolved: https://github.com/pytorch/pytorch/pull/44043

To invoke `cancel` from the net instance in Python, we expose it through pybind state.

Reviewed By: dzhulgakov

Differential Revision: D23249660

fbshipit-source-id: 45a1e9062dca811746fcf2e5e42199da8f76bb54
2020-09-09 18:13:13 -07:00
Xiaomeng Yang
135ebbde6d [Caffe2] Add RMSNormOp (#44338)
Summary:
Pull Request resolved: https://github.com/pytorch/pytorch/pull/44338

Add RMSNormOp in Caffe2

Test Plan: buck test mode/dev-nosan //caffe2/caffe2/python/operator_test:rms_norm_op_test

Reviewed By: houseroad

Differential Revision: D23546424

fbshipit-source-id: 8f3940a0bb42230bfa647dc66b5e359cc84491c6
2020-09-08 23:50:44 -07:00
Brandon Lin
5de805d8a7 [dper3] Export Caffe2 operator LearningRate to PyTorch
Summary: Exports the operator to PyTorch, to be made into a low-level module.

Test Plan:
```
buck test //caffe2/caffe2/python/operator_test:torch_integration_test -- test_learning_rate
```

Reviewed By: yf225

Differential Revision: D23545582

fbshipit-source-id: 6b6d9aa6a47b2802ccef0f87c1263c6cc2d2fdf6
2020-09-08 08:50:09 -07:00
Chunli Fu
3699274ce2 [DPER3] AOT integration
Summary: Integrate the AOT flow with the model exporter.

Test Plan:
buck test dper3/dper3_backend/delivery/tests:dper3_model_export_test

replayer test see D23407733

Reviewed By: ipiszy

Differential Revision: D23313689

fbshipit-source-id: 39ae8d578ed28ddd6510db959b65974a5ff62888
2020-09-04 18:37:22 -07:00
Jordan Fix
2f8a43341d Add API for onnxifi with AOT Glow ONNX (#44021)
Summary:
Pull Request resolved: https://github.com/pytorch/pytorch/pull/44021

Pull Request resolved: https://github.com/pytorch/glow/pull/4854

Test Plan: Added `test_onnxifi_aot.py`

Reviewed By: yinghai

Differential Revision: D23307003

fbshipit-source-id: e6d4f3e394f96fd22f80eb2b8a686cf8171a54c0
2020-09-03 22:46:20 -07:00
Lingyi Liu
bc64efae48 Back out "Revert D19987020: [pytorch][PR] Add the sls tensor train op" (#43938)
Summary:
Pull Request resolved: https://github.com/pytorch/pytorch/pull/43938

resubmit

Test Plan: unit test included

Reviewed By: mruberry

Differential Revision: D23443493

fbshipit-source-id: 7b68f8f7d1be58bee2154e9a498b5b6a09d11670
2020-09-01 11:42:12 -07:00
Mike Ruberry
cc52386096 Revert D19987020: [pytorch][PR] Add the sls tensor train op
Test Plan: revert-hammer

Differential Revision:
D19987020 (f31b111a35)

Original commit changeset: e3ca7b00a374

fbshipit-source-id: a600c747a45dfb51e0882196e382a21ccaa7b989
2020-08-29 12:46:11 -07:00
Lingyi Liu
f31b111a35 Add the sls tensor train op (#33525)
Summary: Pull Request resolved: https://github.com/pytorch/pytorch/pull/33525

Reviewed By: wx1988

Differential Revision: D19987020

Pulled By: lly-zero-one

fbshipit-source-id: e3ca7b00a374a75ee42716c4e6236bf168ebebf1
2020-08-29 12:16:44 -07:00
kshitij12345
c7787f7fbf [numpy compatibility] Fix argmin/argmax when there are multiple max/min values (#42004)
Summary:
Fixes https://github.com/pytorch/pytorch/issues/41998
Fixes https://github.com/pytorch/pytorch/issues/22853
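For context, the numpy semantics being matched here: when the extremum occurs more than once, `argmax`/`argmin` return the index of its first occurrence. A minimal numpy illustration:

```python
import numpy as np

x = np.array([1, 3, 3, 2, 3])
# numpy resolves ties by returning the first index of the maximum
assert np.argmax(x) == 1

y = np.array([2, 0, 5, 0])
# likewise, the first index of the minimum
assert np.argmin(y) == 1
```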

Pull Request resolved: https://github.com/pytorch/pytorch/pull/42004

Reviewed By: ngimel

Differential Revision: D23049003

Pulled By: mruberry

fbshipit-source-id: a6fddbadfec4b8696730550859395ce4f0cf50d6
2020-08-28 06:42:42 -07:00
Nikita Shulga
a91e1cedc5 Reduce number of hypothesis tests in CI (#43591)
Summary:
Pull Request resolved: https://github.com/pytorch/pytorch/pull/43591

Using 50 randomized inputs instead of 100 doesn't change coverage that much, but it speeds up test runtime

Test Plan: CI

Reviewed By: orionr, seemethere

Differential Revision: D23332393

fbshipit-source-id: 7a8ff9127ee3e045a83658a7a670a844f3862987
2020-08-26 11:54:49 -07:00
Chunli Fu
d70b263e3a [DPER3] Separate user embeddings and ad embeddings in blob reorder
Summary:
Separate user embeddings and ad embeddings in blobsOrder. New order:
1. meta_net_def
2. preload_blobs
3. user_embeddings (embeddings in the remote request-only net)
4. ad_embeddings (embeddings in the other remote net)

Add a field requestOnlyEmbeddings in meta_net_def to record user_embeddings.

This is for flash verification.

Test Plan:
buck test dper3/dper3_backend/delivery/tests:blob_reorder_test

Run a flow with canary package f211282476
Check the net: n326826, request_only_embeddings are recorded as expected

Reviewed By: ipiszy

Differential Revision: D23008305

fbshipit-source-id: 9360ba3d078f205832821005e8f151b8314f0cf2
2020-08-22 23:40:04 -07:00
Priyanshu
c89d2c6bf2 Replace black_list with block_list (#42088)
Summary:
Fixes https://github.com/pytorch/pytorch/issues/41735

Pull Request resolved: https://github.com/pytorch/pytorch/pull/42088

Reviewed By: pbelevich

Differential Revision: D22794582

Pulled By: SplitInfinity

fbshipit-source-id: e256353befefa2630b99f9bcf0b79df3a7a8dcbd
2020-08-20 14:34:02 -07:00
Sean Lynch
f9a766bb39 Increase deadline time for load_save tests (#43205)
Summary:
Pull Request resolved: https://github.com/pytorch/pytorch/pull/43205

A number of tests that forward to `TestLoadSaveBase.load_save` are marked as flaky because they regularly take much longer to start up than hypothesis' default timeout of 200ms. This diff fixes the problem by removing the timeout for `load_save`. This is alright, as these tests aren't meant to be testing the performance of these operators.

I would set the deadline to 60s if I could; however, the caffe2 GitHub CI appears to use a different version of hypothesis that doesn't allow passing a `datetime.timedelta`, so instead of trying to figure out an approach that works on both, I've just removed the deadline entirely.

I've also tagged all existing tasks WRT these failures.

Differential Revision: D23175752

fbshipit-source-id: 324f9ff034df1ac4874797f04f50067149a6ba48
2020-08-20 08:41:24 -07:00
Edson Romero
5014cf4a4d Export MergeIdLists Caffe2 Operator to PyTorch
Summary: As titled.

Test Plan: buck test //caffe2/caffe2/python/operator_test:torch_integration_test -- test_merge_id_lists

Reviewed By: yf225

Differential Revision: D23076951

fbshipit-source-id: c37dfd93003590eed70b0d46e0151397a402dde6
2020-08-14 14:46:17 -07:00
Hector Yuen
c8e789e06e add fake fp16 fusions to net transforms (#42927)
Summary:
Pull Request resolved: https://github.com/pytorch/pytorch/pull/42927

Added fp16 fusion to net transforms, and refactored the transforms (as well as glow_transform) out of opt/custom so that the OSS builds pass.

Test Plan: added net runner tests for this

Reviewed By: yinghai

Differential Revision: D23080881

fbshipit-source-id: ee6451811fedfd07c6560c178229854bca29301f
2020-08-14 13:30:27 -07:00
Ren Chen
e182ec97b3 Fix illegal memory access issue for CUDA version of SplitByLengths operator.
Summary:
1. Fix an illegal memory access issue for the SplitByLengths operator in the CUDA context.
2. Add support for scaling lengths vectors in the SplitByLengths operator.
3. Add tests for the SplitByLengths operator in the CUDA context.

Example of the SplitByLengths operator processing a scaling lengths vector:
value vector A = [1, 2, 3, 4, 5, 6]
length vector B = [1, 2]
After execution of the SplitByLengths operator,
the output should be [1, 2] and [3, 4, 5, 6].
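The scaling behavior in the example above can be sketched in Python (an illustrative model of the operator's splitting semantics, not the CUDA kernel):

```python
def split_by_lengths(values, lengths):
    """Split `values` by `lengths`, uniformly scaling the lengths when
    their sum divides the total size (e.g. sum=3, size=6 -> scale 2)."""
    total, length_sum = len(values), sum(lengths)
    assert total % length_sum == 0, "size must be a multiple of sum(lengths)"
    scale = total // length_sum
    out, i = [], 0
    for n in lengths:
        out.append(values[i:i + n * scale])  # each piece gets n * scale items
        i += n * scale
    return out
```

`split_by_lengths([1, 2, 3, 4, 5, 6], [1, 2])` reproduces the example above: `[[1, 2], [3, 4, 5, 6]]`.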

Test Plan: buck test mode/dev-nosan caffe2/caffe2/python/operator_test:concat_split_op_test

Reviewed By: kennyhorror

Differential Revision: D23079841

fbshipit-source-id: 3700e7f2ee0a5a2791850071fdc16e5b054f8400
2020-08-14 01:04:08 -07:00
Christopher Whelan
7a9ae52550 [hypothesis] Deadline followup (#42842)
Summary: Pull Request resolved: https://github.com/pytorch/pytorch/pull/42842

Test Plan: `buck test`

Reviewed By: thatch

Differential Revision: D23045269

fbshipit-source-id: 8a3f4981869287a0f5fb3f0009e13548b7478086
2020-08-11 15:33:23 -07:00
Edson Romero
71dbfc79b3 Export BatchBucketOneHot Caffe2 Operator to PyTorch
Summary: As titled.

Test Plan:
```
buck test caffe2/caffe2/python/operator_test:torch_integration_test -- test_batch_bucket_one_hot_op
```

Reviewed By: yf225

Differential Revision: D23005981

fbshipit-source-id: 1daa8d3e7d6ad75e97e94964db95ccfb58541672
2020-08-11 14:00:19 -07:00
Mike Ruberry
ddcf3ded3e Revert D23002043: add net transforms for fusion
Test Plan: revert-hammer

Differential Revision:
D23002043 (a4b763bc2c)

Original commit changeset: f0b13d51d68c

fbshipit-source-id: d43602743af35db825e951358992e979283a26f6
2020-08-10 21:22:57 -07:00
Mike Ruberry
dedcc30c84 Fix ROCm CI by increasing test timeout (#42827)
Summary:
ROCm is failing to run this test in the allotted time. See, for example, https://app.circleci.com/pipelines/github/pytorch/pytorch/198759/workflows/f6066acf-b289-46c5-aad0-6f4f663ce820/jobs/6618625.

cc jeffdaily

Pull Request resolved: https://github.com/pytorch/pytorch/pull/42827

Reviewed By: pbelevich

Differential Revision: D23042220

Pulled By: mruberry

fbshipit-source-id: 52b426b0733b7b52ac3b311466d5000334864a82
2020-08-10 20:26:20 -07:00
Hector Yuen
a4b763bc2c add net transforms for fusion (#42763)
Summary:
Pull Request resolved: https://github.com/pytorch/pytorch/pull/42763

add the fp16 fusions as net transforms:
- layernorm fused with mul+add
- swish int8

Test Plan: added unit test, ran flows

Reviewed By: yinghai

Differential Revision: D23002043

fbshipit-source-id: f0b13d51d68c240b05d2a237a7fb8273e996328b
2020-08-10 20:16:14 -07:00
Christopher Whelan
5cd0f5e8ec [PyFI] Update hypothesis and switch from tp2 (#41645)
Summary:
Pull Request resolved: https://github.com/pytorch/pytorch/pull/41645

Pull Request resolved: https://github.com/facebookresearch/pytext/pull/1405

Test Plan: buck test

Reviewed By: thatch

Differential Revision: D20323893

fbshipit-source-id: 54665d589568c4198e96a27f0ed8e5b41df7b86b
2020-08-08 12:13:04 -07:00
Venkata Chintapalli
e95fbaaba3 Adding Peter's Swish Op ULP analysis. (#42573)
Summary:
Pull Request resolved: https://github.com/pytorch/pytorch/pull/42573

* Generate the ULP png files for different ranges.

Test Plan: test_op_ulp_error.py

Reviewed By: hyuen

Differential Revision: D22938572

fbshipit-source-id: 6374bef6d44c38e1141030d44029dee99112cd18
2020-08-07 19:13:01 -07:00
Edson Romero
2b04712205 Exposing Percentile Caffe2 Operator in PyTorch
Summary: As titled.

Test Plan:
```
buck test caffe2/caffe2/python/operator_test:torch_integration_test -- test_percentile
```

Reviewed By: yf225

Differential Revision: D22999896

fbshipit-source-id: 2e3686cb893dff1518d533cb3d78c92eb2a6efa5
2020-08-07 16:22:37 -07:00
Rui Liu
92b7347fd7 Enforce counter value to double type in rowwise_counter
Summary:
Enforce double type for the counter value in rowwise_counter.

**Context:**
The existing implementation uses float type for the counter value. Due to the precision limit of single-precision floating point [1], we observed in earlier experiments that the counter value can't increment beyond 16777216.0 (i.e., it caps at 16777216.0). We decided to enforce double type to avoid this issue.

[1] https://stackoverflow.com/questions/12596695/why-does-a-float-variable-stop-incrementing-at-16777216-in-c
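The cap is easy to reproduce: 16777216 is 2^24, the largest magnitude at which float32 still has unit precision, so adding 1.0 rounds back to the same value, while float64 keeps counting:

```python
import numpy as np

c = np.float32(16777216.0)            # 2**24
# in float32, 16777217 is not representable; the increment is silently lost
assert c + np.float32(1.0) == c

d = np.float64(16777216.0)
# in float64 (double), the counter keeps incrementing correctly
assert d + 1.0 == 16777217.0
```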

Test Plan:
op test
```
ruixliu@devvm1997:~/fbsource/fbcode/caffe2/caffe2/python/operator_test(f0b0b48c)$ buck test :rowwise_counter_test
Trace available for this run at /tmp/testpilot.20200728-083200.729292.log
TestPilot test runner for Facebook. See https://fburl.com/testpilot for details.
Testpilot build revision cd2638f1f47250eac058b8c36561760027d16add fbpkg f88726c8ebde4ba288e1172a348c7f46 at Mon Jul 27 18:11:43 2020 by twsvcscm from /usr/local/fbprojects/packages/testinfra.testpilot/887/t.par
Discovering tests
Running 1 test
Started new test run: https://our.intern.facebook.com/intern/testinfra/testrun/7881299364977047
      ✓ caffe2/caffe2/python/operator_test:rowwise_counter_test - test_rowwise_counter (caffe2.caffe2.python.operator_test.rowwise_counter_test.TestRowWiseCounter) 0.265 1/1 (passed)
      ✓ caffe2/caffe2/python/operator_test:rowwise_counter_test - main 14.414 (passed)
Finished test run: https://our.intern.facebook.com/intern/testinfra/testrun/7881299364977047
Summary (total time 18.51s):
  PASS: 2
  FAIL: 0
  SKIP: 0
  FATAL: 0
  TIMEOUT: 0
  OMIT: 0
```

optimizer test
```
ruixliu@devvm1997:~/fbsource/fbcode/caffe2/caffe2/python(7d66fbb9)$ buck test :optimizer_test
Finished test run: https://our.intern.facebook.com/intern/testinfra/testrun/7036874434841896
Summary (total time 64.87s):
  PASS: 48
  FAIL: 0
  SKIP: 24
    caffe2/caffe2/python:optimizer_test - testGPUDense (caffe2.caffe2.python.optimizer_test.TestMomentumSgd)
    caffe2/caffe2/python:optimizer_test - testGPUDense (caffe2.caffe2.python.optimizer_test.TestGFtrl)
    caffe2/caffe2/python:optimizer_test - test_caffe2_cpu_vs_numpy (caffe2.caffe2.python.optimizer_test.TestYellowFin)
    caffe2/caffe2/python:optimizer_test - testGPUDense (caffe2.caffe2.python.optimizer_test.TestSparseRAdam)
    caffe2/caffe2/python:optimizer_test - testGPUDense (caffe2.caffe2.python.optimizer_test.TestRowWiseAdagradWithCounter)
    caffe2/caffe2/python:optimizer_test - testGPUDense (caffe2.caffe2.python.optimizer_test.TestAdagrad)
    caffe2/caffe2/python:optimizer_test - test_caffe2_gpu_vs_numpy (caffe2.caffe2.python.optimizer_test.TestYellowFin)
    caffe2/caffe2/python:optimizer_test - testDense (caffe2.caffe2.python.optimizer_test.TestRowWiseAdagrad)
    caffe2/caffe2/python:optimizer_test - testGPUDense (caffe2.caffe2.python.optimizer_test.TestFtrl)
    caffe2/caffe2/python:optimizer_test - testSparse (caffe2.caffe2.python.optimizer_test.TestRmsProp)
    ...and 14 more not shown...
  FATAL: 0
  TIMEOUT: 0
  OMIT: 0
```

param download test
```
ruixliu@devvm1997:~/fbsource/fbcode/caffe2/caffe2/fb/net_transforms/tests(7ef20a38)$ sudo buck test :param_download_test
Finished test run: https://our.intern.facebook.com/intern/testinfra/testrun/6473924481526935
```

e2e flow:
f208394929
f207991149
f207967273

ANP notebook to check the counter value loaded from the flows
https://fburl.com/anp/5fdcbnoi

screenshot of the loaded counter (note that counter max is larger than 16777216.0)

{F250926501}

Reviewed By: ellie-wen

Differential Revision: D22711514

fbshipit-source-id: 426fed7415270aa3f276dda8141907534734337f
2020-08-05 20:40:51 -07:00
Andres Suarez
9ea9d1b52e [fbs][2/n] Remove .python3 markers
Test Plan:
`xbgr '\.python3'` shows only one (dead) usage of this file:
https://www.internalfb.com/intern/diffusion/FBS/browse/master/fbcode/python/repo_stats/buck.py?commit=9a8dd3243207819325d520c208218f6ab69e4e49&lines=854

Reviewed By: lisroach

Differential Revision: D22955631

fbshipit-source-id: e686d9157c08c347d0ce4acdd05bd7ab29ff7df5
2020-08-05 18:25:50 -07:00
Mike Ruberry
24e2a8a171 Revert D22780307: Fix illegal memory access issue for CUDA version of SplitByLengths operator.
Test Plan: revert-hammer

Differential Revision:
D22780307 (76905527fe)

Original commit changeset: c5ca60ae16b2

fbshipit-source-id: f3c99eec5f05121e2bed606fe2ba84a0be0cdf16
2020-08-05 12:47:56 -07:00
Ren Chen
76905527fe Fix illegal memory access issue for CUDA version of SplitByLengths operator.
Summary:
1. Fix an illegal memory access issue for the SplitByLengths operator in the CUDA context.
2. Add support for scaling lengths vectors in the SplitByLengths operator.
3. Add tests for the SplitByLengths operator in the CUDA context.

Example of the SplitByLengths operator processing a scaling lengths vector:
value vector A = [1, 2, 3, 4, 5, 6]
length vector B = [1, 2]
After execution of the SplitByLengths operator,
the output should be [1, 2] and [3, 4, 5, 6].

Test Plan: buck test mode/dev-nosan caffe2/caffe2/python/operator_test:concat_split_op_test

Reviewed By: kennyhorror

Differential Revision: D22780307

fbshipit-source-id: c5ca60ae16b24032cedfa045a421503b713daa6c
2020-08-05 11:46:00 -07:00
Dmytro Dzhulgakov
06d978a9ad [c10/cuda] Reorganize device_count() and robustly surface ASAN warnings (#42249)
Summary:
Pull Request resolved: https://github.com/pytorch/pytorch/pull/42249

The main change is to bring Caffe2's superior error messages for CUDA initialization into c10 and use them in all code paths.

Basic logic:

| Case | Call to device_count() | init_cuda, e.g. allocating tensor |
| -- | -- | -- |
| all good | non-zero | just works |
| no gpus | 0, no warning | throw exception with good message |
| driver issues | 0, produce warning | throw exception with good message |
| out of memory with ASAN | 0, produce warning| throw exception with ASAN message |

Previously, the error thrown from init_cuda was very generic and the ASAN warning (if any) was buried in the logs.

Other clean up changes:
* always cache device_count() in a static variable
* move all ASAN macros into c10
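The caching-plus-warning behavior from the table can be sketched in Python (illustrative only; the real code is C++ in c10, and `_raw_device_count` here is a hypothetical stand-in for the `cudaGetDeviceCount` driver query):

```python
import functools
import warnings

def _raw_device_count():
    # stand-in for the actual driver query; here it simulates a driver issue
    raise RuntimeError("driver initialization failed")

@functools.lru_cache(maxsize=None)   # "cache device_count() in a static"
def device_count():
    # On driver issues: return 0 and produce a warning, deferring the
    # detailed exception to lazy CUDA init (e.g. the first tensor.cuda()).
    try:
        return _raw_device_count()
    except RuntimeError as err:
        warnings.warn(f"CUDA initialization: {err}")
        return 0
```

Because the result is cached, the warning fires at most once while repeated `device_count()` calls stay cheap.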

Test Plan:
Hard to unittest because of build modes. Verified manually that the behavior from the table above holds by running the following script in different modes (ASAN/no-ASAN, CUDA_VISIBLE_DEVICES=):

```
print('before import')
import torch
print('after import')
print('devices: ', torch.cuda.device_count())
x = torch.tensor([1,2,3])
print('tensor creation')
x = x.cuda()
print('moved to cuda')
```

Reviewed By: ngimel

Differential Revision: D22824329

fbshipit-source-id: 5314007313a3897fc955b02f8b21b661ae35fdf5
2020-08-05 11:39:31 -07:00
Yinghai Lu
8850fd1952 Add python interface to create OfflineTensor (#42516)
Summary:
Pull Request resolved: https://github.com/pytorch/pytorch/pull/42516

As titled. We need it for some scripts.

Reviewed By: houseroad

Differential Revision: D22918112

fbshipit-source-id: 8a1696ceeeda67a34114bc57cb52c925711cfb4c
2020-08-04 01:31:34 -07:00
Yinghai Lu
dbdd28207c Expose a generic shape info struct for ONNXIFI Python interface (#42421)
Summary:
Pull Request resolved: https://github.com/pytorch/pytorch/pull/42421

Previously, when doing onnxifi from Python, we could only feed shape info with float dtype and batch-based dim type. This diff removes that limitation and uses the TensorBoundShapes protobuf as a generic shape info struct, which makes the onnxifi interface in Python more flexible.

Reviewed By: ChunliF

Differential Revision: D22889781

fbshipit-source-id: 1a89f3a68c215a0409738c425b4e0d0617d58245
2020-08-03 16:10:05 -07:00
Xing Wang
ebfff31e19 [distributedhogwild] Introducing new tags for distributed hogwild. (#42381)
Summary:
Pull Request resolved: https://github.com/pytorch/pytorch/pull/42381

Introduce a new tag to support distributed hogwild.

Reviewed By: boryiingsu

Differential Revision: D20484099

fbshipit-source-id: 5973495589e0a7ab185d3867b37437aa747f408a
2020-08-03 07:10:44 -07:00
Xiaomeng Yang
5769b06ab5 [Caffe2] Remove explicit divide-by-zero in SpatialBN training mode (#42380)
Summary:
Pull Request resolved: https://github.com/pytorch/pytorch/pull/42380

[Caffe2] Remove explicit divide-by-zero in SpatialBN training mode

Test Plan: buck test mode/dev-nosan //caffe2/caffe2/python/operator_test:spatial_bn_op_test

Reviewed By: houseroad

Differential Revision: D22873214

fbshipit-source-id: 70b505391b5db02b45fc46ecd7feb303e50c6280
2020-08-01 11:54:58 -07:00
Yan Xie
bdd9ef1981 Support RowWiseSparseAdam on GPU (#35404)
Summary:
Pull Request resolved: https://github.com/pytorch/pytorch/pull/35404

Implement RowWiseSparseAdam on CUDA

Reviewed By: xw285cornell

Differential Revision: D20650225

fbshipit-source-id: 5f871e2f259e362b713c9281b4d94534453995cf
2020-07-31 10:47:29 -07:00
Priyanshu
6c251f74b2 replace black_list/blacklist with blocklist/block_list (#42089)
Summary:
Fixes https://github.com/pytorch/pytorch/issues/41734

Pull Request resolved: https://github.com/pytorch/pytorch/pull/42089

Reviewed By: pbelevich

Differential Revision: D22794556

Pulled By: SplitInfinity

fbshipit-source-id: 4404845b6293b076b3c8cc02b135b20c91397a79
2020-07-29 16:26:02 -07:00
Xing Wang
27b03d62de [HT] Clear the device placement tag for the auto gen sum so that we could break the component for FC sharing the same input (#42219)
Summary:
Pull Request resolved: https://github.com/pytorch/pytorch/pull/42219

Introduce a new extra info tag on the forward net for operators sharing the same input. The effect is that the auto-generated gradient sum for the input will not follow the operator tags in the forward net. This allows more flexible device allocation.

Test Plan:
# unit test
`./buck-out/gen/caffe2/caffe2/python/core_gradients_test#binary.par -r  testMultiUseInputAutoGenSumDevice`

Reviewed By: xianjiec, boryiingsu

Differential Revision: D22609080

fbshipit-source-id: d558145e5eb36295580a70e1ee3a822504dd439a
2020-07-29 15:21:27 -07:00
Xiaomeng Yang
60f51542dc [Caffe2] Fix spatial_bn bug for computing running_var on CPU or on CUDA without CuDNN (#42151)
Summary:
Pull Request resolved: https://github.com/pytorch/pytorch/pull/42151

Previously, our Caffe2 SpatialBN op implementation computed running_var incorrectly, without the unbias coefficient. This should have failed the tests, because the output differs from CuDNN's output; however, our tests were too weak to catch this bug. This diff fixes all of them.
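The unbias coefficient in question is Bessel's correction, n/(n-1), which converts a biased (divide-by-n) variance into an unbiased one; a minimal numpy illustration (not the operator code itself):

```python
import numpy as np

x = np.array([1.0, 2.0, 3.0, 4.0])
n = x.size
biased = np.var(x)             # divides by n (no unbias coefficient)
unbiased = np.var(x, ddof=1)   # divides by n - 1 (the corrected form)

# the unbias coefficient relates the two estimates exactly
assert np.isclose(unbiased, biased * n / (n - 1))
```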

Test Plan: buck test mode/dev-nosan //caffe2/caffe2/python/operator_test:spatial_bn_op_test

Reviewed By: houseroad

Differential Revision: D22786127

fbshipit-source-id: db80becb67d60c44faae180c7e4257cb136a266d
2020-07-29 11:20:03 -07:00
Nikita Shulga
fd9205e14b Enable caffe2 tests for RocM jobs (#41604)
Summary: Pull Request resolved: https://github.com/pytorch/pytorch/pull/41604

Reviewed By: ezyang

Differential Revision: D22603703

Pulled By: malfet

fbshipit-source-id: 789ccf2bb79668a5a68006bb877b2d88fb569809
2020-07-28 14:21:42 -07:00
Nikita Shulga
48ae5945de Skip TestExtractPredictorNet if compiled without OpenCV (#42168)
Summary:
Found while trying to get RocM Caffe2 CI green

Pull Request resolved: https://github.com/pytorch/pytorch/pull/42168

Reviewed By: seemethere

Differential Revision: D22791879

Pulled By: malfet

fbshipit-source-id: 8f7ef9711bdc5941b2836e4c8943bb95c72ef8af
2020-07-28 11:26:55 -07:00
Nikita Shulga
2f61aca17b Skip DataIO tests relying on LevelDB if compiled without it (#42169)
Summary:
Found while trying to get RocM Caffe2 job green

Pull Request resolved: https://github.com/pytorch/pytorch/pull/42169

Reviewed By: seemethere

Differential Revision: D22791896

Pulled By: malfet

fbshipit-source-id: 9df6233876aec5ead056365499bab970aa7e8bdc
2020-07-28 10:18:26 -07:00
Jiyan Yang
c062cdbd90 Log the net if blob doesn't exist when setting output record (#41971)
Summary: Pull Request resolved: https://github.com/pytorch/pytorch/pull/41971

Reviewed By: wx1988

Differential Revision: D22490309

fbshipit-source-id: d967ee211b610f5523a307b5266b9fcb0277a21c
2020-07-27 19:13:50 -07:00
Lingyi Liu
d6f1346c37 Add a new op for converting the dense feature to sparse representation
Summary: We need this op to avoid splicing a dense tensor and then using the Mergesinglescaler op.

Test Plan: integrated test with dper2

Differential Revision: D22677523

fbshipit-source-id: f4f9a1f06841b0906ec8cbb435482ae0a89e1721
2020-07-27 12:45:37 -07:00