Commit graph

2868 commits

Author SHA1 Message Date
generatedunixname89002005307016
c4f50162be [typing] suppress errors in fbcode/caffe2 - batch 2
Test Plan: Sandcastle

Differential Revision: D27082725

fbshipit-source-id: a920b4eb62ff07d8e80fa2b9e3fd340cb44b689f
2021-03-16 16:45:41 -07:00
Chester Liu
f6df18f6ca Clean up future imports for Python 2 (#53349)
Summary:
See https://github.com/pytorch/pytorch/issues/42919

Pull Request resolved: https://github.com/pytorch/pytorch/pull/53349

Reviewed By: malfet

Differential Revision: D27039089

Pulled By: bugra

fbshipit-source-id: 8063dc184248604506a8dbb1bcb73da8ec85bb18
2021-03-14 15:56:13 -07:00
Adam Simpkins
7e5ffbfa94 [caffe2] add a SerializationOptions field for the save operator (#53402)
Summary:
Pull Request resolved: https://github.com/pytorch/pytorch/pull/53402

Add an `options` field to the `Save` operator which accepts options for how to
serialize different blobs.  At the moment this simply allows controlling the
existing `chunk_size` behavior, but in the future we can add other options,
such as the ability to control compression settings or other serialization
formats.
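The chunk-count arithmetic that the new test verifies can be sketched as follows (an illustrative helper, not the caffe2 API; treating a non-positive chunk_size as "no chunking" is an assumption here):

```python
import math

def expected_chunk_count(num_elements: int, chunk_size: int) -> int:
    """Number of chunks a blob of `num_elements` should serialize into.

    A non-positive chunk_size is treated here as "no chunking" (one chunk);
    the real operator's defaults may differ.
    """
    if chunk_size <= 0 or num_elements == 0:
        return 1
    return math.ceil(num_elements / chunk_size)
```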
ghstack-source-id: 123567034

Test Plan:
Added a new test to `load_save_test.py` that passes in options and verifies
that blobs were serialized with the expected number of chunks.

  buck test caffe2/caffe2:caffe2_test_cpu \
    caffe2/caffe2/core:serialization_test \
    caffe2/caffe2/python/operator_test:load_save_test

Reviewed By: mraway

Differential Revision: D26502577

fbshipit-source-id: 6e302e530bb96990517c2e35c505db7f14a56284
2021-03-11 13:02:58 -08:00
Adam Simpkins
023948e6d7 [caffe2] update load_save_test.py to also verify the chunking behavior (#53401)
Summary:
Pull Request resolved: https://github.com/pytorch/pytorch/pull/53401

This is a reland of D26641599 (cd9ac54ea7) after rebasing onto D26802576 (f595ba1bae).

Add some small utility functions to read the blob names back from the minidb
file so that we can verify how many chunks were written for each blob.
ghstack-source-id: 123567033

Test Plan: buck test caffe2/caffe2/python/operator_test:load_save_test

Reviewed By: mraway

Differential Revision: D26853942

fbshipit-source-id: 0b45078fdd279f547752c8fdb771e296374a00da
2021-03-10 15:29:36 -08:00
Giuseppe Ottaviano
0ca029b22d [caffe2] Fix DBFileReader (#53498)
Summary:
Pull Request resolved: https://github.com/pytorch/pytorch/pull/53498

This code depended on `Blobs()` being returned in sorted order:

https://www.internalfb.com/intern/diffusion/FBS/browsefile/master/fbcode/caffe2/caffe2/python/db_file_reader.py?commit=472774e7f507e124392491800d9654e01269cbaf&lines=89-91

But D26504408 (69bb0e0285) changed the underlying storage to a hashmap, so now the blobs are returned in arbitrary order (Note that `Blobs()` returns also non-local blobs, and for those there was already no guarantee of ordering).

So we need to explicitly sort the result.
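The fix reduces to sorting the names before use, since hashmap-backed storage gives no iteration-order guarantee (a minimal sketch, not the actual `DBFileReader` code):

```python
def sorted_blob_names(blobs):
    """Return blob names in deterministic (sorted) order.

    With hashmap-backed storage the iteration order is arbitrary, so any
    consumer that relied on sorted order must sort explicitly, as this fix does.
    """
    return sorted(blobs)
```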

Test Plan:
```
$ buck test dper3/dper3/toolkit/tests:lime_test
$ buck test //dper3/dper3/toolkit/tests:model_insight_test
```
Pass after this diff.

Differential Revision: D26879502

fbshipit-source-id: d76113f8780544af1d97ec0a818fb21cc767f2bf
2021-03-08 08:34:39 -08:00
Sam Estep
8c798e0622 Forbid trailing whitespace (#53406)
Summary:
Context: https://github.com/pytorch/pytorch/pull/53299#discussion_r587882857

These are the only hand-written parts of this diff:
- the addition to `.github/workflows/lint.yml`
- the file endings changed in these four files (to appease FB-internal land-blocking lints):
  - `GLOSSARY.md`
  - `aten/src/ATen/core/op_registration/README.md`
  - `scripts/README.md`
  - `torch/csrc/jit/codegen/fuser/README.md`

The rest was generated by running this command (on macOS):
```
git grep -I -l ' $' -- . ':(exclude)**/contrib/**' ':(exclude)third_party' | xargs gsed -i 's/ *$//'
```

I looked over the auto-generated changes and didn't see anything that looked problematic.
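A pure-Python equivalent of the two halves of this change (the strip pass and the lint check) might look like this; the actual `lint.yml` rule may differ in detail:

```python
import re

def strip_trailing_spaces(text: str) -> str:
    """Pure-Python equivalent of the `gsed 's/ *$//'` pass above."""
    return re.sub(r" +$", "", text, flags=re.MULTILINE)

def find_trailing_whitespace(text: str):
    """Return 1-based line numbers ending in spaces or tabs,
    i.e. the lines a trailing-whitespace lint would flag."""
    return [i + 1 for i, line in enumerate(text.splitlines())
            if line != line.rstrip(" \t")]
```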

Pull Request resolved: https://github.com/pytorch/pytorch/pull/53406

Test Plan:
This run (after adding the lint but before removing existing trailing spaces) failed:
- https://github.com/pytorch/pytorch/runs/2043032377

This run (on the tip of this PR) succeeded:
- https://github.com/pytorch/pytorch/runs/2043296348

Reviewed By: walterddr, seemethere

Differential Revision: D26856620

Pulled By: samestep

fbshipit-source-id: 3f0de7f7c2e4b0f1c089eac9b5085a58dd7e0d97
2021-03-05 17:22:55 -08:00
Nikita Shulga
68810c1836 Delete test_rand_quantization (#53234)
Summary:
Pull Request resolved: https://github.com/pytorch/pytorch/pull/53234

Test has been permanently skipped since Nov 2019, see https://github.com/pytorch/pytorch/pull/29463

Test Plan: CI

Reviewed By: mruberry

Differential Revision: D26802660

fbshipit-source-id: ea66be1afd4d7cfbe692594df5d9dd8c29bc5d23
2021-03-03 20:59:00 -08:00
Natalia Gimelshein
69b2d5c7c3 Revert D26641599: [caffe2] update load_save_test.py to also verify the chunking behavior
Test Plan: revert-hammer

Differential Revision:
D26641599 (cd9ac54ea7)

Original commit changeset: bccb0af157d8

fbshipit-source-id: 9fe35382876d19aefd16496bf8f920e12aa6f169
2021-02-25 21:30:36 -08:00
Adam Simpkins
cd9ac54ea7 [caffe2] update load_save_test.py to also verify the chunking behavior
Summary:
Add some small utility functions to read the blob names back from the minidb
file so that we can verify how many chunks were written for each blob.

Test Plan: buck test caffe2/caffe2/python/operator_test:load_save_test

Reviewed By: mraway

Differential Revision: D26641599

fbshipit-source-id: bccb0af157d85e585e95bc7be61c4584fba3cb04
2021-02-25 20:24:06 -08:00
Adam Simpkins
e2afb269b8 [caffe2] add a Python test for SaveOp chunking
Summary:
Add a test in `load_save_test.py` that passes in a chunk_size parameter,
to ensure that we exercise the logic that passes the chunk size to the C++
serialization code.

Test Plan:
Ran the tests with the vlog level set to 3 and manually verified the log
messages showed that we were serializing in the expected chunks.
There are existing C++ tests that confirm chunking behavior works as expected
in the pure C++ code.

Reviewed By: mraway

Differential Revision: D26502578

fbshipit-source-id: cd0074f2358da81c68b0fed2c2a94818d83a957d
2021-02-23 11:52:13 -08:00
Yinghai Lu
f4c33edb45 Add onnxifi interface for set/get options (#52388)
Summary:
Pull Request resolved: https://github.com/pytorch/pytorch/pull/52388

Pull Request resolved: https://github.com/pytorch/glow/pull/5364

This allows us to change global variables through onnxifi calls, and adds Python bindings along with it. Note that we supply a dummy backend_id, as it's not needed by Glow since the setting is global.

#codemod

Test Plan:
```
buck test mode/dev //glow/fb/test:test_onnxifi_optionnnpi
```

Reviewed By: jfix71, khabinov

Differential Revision: D26481652

fbshipit-source-id: 19b8201c77f653cf7d93ad68760aa7fb5ec45ff4
2021-02-18 20:12:34 -08:00
Adam Simpkins
f7aa88b400 [caffe2] Explicitly define all DataTypes in python/core.py (#51768)
Summary:
Pull Request resolved: https://github.com/pytorch/pytorch/pull/51768

This updates python/core.py to explicitly define all of the `DataType`
values rather than dynamically defining them at runtime from the
`caffe2_pb2` values.

This allows type checkers like Pyre and Mypy to see the members of the
`DataType` class.  Otherwise the type checkers report errors such as
`"core.DataType" has no attribute "INT64"`.

This code does keep a run-time check that all of the data types defined
by `caffe2_pb2.proto` are defined correctly in this file.  This way if
someone does add a new type to `caffe2_pb2.proto` it should be very
quickly apparent that this file needs to be updated and kept in sync.
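The pattern of an explicitly declared enum plus an import-time sync check can be sketched like this (the member values and the `_proto_values` stand-in are illustrative; the real file checks against `caffe2_pb2`):

```python
from enum import IntEnum

class DataType(IntEnum):
    """Explicitly enumerated so static checkers see the members."""
    UNDEFINED = 0
    FLOAT = 1
    INT32 = 2
    INT64 = 10

# Stand-in for the generated protobuf values (assumed here for illustration).
_proto_values = {"UNDEFINED": 0, "FLOAT": 1, "INT32": 2, "INT64": 10}

def check_in_sync(enum_cls, proto_values):
    """Fail loudly if the proto gains a value this file does not define."""
    declared = {m.name: m.value for m in enum_cls}
    if declared != dict(proto_values):
        raise AssertionError(
            f"DataType out of sync: {declared} != {dict(proto_values)}")

check_in_sync(DataType, _proto_values)
```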
ghstack-source-id: 121936201

Test Plan:
Confirmed that various caffe2/python tests still pass.
Verified that this allows many `pyre-fixme` comments to be removed in
downstream projects, and that Pyre is still clean for these projects.

Reviewed By: jeffdunn

Differential Revision: D26271725

Pulled By: simpkins

fbshipit-source-id: f9e95795de60aba67d7d3872d0c141ed82ba8e39
2021-02-17 20:54:17 -08:00
Adam Simpkins
b9f051db9f Add type hints for the _import_c_extension module (#51767)
Summary:
Pull Request resolved: https://github.com/pytorch/pytorch/pull/51767

The `_import_c_extension.py` finds the right C extension library to use,
and then simply re-exports all of the symbols that it defines.

This adds a `_import_c_extension.pyi` file with type hints to let type
checkers like Pyre and Mypy know the names of the symbols that will be
re-exported from the C extension.

This does not define all of the symbols provided by the C extension,
but does define all of the symbols necessary to make type checkers happy
about other code in the `caffe2/python` directory.
ghstack-source-id: 121916324

Test Plan:
Was able to have Pyre successfully type check the `caffe2/python`
directory with this stub file plus a few other changes.

Confirmed that all of the dependent projects affected by this report no new
pyre issues in sandcastle.

Ran `python test/test_type_hints.py` in the PyTorch github repository and
confirmed it also passes.

Differential Revision: D26271726

Pulled By: simpkins

fbshipit-source-id: 6dbadcf02e0b2cc44a9e3cdabe9291c1250959b4
2021-02-17 17:37:47 -08:00
Junjie Yang
0dc0cb1d8d Enable FP16 sparse regularizer
Summary: Previously there was no regularizer implemented for fp16 sparse features. Add regularizer support here using the Float16SparseNormalize implemented in this stack.

Test Plan:
buck test //caffe2/caffe2/python:regularizer_test

In f248648705, we can see the operator `Float16SparseNormalize`.

{F356635445}

Reviewed By: bigrabithong

Differential Revision: D24042567

fbshipit-source-id: 5e0065f8c10b8748daffa8a54a6bf8f461460b18
2021-02-12 12:29:32 -08:00
Adam Simpkins
fa0a049d4e Add a make_tempdir() utility function to the TestCase base class (#51762)
Summary:
Pull Request resolved: https://github.com/pytorch/pytorch/pull/51762

Update test_util.py to add a `make_tempdir()` function to the `TestCase`
class.  The main advantage of this function is that the temporary
directory will be automatically cleaned up when the test case finishes,
so that test case does not need to worry about manually cleaning up this
directory.

This also prefixes the directory name with `caffe2_test.` so that it is
more obvious where the temporary directories came from if they are ever
left behind after a crashed or killed test process.

This updates the tests in `operator_test/load_save_test.py` to use this
new function, so they no longer have to perform their own manual cleanup
in each test.

Test Plan: python caffe2/python/operator_test/load_save_test.py

Reviewed By: mraway

Differential Revision: D26271178

Pulled By: simpkins

fbshipit-source-id: 51175eefed39d65c03484482e84923e5f39a4768
2021-02-12 10:56:01 -08:00
Adam Simpkins
db6e0c7c0e Replace a platform.system() check with sys.platform (#51766)
Summary:
Pull Request resolved: https://github.com/pytorch/pytorch/pull/51766

Check if we are on Windows using `sys.platform` rather than
`platform.system()`.  Even though `platform.system()` is more modern, it
has a few downsides: this performs a runtime check of the platform type,
which has non-zero overhead.  On Linux it actually executes the separate
`/bin/uname` process.  On the other hand `sys.platform` is determined
when the Python interpreter is compiled, so this is a simple hard-coded
string.

Because it is a runtime check, `platform.system()` checks also cannot be
analyzed by static type checkers like Pyre and Mypy.  These type
checkers do understand `sys.platform` checks, and can correctly avoid
complaining about code paths that use platform-specific modules and
functions.  e.g., they can avoid complaining about `ctypes.WinDLL` not
existing on Linux if its use is guarded by a `sys.platform` check.
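The shape of the check is simply a comparison against a compile-time constant (in real guarded code the literal `sys.platform == "win32"` comparison is written inline, which is what type checkers narrow on):

```python
import sys

def is_windows() -> bool:
    """True on Windows interpreters.

    sys.platform is baked in when the interpreter is compiled, so this is a
    plain string comparison with no subprocess overhead; type checkers narrow
    the inline form of this comparison, not calls through a helper like this.
    """
    return sys.platform == "win32"
```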
ghstack-source-id: 121107705

Test Plan: Ran tests on Linux, and will check CI test results.

Reviewed By: mraway

Differential Revision: D26271724

Pulled By: simpkins

fbshipit-source-id: b86e427e4ceec0324464ba4bc88b95d5813172d0
2021-02-11 20:09:14 -08:00
Roy, Arindam
517185f946 test_lc_1d: Increase deadline to 5 seconds (#52013)
Summary:
Increasing the deadline to avoid
flakiness of the test on ROCm.

Signed-off-by: Roy, Arindam <rarindam@gmail.com>

Fixes #{issue number}

Pull Request resolved: https://github.com/pytorch/pytorch/pull/52013

Reviewed By: albanD

Differential Revision: D26360209

Pulled By: mrshenli

fbshipit-source-id: 1ddc7062c5ff7c980233d22844073de9fb7dcbb3
2021-02-11 11:59:56 -08:00
Adam Simpkins
81b9aa743b [pytorch] Update caffe2/python to eliminate Pyre errors (#52083)
Summary:
Pull Request resolved: https://github.com/pytorch/pytorch/pull/52083

This makes minor fixes in `caffe2/python` to address all errors currently
reported by Pyre.

I update the code to fix errors when doing so looked simple and safe,
and added `pyre-fixme` comments in other places.
ghstack-source-id: 121109695

Test Plan: Confirmed that Pyre no longer reports errors under `caffe2/python`

Differential Revision: D26272279

fbshipit-source-id: b1eb19d323b613f23280ce9c71e800e874ca1162
2021-02-11 11:04:59 -08:00
Adam Simpkins
c4eb22009e Drop some Python 2 compatibility code (#51769)
Summary:
Pull Request resolved: https://github.com/pytorch/pytorch/pull/51769

Remove some Python 2 compatibility code that otherwise causes errors to
be reported from static type checkers.

Static type checkers complain that the old Python 2 modules and
functions referenced by this code do not exist.  Given that Python 2
support is entirely deprecated now we can simply remove the
compatibility code.
ghstack-source-id: 121313191

Test Plan:
Was able to get Pyre to successfully type check the `caffe2/python`
directory with this and some other changes.

Reviewed By: Tianshu-Bao

Differential Revision: D26271723

Pulled By: simpkins

fbshipit-source-id: fec8a09466be6867388832380480aafd36616aa1
2021-02-11 11:02:33 -08:00
cyy
39aa3db62b use make_shared and make_unique and clean unneeded code (#51829)
Summary: Pull Request resolved: https://github.com/pytorch/pytorch/pull/51829

Reviewed By: izdeby

Differential Revision: D26306098

Pulled By: smessmer

fbshipit-source-id: 4f6c0469c68f044c0bfe0925fcf7b030a25d15e2
2021-02-10 21:38:43 -08:00
Andrey Malevich
7e54a64828 [C2] Add shape inference logic for ColwiseMax operator. (#51914)
Summary:
Pull Request resolved: https://github.com/pytorch/pytorch/pull/51914

As desc.

Test Plan: Unit-test.

Reviewed By: intermilan

Differential Revision: D26299115

fbshipit-source-id: 9c80236f843e907476da1747dcd623c85147fa90
2021-02-09 14:12:07 -08:00
Rong Rong (AI Infra)
50c9c08203 Enable GPU/RE tags for caffe2/caffe2/python/TARGETS
Summary: Moving caffe2_core_gpu_python contbuild to use GPU/RE

Test Plan: CI

Reviewed By: malfet

Differential Revision: D26261826

fbshipit-source-id: a6f8c7bd8368c1cb69499ea0ea7d5add0956a7ad
2021-02-05 13:52:48 -08:00
pbialecki
7b85adf20f Add back pycuda.autoinit to test_pt_onnx_trt (#51106)
Summary:
Fixes https://github.com/pytorch/pytorch/issues/51105 by adding back the `import pycuda.autoinit`.

Pull Request resolved: https://github.com/pytorch/pytorch/pull/51106

Reviewed By: mingzhe09088

Differential Revision: D26086808

Pulled By: heitorschueroff

fbshipit-source-id: 88d98796c87a44cedaa1f6666e9f71a424293641
2021-01-27 07:10:11 -08:00
Arindam Roy
09b896261c Skip test_lc_1d for ROCM (#50964)
Summary:
The test is flaky on ROCm when the deadline is set to 1 second. This is affecting builds, as it fails randomly.
Disabling for now.

Signed-off-by: Arindam Roy <rarindam@gmail.com>

Pull Request resolved: https://github.com/pytorch/pytorch/pull/50964

Reviewed By: houseroad

Differential Revision: D26049370

Pulled By: BIT-silence

fbshipit-source-id: 22337590a8896ad75f1281e56fbbeae897f5c3b2
2021-01-25 11:43:37 -08:00
Lu Fang
f32b10e564 [BE] Fix the broken test caffe2/caffe2/python:lazy_dyndep_test - test_allcompare (#50696)
Summary:
Pull Request resolved: https://github.com/pytorch/pytorch/pull/50696

set no deadline for test_allcompare

Test Plan: buck test mode/dev //caffe2/caffe2/python:lazy_dyndep_test -- --exact 'caffe2/caffe2/python:lazy_dyndep_test - test_allcompare (caffe2.caffe2.python.lazy_dyndep_test.TestLazyDynDepAllCompare)' --run-disabled

Reviewed By: hl475

Differential Revision: D25947800

fbshipit-source-id: d2043f97128e257ef06ebca9b68262bb1c0c5e6b
2021-01-18 16:21:06 -08:00
Lu Fang
1fdc35da2c [BE] Fix the broken test -- caffe2/caffe2/python:hypothesis_test - test_recurrent (#50668)
Summary:
Pull Request resolved: https://github.com/pytorch/pytorch/pull/50668

GPU initialization is sometimes slow

Test Plan: buck test mode/opt //caffe2/caffe2/python:hypothesis_test -- --exact 'caffe2/caffe2/python:hypothesis_test - test_recurrent (caffe2.caffe2.python.hypothesis_test.TestOperators)' --run-disabled

Reviewed By: hl475

Differential Revision: D25939037

fbshipit-source-id: 832700cf42ece848cda66dd629a06ecda207f086
2021-01-17 21:21:38 -08:00
Zhijing Li
05542f6222 EMA op (#50393)
Summary:
Pull Request resolved: https://github.com/pytorch/pytorch/pull/50393

Exponential Moving Average

Usage:

Add ema_options in the Adagrad optimizer. For details, please refer to the test workflow setting.

If ema_end == -1, the EMA never ends.
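The exponential-moving-average update and the `ema_end == -1` semantics can be sketched as below (names and the step-window logic are illustrative, not the actual operator's signature):

```python
def ema_update(ema, value, decay=0.999):
    """One EMA step: ema <- decay * ema + (1 - decay) * value."""
    return decay * ema + (1.0 - decay) * value

def apply_ema(values, decay=0.5, ema_start=0, ema_end=-1):
    """Run EMA over a sequence of values; ema_end == -1 means never stop,
    mirroring the option described above."""
    ema = values[0]
    for step, v in enumerate(values[1:], start=1):
        if step >= ema_start and (ema_end == -1 or step <= ema_end):
            ema = ema_update(ema, v, decay)
    return ema
```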

Test Plan:
buck test caffe2/caffe2/fb/optimizers:ema_op_optimizer_test

buck test caffe2/caffe2/fb/optimizers:ema_op_test

f240459719

Differential Revision: D25416056

fbshipit-source-id: a25e676a364969e3be2bc47750011c812fc3a62f
2021-01-13 08:58:01 -08:00
Hugo van Kemenade
473e78c0fa Remove redundant code for unsupported Python versions (#49486)
Summary:
Pull Request resolved: https://github.com/pytorch/pytorch/pull/49486

Remove code for Python 3.5 and lower.

There's more that can be removed/modernised, but sticking mainly to redundant version checks here, to keep the diff/PR smaller.

Pull Request resolved: https://github.com/pytorch/pytorch/pull/46579

Reviewed By: zou3519

Differential Revision: D24453571

Pulled By: ezyang

fbshipit-source-id: c2cfcf05d6c5f65df64d89c331692c9aec09248e
2021-01-06 12:45:46 -08:00
Richard Barnes
9945fd7253 Drop unused imports from caffe2/python (#49980)
Summary:
Pull Request resolved: https://github.com/pytorch/pytorch/pull/49980

From
```
./python/libcst/libcst codemod remove_unused_imports.RemoveUnusedImportsWithGlean --no-format caffe2/
```

Test Plan: Standard sandcastle tests

Reviewed By: xush6528

Differential Revision: D25727359

fbshipit-source-id: c4f60005b10546423dc093d31d46deb418352286
2021-01-05 13:17:46 -08:00
Samuel Marks
e6779d4357 [*.py] Rename "Arguments:" to "Args:" (#49736)
Summary:
I've written custom parsers and emitters for everything from docstrings to classes and functions. However, I recently came across an issue when I was parsing/generating from the TensorFlow codebase: inconsistent use of `Args:` and `Arguments:` in its docstrings.

```sh
(pytorch#c348fae)$ for name in 'Args:' 'Arguments:'; do
    printf '%-10s %04d\n' "$name" "$(rg -IFtpy --count-matches "$name" | paste -s -d+ -- | bc)"; done
Args:      1095
Arguments: 0336
```

It is easy enough to extend my parsers to support both variants, however it looks like `Arguments:` is wrong anyway, as per:

  - https://google.github.io/styleguide/pyguide.html#doc-function-args @ [`ddccc0f`](https://github.com/google/styleguide/blob/ddccc0f/pyguide.md)

  - https://chromium.googlesource.com/chromiumos/docs/+/master/styleguide/python.md#describing-arguments-in-docstrings @ [`9fc0fc0`](https://chromium.googlesource.com/chromiumos/docs/+/9fc0fc0/styleguide/python.md)

  - https://sphinxcontrib-napoleon.readthedocs.io/en/latest/example_google.html @ [`c0ae8e3`](https://github.com/sphinx-contrib/napoleon/blob/c0ae8e3/docs/source/example_google.rst)

Therefore, only `Args:` is valid. This PR replaces them throughout the codebase.

PS: For related PRs, see tensorflow/tensorflow/pull/45420

PPS: The trackbacks automatically appearing below are sending the same changes to other repositories in the [PyTorch](https://github.com/pytorch) organisation.

Pull Request resolved: https://github.com/pytorch/pytorch/pull/49736

Reviewed By: albanD

Differential Revision: D25710534

Pulled By: soumith

fbshipit-source-id: 61e8ff01abb433e9f78185c2d1d0cbd7c22c1619
2020-12-28 09:34:47 -08:00
skyline75489
46b83212d1 Remove unused six code for Python 2/3 compatibility (#48077)
Summary:
This is basically a reborn version of https://github.com/pytorch/pytorch/issues/45254 .

Ref: https://github.com/pytorch/pytorch/issues/42919

Pull Request resolved: https://github.com/pytorch/pytorch/pull/48077

Reviewed By: ngimel

Differential Revision: D25687042

Pulled By: bugra

fbshipit-source-id: 05f20a6f3c5212f73d0b1505b493b720e6cf74e5
2020-12-22 18:07:08 -08:00
Taylor Robie
faf6032945 Remove deadlines for Caffe2 hypothesis_test when running on GPU. (#49591)
Summary:
Pull Request resolved: https://github.com/pytorch/pytorch/pull/49591

A bunch of these tests are marked flaky, and have been since time immemorial. (Read: as far back as Buck will build.) However closer inspection reveals that they fail if and only if run on a GPU worker. What seems to be going on is that there are more jobs than GPUs, so the contention causes waits which registers as timeouts on the test.

This diff is kind of hacky, but it basically just drops deadlines if a GPU is present. Because Caffe2 is going away I'm not too terribly concerned about a beautiful solution, but we may as well keep some test coverage if it's easy.
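The deadline-dropping logic reduces to something like the following (names are hypothetical, and the actual GPU-detection call used by the diff is not shown):

```python
def deadline_for_env(gpu_present: bool, default_ms: int = 1000):
    """Hypothesis-style deadline to use: None (no deadline) when a GPU is
    present, since device contention makes wall-clock limits register as
    spurious timeouts."""
    return None if gpu_present else default_ms
```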

CC Sebastian, Ilia, Min, and Hongzheng who also have tasks for what seems to be the same flakiness.

Test Plan: Turn the tests back on and see if they fall over. (The failure repros reliably on an OnDemand GPU and is fixed by this change, so it's not really just a hail Mary.)

Reviewed By: ngimel

Differential Revision: D25632981

fbshipit-source-id: 43dcce416fea916ba91f891e9e5b59b2c11cca1a
2020-12-18 10:00:24 -08:00
Andrey Malevich
f5a26a554b [C2] Revive unsafe CoalesceOp (#49402)
Summary:
Pull Request resolved: https://github.com/pytorch/pytorch/pull/49402

In cases of NCCLAllReduce operations there could be non-trivial overhead for
launching cooperative kernels (especially in case of async execution of
different parts of the model). This diff is reviving this operator to make it
possible to fuse multiple operations into a single kernel.

Test Plan:
Unit-test.
Used in a later diff.

Reviewed By: xianjiec

Differential Revision: D25531206

fbshipit-source-id: 64b1c161233a726f9e2868f1059316e42a8ea1fc
2020-12-17 04:31:29 -08:00
Andrey Malevich
46debe7f23 [DPER] Introduce barrier operation to force synchronization of threads in async execution (#49322)
Summary:
Pull Request resolved: https://github.com/pytorch/pytorch/pull/49322

In some cases async execution might lose dependencies (alias-like ops) or produce suboptimal scheduling when there is a choice of which parts to schedule first. An example of the latter behavior can happen in ModelParallel training, where a copy can get lower priority compared to the rest of the execution on the given GPU, which will cause other GPUs to starve.

This operator makes it possible to address these issues by introducing extra explicit dependencies between ops.

Test Plan:
Unit-test/
E2E testing in the future diffs.

Reviewed By: xianjiec

Differential Revision: D24933471

fbshipit-source-id: 1668994c7856d73926cde022378a99e1e8db3567
2020-12-15 16:13:42 -08:00
Newsha Ardalani
0fb58d76a1 Support ArgMin in c2_pt_converter
Summary:
+ Add ArgMin support to the Caffe2-to-PyTorch converter
+ Use hypothesis to parameterize different conditions for the test

Test Plan: buck test //caffe2/torch/fb/model_transform/c2_convert:c2_pt_converter_test

Reviewed By: houseroad

Differential Revision: D25016203

fbshipit-source-id: 94489fcf1ed3183ec96f9796a5b4fb348fbde5bc
2020-12-05 16:35:34 -08:00
Rahul Manghwani
142b21fd44 Add SparseLengthsSum4BitRowwiseSparse in c2_pt_converter (#48240)
Summary:
Pull Request resolved: https://github.com/pytorch/pytorch/pull/48240

Adds support for converting the SparseLengthsSum4BitRowwiseSparse operator from caffe2 to pytorch as a part of c2_pt_converter

Test Plan:
Added a unit test

buck test //caffe2/torch/fb/model_transform/c2_convert:c2_pt_converter_test

Tests Passed :
https://our.intern.facebook.com/intern/testinfra/testrun/2251799856412296

Reviewed By: houseroad

Differential Revision: D25067833

fbshipit-source-id: 45cbc331ca35bee27e083714e65a1e87a2a2d2e0
2020-12-04 14:16:25 -08:00
Tristan Rice
dc7d8a889e caffe2: refactor context to allow being typed (#48340)
Summary:
Pull Request resolved: https://github.com/pytorch/pytorch/pull/48340

This changes the context managed classes from using a decorator to define them to using inheritance. Inheritance allows the python static type checking to work correctly.

```
context.define_context()
class Bar(object): ...

context.define_context(allow_default=True)
class Foo(object): ...
```

becomes
```
class Foo(context.Managed): ...

class Bar(context.DefaultManaged): ...
```

Behavior differences:
* arg_name has been removed since it's not used anywhere
* classes need to call `super()` in `__enter__/__exit__` methods if they override (none do)

This also defines a context.pyi file to add types for python3. python2 support should not be affected
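A minimal sketch of the inheritance-based pattern described above (a per-class, per-thread stack; the real caffe2 `context` module differs in details):

```python
import threading

class Managed:
    """Inheritance-based context manager; subclasses that override
    __enter__/__exit__ must call super()."""
    _local = threading.local()

    @classmethod
    def _stack(cls):
        stacks = getattr(cls._local, "stacks", None)
        if stacks is None:
            stacks = cls._local.stacks = {}
        return stacks.setdefault(cls, [])

    @classmethod
    def current(cls, required=True):
        stack = cls._stack()
        if stack:
            return stack[-1]
        if required:
            raise RuntimeError(f"no {cls.__name__} is active")
        return None

    def __enter__(self):
        self._stack().append(self)
        return self

    def __exit__(self, exc_type, exc, tb):
        assert self._stack().pop() is self

class DefaultManaged(Managed):
    """Like Managed, but current() falls back to a default instance."""

    @classmethod
    def current(cls, required=True):
        inst = super().current(required=False)
        return inst if inst is not None else cls()
```

Because the context is plain inheritance rather than a decorator, static checkers see `current()` and the `__enter__` return type directly on the subclass.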

Test Plan:
ci

  buck test //caffe2/caffe2/python:context_test //caffe2/caffe2/python:checkpoint_test

Reviewed By: dongyuzheng

Differential Revision: D25133469

fbshipit-source-id: 16368bf723eeb6ce3308d6827f5ac5e955b4e29a
2020-11-30 18:31:14 -08:00
Frank Seide
29f0e1e2ce Fused8BitRowwiseQuantizedToFloat operator support (#48407)
Summary:
Pull Request resolved: https://github.com/pytorch/pytorch/pull/48407

T79817692: Fused8BitRowwiseQuantizedToFloat operator support for c2_pt_converter.

Also refactored some repeated code from the existing test functions. (Initial commit only has refactoring.)

Test Plan: buck test //caffe2/torch/fb/model_transform/c2_convert:c2_pt_converter_test

Reviewed By: bugra

Differential Revision: D25069936

fbshipit-source-id: 72f6a845a1b4639b9542c6b230c8cd74b06bc5a0
2020-11-30 17:11:39 -08:00
Xiaodong Wang
d386d3323f [dper] supress excessive msg (#48404)
Summary:
Pull Request resolved: https://github.com/pytorch/pytorch/pull/48404

On bento this is printing a lot of msgs like (see N408483 if you're an internal user)
```
W1123 120952.322 schema.py:811] Scalar should be considered immutable. Only call Scalar.set() on newly created Scalar with unsafe=True. This will become an error soon.
```
And it's ignoring the log level I set at global level. Removing this line unless this is super important.

Test Plan: build a local dper package and verify

Differential Revision: D25163808

fbshipit-source-id: 338d01c82b4e67269328bbeafc088987c4cbac75
2020-11-30 14:55:52 -08:00
shubhambhokare1
bdf360f9f2 [ONNX] Update onnx submodule (#47366)
Summary:
Update onnx submodule to 1.8 release

Pull Request resolved: https://github.com/pytorch/pytorch/pull/47366

Reviewed By: hl475

Differential Revision: D24968733

Pulled By: houseroad

fbshipit-source-id: 2f0a3436ab3c9380ed8ff0887a483743c1209721
2020-11-30 00:05:46 -08:00
Tristan Rice
6eaf1e358c caffe2/core.Net: is_external_input rebuild lookup tables when necessary
Summary: is_external_input doesn't check whether the lookup tables are valid. Calling .Proto() should invalidate all lookup tables and have them rebuilt on a call to any method depending on them. This adds that check to is_external_input.
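The invalidate-and-rebuild pattern looks roughly like this (a simplified sketch, not the real `core.Net`):

```python
class Net:
    """Sketch of lazy lookup-table invalidation and rebuild."""

    def __init__(self, external_inputs):
        self._proto_external_inputs = list(external_inputs)
        self._external_input_set = None  # lookup table, built lazily

    def Proto(self):
        # Handing out the mutable proto invalidates cached lookup tables.
        self._external_input_set = None
        return self._proto_external_inputs

    def is_external_input(self, name):
        if self._external_input_set is None:  # rebuild when invalidated
            self._external_input_set = set(self._proto_external_inputs)
        return name in self._external_input_set
```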

Test Plan: internal unit tests

Reviewed By: dzhulgakov, esqu1

Differential Revision: D25100464

fbshipit-source-id: d792dec7e5aa9ffeafda88350e05cb757f4c4831
2020-11-20 10:53:24 -08:00
Xiaomeng Yang
2039ff3fbb [Caffe2] Optimize MishOp on CPU (#48212)
Summary:
Pull Request resolved: https://github.com/pytorch/pytorch/pull/48212

Optimize MishOp on CPU

Test Plan: buck test mode/dev-nosan //caffe2/caffe2/python/operator_test:activation_ops_test -- "mish"

Reviewed By: houseroad

Differential Revision: D25071304

fbshipit-source-id: fe94bfab512188d60412d66962983eff4f37bc07
2020-11-19 14:17:27 -08:00
Scott Wolchok
4c9eb57914 [PyTorch] Narrow Device to 2 bytes by narrowing DeviceType and DeviceIndex (#47023)
Summary:
Pull Request resolved: https://github.com/pytorch/pytorch/pull/47023

DeviceType pretty clearly only needs 1 byte. DeviceIndex only needs 1 byte given that machines don't have anywhere near 255 GPUs in them as far as I know.
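The size argument can be checked with a two-field packed struct (field names here are ours, not the C++ struct's):

```python
import ctypes

class Device(ctypes.Structure):
    """Two one-byte fields pack into two bytes with no padding."""
    _fields_ = [
        ("device_type", ctypes.c_int8),   # far fewer than 256 device types
        ("device_index", ctypes.c_int8),  # machines have nowhere near 255 GPUs
    ]
```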
ghstack-source-id: 116901430

Test Plan: Existing tests, added assertion to catch if my assumption about DeviceIndex is incorrect

Reviewed By: dzhulgakov

Differential Revision: D24605460

fbshipit-source-id: 7c9a89027fcf8eebd623b7cdbf6302162c981cd2
2020-11-18 19:39:40 -08:00
Tristan Rice
b10d6c6089 [caffe2] cache NextName indexes for faster name generation (#47768)
Summary:
Pull Request resolved: https://github.com/pytorch/pytorch/pull/47768

This stores the next ID for a given NextName(prefix, output_id) so repeated calls to NextName are significantly faster. This accounts for ~65% of time spent for large models.
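The caching idea can be sketched as follows (the name format and `(prefix, output_id)` keying are illustrative; the real `NextName` differs):

```python
class Namer:
    """Cache the next free index per (prefix, output_id) so name generation
    is O(1) amortized instead of rescanning existing names each call."""

    def __init__(self):
        self._used = set()
        self._next_index = {}  # (prefix, output_id) -> next index to try

    def next_name(self, prefix, output_id=0):
        key = (prefix, output_id)
        index = self._next_index.get(key, 0)
        while True:
            name = f"{prefix}_{index}"
            index += 1
            if name not in self._used:
                self._next_index[key] = index  # resume here next time
                self._used.add(name)
                return name
```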

Test Plan:
buck test //caffe2/caffe2/python/...

will launch canary job before landing to ensure no regressions + confirm speedup

Reviewed By: dzhulgakov

Differential Revision: D24876961

fbshipit-source-id: 668d73060d800513bc72d7cd405a47d15c4acc34
2020-11-17 12:24:00 -08:00
Ankur Singla
549ef1d668 [caffe][memonger] Extend operator schema check to dag memonger (#48021)
Summary:
Pull Request resolved: https://github.com/pytorch/pytorch/pull/48021

Extending the operator schema check for the simple memonger to the dag memonger as well. As part of this, a fix is made to handle in-place ops (ops with at least one output name the same as an input blob). Earlier, all output blobs from ops were treated as shareable, but that failed the assertion that external input blobs with the same name are not allowed to share.

Test Plan: Added corresponding unit tests

Reviewed By: hlu1

Differential Revision: D24968862

fbshipit-source-id: b6679a388a82b0d68f65ade64b85560354aaa3ef
2020-11-16 19:17:55 -08:00
Ankur Singla
f743b5639a [caffe2][memonger] Add support for distributed inference predict nets in DAG memonger (#47718)
Summary:
Pull Request resolved: https://github.com/pytorch/pytorch/pull/47718

Distributed Inference splits a predict net into multiple parts, part0 being the main part which contains ops to make remote calls to other parts. part0 predict net may contain AsyncIf ops to optimize rpc call usage. AsyncIf ops have internal nets which may refer to memongered blobs. This change handles AsyncIf ops to update internal nets to refer to memongered blobs.

As part of this change, I am also updating the dag memonger traversal to always start from root ops, i.e. ops with 0 in-degree. The earlier logic would start traversing ops based on input head blobs, and if one of the head inputs was used in a non-root op that got visited before its parent, the traversal would throw an assertion error here: https://fburl.com/diffusion/ob110s9z . Almost all of the distributed inference part0 nets were throwing this assertion error.
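The root-first traversal amounts to a Kahn-style topological order seeded from zero-in-degree ops (an illustrative sketch; `ops` here maps each op to its children, which is not the memonger's actual data structure):

```python
from collections import deque

def traversal_order(ops):
    """Visit ops starting from the roots (in-degree 0), so no op is
    reached before its parents."""
    indegree = {op: 0 for op in ops}
    for children in ops.values():
        for c in children:
            indegree[c] += 1
    queue = deque(op for op, d in indegree.items() if d == 0)
    order = []
    while queue:
        op = queue.popleft()
        order.append(op)
        for c in ops[op]:
            indegree[c] -= 1
            if indegree[c] == 0:
                queue.append(c)
    return order
```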

Test Plan: Added corresponding tests in memonger_test.py. Could not find unit tests in the C++ version of memonger.

Reviewed By: hlu1

Differential Revision: D24872010

fbshipit-source-id: 1dc99b2fb52b2bc692fa4fc0aff6b7e4c5e4f5b0
2020-11-13 14:12:07 -08:00
Jonathan Kwok
a3e08e5344 Support ReduceSum in c2_pt_converter (#47889)
Summary:
Pull Request resolved: https://github.com/pytorch/pytorch/pull/47889

Adds support for converting the [caffe2 ReduceSum](https://caffe2.ai/docs/operators-catalogue#reducesum) operator to torch.
ghstack-source-id: 116580127

Test Plan:
buck test //caffe2/torch/fb/model_transform/c2_convert:c2_pt_converter_test : [results](https://our.intern.facebook.com/intern/testinfra/testrun/6755399466095119)

    ✓ ListingSuccess: caffe2/torch/fb/model_transform/c2_convert:c2_pt_converter_test - main (60.273)
    ✓ Pass: caffe2/torch/fb/model_transform/c2_convert:c2_pt_converter_test - test_sub_op (caffe2.torch.fb.model_transform.c2_convert.c2_pt_converter_test.C2PTConverterTest) (101.119)
    ✓ Pass: caffe2/torch/fb/model_transform/c2_convert:c2_pt_converter_test - test_layer_norm_conversion (caffe2.torch.fb.model_transform.c2_convert.c2_pt_converter_test.C2PTConverterTest) (101.404)
    ✓ Pass: caffe2/torch/fb/model_transform/c2_convert:c2_pt_converter_test - test_local_model_conversion (caffe2.torch.fb.model_transform.c2_convert.c2_pt_converter_test.C2PTConverterTest) (101.966)
    ✓ Pass: caffe2/torch/fb/model_transform/c2_convert:c2_pt_converter_test - test_reduce_sum (caffe2.torch.fb.model_transform.c2_convert.c2_pt_converter_test.C2PTConverterTest) (114.896)

Reviewed By: bugra

Differential Revision: D24925318

fbshipit-source-id: 3f3b791eff1b03e8f5adee744560fe8bc811c659
2020-11-13 12:02:58 -08:00
Gary Zheng
f1babb00f0 [caffe2] Fix ListWithEvicted _pprint_impl wrongly printing _evicted_values (#47881)
Summary:
Pull Request resolved: https://github.com/pytorch/pytorch/pull/47881

ListWithEvicted's _pprint_impl was accidentally printing _items before this change.

Reviewed By: dzhulgakov

Differential Revision: D24928521

fbshipit-source-id: 0d7940719b4a27defbaae3b99af104d7fe7b5144
2020-11-13 09:23:10 -08:00
Alberto Alfarano
59e96c55f7 Support MatMul in c2_pt_converter
Summary: Added the MatMul operator for caffe2

Test Plan: buck test //caffe2/torch/fb/model_transform/c2_convert:c2_pt_converter_test

Reviewed By: bugra

Differential Revision: D24920937

fbshipit-source-id: 7ba09ba0439cb9bd15d6a41fd8ff1a86d8d11437
2020-11-12 20:56:58 -08:00
Peiyao Zhou
4078f44668 [TB][embedding supporting] Modify histogram to accept multiple types to skip CastOp and avoid OOMing in CastOp
Summary: To support min/max/mean/std, SummarizeOp needs to skip size checking (similar to the LpNorm error mentioned above) and accept multiple types

Test Plan:
unit test:
`buck test //caffe2/caffe2/fb/tensorboard/tests:tensorboard_accumulate_histogram_op_test`

https://our.intern.facebook.com/intern/testinfra/testrun/1407375057859572

`buck test //caffe2/caffe2/fb/tensorboard/tests:tensorboard_accumulate_histogram_op_test --stress-runs 1000`

https://our.intern.facebook.com/intern/testinfra/testrun/2533274832166362

Reviewed By: cryptopic

Differential Revision: D24605507

fbshipit-source-id: fa08372d7c9970083c38abd432d4c86e84fb10e0
2020-11-11 12:03:54 -08:00