Commit graph

27699 commits

Author SHA1 Message Date
Ivan Kobzarev
3852215170 [vulkan] jit passes for vulkan conv2 prepack and fuse with clamp (#39282)
Summary: Pull Request resolved: https://github.com/pytorch/pytorch/pull/39282

Test Plan: Imported from OSS

Differential Revision: D21962424

Pulled By: IvanKobzarev

fbshipit-source-id: 2d20e827d2c3836b7e6b443293377c68dc1ffa5a
2020-06-20 14:12:21 -07:00
Pritam Damania
f69460d0cb Add unit test to verify DDP + RPC correctness. (#40139)
Summary:
Pull Request resolved: https://github.com/pytorch/pytorch/pull/40139

This unit test runs the same set of operations locally and then with
DDP + RPC to verify correctness.
ghstack-source-id: 106287490

Test Plan:
buck test mode/dev-nosan caffe2/test/distributed/:ddp_under_dist_autograd

I ran these steps to make sure I am working on a clean git repo.

git submodule update --init --recursive

to get the latest TensorPipe code; otherwise the build will fail.

To record installed binaries and torch package wheels to system paths:

with-proxy env BUILD_CAFFE2_OPS=0 USE_CUDA=0 USE_MKLDNN=0 USE_DISTRIBUTED=1 python setup.py install --record files.txt

To remove binaries and torch package wheels from system paths:

xargs rm -rf < files.txt

To build in develop mode:

with-proxy env BUILD_CAFFE2_OPS=0 USE_CUDA=0 USE_MKLDNN=0 USE_DISTRIBUTED=1 python setup.py develop

pytest test/distributed/test_ddp_under_dist_autograd.py::TestDdpUnderDistAutograd -v

Differential Revision: D22084385

fbshipit-source-id: e1f57e86ceddd4c96920ed904898e1763b47e8f2
2020-06-20 13:13:32 -07:00
Vitaly Fedyunin
a47fb57957 Change memory format promotion rules of point wise operators. (#37968)
Summary:
Pull Request resolved: https://github.com/pytorch/pytorch/pull/37968

Modify memory format promotion rules to avoid promoting when one of the inputs is ambiguous. The new rules are:
 Ambiguous + Contiguous = Contiguous
 Ambiguous + Channels Last = Channels Last
 Contiguous + Ambiguous ( NC11 ) = Contiguous
 Contiguous + Channels Last = Contiguous ( + Warning )  Before this PR: Channels Last
 Channels Last + Contiguous = Channels Last ( + Warning )
 Channels Last + Ambiguous = Channels Last
 Bias + Channels Last = Channels Last
 Channels Last + Bias = Channels Last
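The table above can be sketched as a small Python helper. This is a hypothetical illustration of the promotion logic, not PyTorch's actual C++ implementation; "ambiguous" stands for NC11-like tensors (including bias) whose strides fit both layouts, and the ambiguous + ambiguous case (not listed in the table) is assumed to fall back to contiguous:

```python
def promote_memory_format(a, b):
    """Sketch of the memory-format promotion table above.

    Each argument is one of "contiguous", "channels_last", or "ambiguous".
    Returns (result_format, warn), where warn mirrors the "( + Warning )"
    rows: two definite but conflicting layouts promote to the first
    input's layout and emit a warning.
    """
    if a == "ambiguous":
        # Ambiguous input follows the other operand's layout.
        return ("contiguous", False) if b == "ambiguous" else (b, False)
    if b == "ambiguous":
        return (a, False)
    if a != b:
        # e.g. Contiguous + Channels Last = Contiguous ( + Warning )
        return (a, True)
    return (a, False)
```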

Test Plan: Imported from OSS

Differential Revision: D21819573

Pulled By: VitalyFedyunin

fbshipit-source-id: 7381aad11720b2419fb37a6da6ff4f54009c6532
2020-06-20 10:33:32 -07:00
Ivan Kobzarev
c1dfc05cc9 [android][test_app][reland] test_app example linking to pytorch_android aar content (#40313)
Summary: Pull Request resolved: https://github.com/pytorch/pytorch/pull/40313

Test Plan: Imported from OSS

Differential Revision: D22147079

Pulled By: IvanKobzarev

fbshipit-source-id: c70a0a9dda8834376ed304a461318d4c6ef84582
2020-06-20 07:34:42 -07:00
Haixin Liu
4cbf87dc92 [PyTorch Numeric Suite] Add support for dynamic LSTM (#40065)
Summary:
Pull Request resolved: https://github.com/pytorch/pytorch/pull/40065

Add dynamic LSTM support to all three Numeric Suite APIs: compare_weights(), compare_model_stub() and compare_model_outputs().
ghstack-source-id: 106291782

Test Plan:
buck test mode/dev caffe2/test:quantization -- 'test_compare_weights_lstm_dynamic'

buck test mode/dev caffe2/test:quantization -- 'test_compare_model_stub_lstm_dynamic'

buck test mode/dev caffe2/test:quantization -- 'test_compare_model_outputs_lstm_dynamic'

buck test mode/dev caffe2/test:quantization -- 'test_compare_weights_conv_static'

buck test mode/dev caffe2/test:quantization -- 'test_compare_weights_linear_static'

buck test mode/dev caffe2/test:quantization -- 'test_compare_weights_linear_dynamic'

buck test mode/dev caffe2/test:quantization -- 'test_compare_model_stub_conv_static'

buck test mode/dev caffe2/test:quantization -- 'test_compare_model_stub_linear_static'

buck test mode/dev caffe2/test:quantization -- 'test_compare_model_stub_submodule_static'

buck test mode/dev caffe2/test:quantization -- 'test_compare_model_stub_functional_static'

buck test mode/dev caffe2/test:quantization -- 'test_compare_model_stub_linear_dynamic'

buck test mode/dev caffe2/test:quantization -- 'test_compare_model_outputs_conv_static'

buck test mode/dev caffe2/test:quantization -- 'test_compare_model_outputs_linear_static'

buck test mode/dev caffe2/test:quantization -- 'test_compare_model_outputs_functional_static'

buck test mode/dev caffe2/test:quantization -- 'test_compare_model_outputs_linear_dynamic'

Differential Revision: D22058275

fbshipit-source-id: 76cb42ce16b6b02b0b90f7582252756582660921
2020-06-20 07:00:13 -07:00
Raghuraman Krishnamoorthi
0079e429d6 Remove incorrect warning message on rounding mode (#40301)
Summary:
Pull Request resolved: https://github.com/pytorch/pytorch/pull/40301

ghstack-source-id: 106258861

Test Plan: Fix warning message

Differential Revision: D22143261

fbshipit-source-id: 73a3b09ea82eb470c6702a413d1f984bbf38b3ea
2020-06-20 02:09:44 -07:00
Zafar
9da277c635 [quant][graphmodel] linear_relu (#40021)
Summary:
Pull Request resolved: https://github.com/pytorch/pytorch/pull/40021

This replaces #36889 due to significant merge conflicts

Test Plan: Imported from OSS

Differential Revision: D22087061

Pulled By: z-a-f

fbshipit-source-id: 6a65cdd3c0c0c957968a9d017902fb6d03b58150
2020-06-19 23:32:54 -07:00
Jerry Zhang
e04a611b91 [quant][graphmode] clang format changes (#40329)
Summary: Pull Request resolved: https://github.com/pytorch/pytorch/pull/40329

Test Plan: Imported from OSS

Differential Revision: D22149706

fbshipit-source-id: 3c07cb0c09a53a01fc69185943ddc409264a6ff5
2020-06-19 23:22:43 -07:00
Jerry Zhang
59ca1d31ca [quant][graphmode] docstrings for top level APIs (#40328)
Summary: Pull Request resolved: https://github.com/pytorch/pytorch/pull/40328

Test Plan: Imported from OSS

Differential Revision: D22149708

fbshipit-source-id: 63a1cd229d9e4668fba0ef3977e894cb8984318b
2020-06-19 22:20:23 -07:00
Jongsoo Park
7a837019a4 [caffe2] optimize 2/4-bit row-wise quantization (#387)
Summary:
Pull Request resolved: https://github.com/pytorch/FBGEMM/pull/387

Pull Request resolved: https://github.com/pytorch/pytorch/pull/39985

avx2-optimized 2/4-bit row-wise quantization/dequantization in perfkernels.
This diff slightly changes the numerics of quantization by multiplying with the inverse of the scale instead of dividing by the scale.
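The numerics change can be illustrated with a pure-Python sketch (a hypothetical illustration, not the avx2 perfkernels code): the optimized path computes the inverse of the scale once per row and multiplies, trading one division per element for one per row, at the cost of occasional one-ulp rounding differences.

```python
def quantize_row(row, bits, use_inverse):
    """Row-wise n-bit min/max quantization sketch.

    use_inverse=True mirrors the optimized path: multiply by 1/scale
    (computed once per row) instead of dividing every element by scale.
    The two variants can round differently in rare cases, which is the
    numerics change described above.
    """
    qmax = (1 << bits) - 1
    xmin, xmax = min(row), max(row)
    scale = (xmax - xmin) / qmax if xmax > xmin else 1.0
    if use_inverse:
        inv = 1.0 / scale  # one division per row
        q = [round((x - xmin) * inv) for x in row]
    else:
        q = [round((x - xmin) / scale) for x in row]  # one division per element
    return [min(max(v, 0), qmax) for v in q], scale, xmin
```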

Test Plan:
In my devserver

for i in 2 4 8; do echo $i; buck run mode/opt :fused_rowwise_nbit_conversion_bench -- --bit-rate=$i; done

Before this diff
2-bit
        3.35394 ms.        100%. FloatToFused2BitRowwiseQuantized
4-bit
        3.60351 ms.        100%. FloatToFused4BitRowwiseQuantized
8-bit
       0.434467 ms.        100%. FloatToFused8BitRowwiseQuantized

After this diff

2-bit
       0.606386 ms.        100%. FloatToFused2BitRowwiseQuantized
4-bit
       0.446683 ms.        100%. FloatToFused4BitRowwiseQuantized
8-bit
         0.4349 ms.        100%. FloatToFused8BitRowwiseQuantized

Reviewed By: choudharydhruv, jianyuh

Differential Revision: D22033195

fbshipit-source-id: d3a219e47b8345268d90a160c9314ed0d5b71467
2020-06-19 21:28:31 -07:00
Ailing Zhang
cfe1c6ef9e Update XLAPreAutograd keys. (#40265)
Summary: Pull Request resolved: https://github.com/pytorch/pytorch/pull/40265

Differential Revision: D22137998

Pulled By: ailzhang

fbshipit-source-id: 41edac06f8aafa5d4c1dcefd5da81be6c9ac4a9c
2020-06-19 21:12:50 -07:00
lixinyu
5c133eb2db fix small typo in optim adamw (#40283)
Summary: Pull Request resolved: https://github.com/pytorch/pytorch/pull/40283

Test Plan: Imported from OSS

Differential Revision: D22138796

Pulled By: glaringlee

fbshipit-source-id: 2c3a35f7e539b43ee5abf8dbc10b95df5d62fccb
2020-06-19 19:10:17 -07:00
Wanchao Liang
4b028a8e07 [jit] support pad_sequence/pack_sequence (#39844)
Summary: Pull Request resolved: https://github.com/pytorch/pytorch/pull/39844

Test Plan: Imported from OSS

Reviewed By: jamesr66a

Differential Revision: D22026720

Pulled By: wanchaol

fbshipit-source-id: cc51ea77eff3689e319ec7e89a54c788646b5940
2020-06-19 19:03:14 -07:00
Mike Ruberry
4f761f325c Back out "[pytorch][PR] Removes dunder div"
Summary: NVIDIA's Apex is updating to no longer rely on this behavior, but we're reverting this Python2->Python3 update to unblock internal Apex users.

Test Plan: Sandcastle + OSS CI.

Reviewed By: ngimel

Differential Revision: D22146782

fbshipit-source-id: f9483d2cbf9dc3a469ad48a6c863edea3ae51070
2020-06-19 18:31:20 -07:00
Xiang Gao
5555d210b1 Cleanup TensorIteratorDynamicCasting.h (#39839)
Summary:
std::complex and thrust::complex are gone.
Pull Request resolved: https://github.com/pytorch/pytorch/pull/39839

Differential Revision: D22139528

Pulled By: ngimel

fbshipit-source-id: 535e8c137212338569c83c46ed6fd829934e4043
2020-06-19 18:17:50 -07:00
Jerry Zhang
b2f489dc57 [quant][graphmode] Rename graph mode quantization API to quantize_jit (#40212)
Summary: Pull Request resolved: https://github.com/pytorch/pytorch/pull/40212

Test Plan: Imported from OSS

Reviewed By: z-a-f

Differential Revision: D22144745

fbshipit-source-id: 38a19b5afdddbbce262eea8ddf5b68458e6017b3
2020-06-19 18:13:37 -07:00
Hector Yuen
6d70d1574f rename the LayerNorm operator and add it to the replacement map (#40318)
Summary:
Pull Request resolved: https://github.com/pytorch/pytorch/pull/40318

Rename the layernorm fakefp16 operator to the right naming convention and
add it to the map of replacement ops.

This can be done even if the operator is not complete, because we are blacklisting anyway.

Test Plan: net_runner and inspected the log that replacement happened

Reviewed By: venkatacrc

Differential Revision: D22145900

fbshipit-source-id: f19794ec05234b877f7697ed8b05dd8f46606c47
2020-06-19 16:49:22 -07:00
Xiang Gao
fb17b05f33 Make dynamic casting case also benefit from unrolling (#34749)
Summary:
This is based on https://github.com/pytorch/pytorch/issues/34708; I didn't use a stacked diff because it is not very convenient for cherry-picking. Please review after https://github.com/pytorch/pytorch/issues/34708 is merged.

**Legacy kernels are now completely gone. And the rewrite of GPU loops is done.**

Benchmark shows big improvements in performance on RTX 2080ti:
https://github.com/zasdfgbnm/things/blob/master/2020Q1/benchmark-unroll-with-dyn-casting.ipynb
Pull Request resolved: https://github.com/pytorch/pytorch/pull/34749

Differential Revision: D22139597

Pulled By: ngimel

fbshipit-source-id: 5995744c339afee331f15ea2e483c6acf3ce0c62
2020-06-19 16:43:46 -07:00
Ilia Cherniavskii
4194456158 Add _enable_record_function python API (#40306)
Summary:
Pull Request resolved: https://github.com/pytorch/pytorch/pull/40306

Adding _enable_record_function

Test Plan: CI

Differential Revision: D22143026

fbshipit-source-id: dc466ad3303cb1d52a66aab74ba668e36bab5458
2020-06-19 16:08:00 -07:00
Pritam Damania
a80dd02a22 [Resubmit] Ensure NCCL_BLOCKING_WAIT=1 works for dist.barrier() (#40249)
Summary:
Pull Request resolved: https://github.com/pytorch/pytorch/pull/40249

Blocking wait didn't work for dist.barrier() since we performed a
cudaDeviceSynchronize() before any of the timeout checks. As a
result, in case of failures/desync the barrier() call would get stuck on
cudaDeviceSynchronize() and would never return a timeout error to the user.

To fix this, I've moved the device synchronization after the timeout checks.
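The reordering can be sketched as follows. This is a minimal Python stand-in for the C++ ProcessGroupNCCL logic, assuming hypothetical `work_done` and `device_synchronize` callables (the latter representing cudaDeviceSynchronize):

```python
import time


def blocking_wait(work_done, device_synchronize, timeout_s):
    """Wait for a collective with NCCL_BLOCKING_WAIT-style semantics.

    The timeout check runs *before* the blocking device sync, so a
    desynced barrier() raises a timeout instead of hanging forever on
    the device synchronization.
    """
    deadline = time.monotonic() + timeout_s
    while not work_done():
        if time.monotonic() >= deadline:
            raise TimeoutError("collective operation timed out")
        time.sleep(0.001)
    # Only synchronize with the device once the work completed in time.
    device_synchronize()
```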
ghstack-source-id: 106250153

Test Plan: waitforbuildbot

Differential Revision: D22126152

fbshipit-source-id: d919a7a6507cca7111d8ad72e916777b986d0d67
2020-06-19 15:42:43 -07:00
Shen Li
314d645e05 Add a warning to mention that async_execution does not work with autograd profiler (#40309)
Summary: Pull Request resolved: https://github.com/pytorch/pytorch/pull/40309

Test Plan: Imported from OSS

Differential Revision: D22145130

Pulled By: mrshenli

fbshipit-source-id: d6f7250e53648d6939367f1ad4c9b898be00afed
2020-06-19 15:35:00 -07:00
Shen Li
5d0044389a Minor RPC doc improvements (#40305)
Summary: Pull Request resolved: https://github.com/pytorch/pytorch/pull/40305

Test Plan: Imported from OSS

Differential Revision: D22144304

Pulled By: mrshenli

fbshipit-source-id: 1c8a9648043eabaf909c6e4ae116672396a9f0f5
2020-06-19 15:34:58 -07:00
Shen Li
a9f0156271 Fix RRef to_here() docs (#40300)
Summary: Pull Request resolved: https://github.com/pytorch/pytorch/pull/40300

Test Plan: Imported from OSS

Differential Revision: D22143252

Pulled By: mrshenli

fbshipit-source-id: 85a5b7a7bab9ad29fe71064c927b059dd1ab39f9
2020-06-19 15:34:56 -07:00
Shen Li
caf0c286b8 Fix RPC API doc links (#40299)
Summary: Pull Request resolved: https://github.com/pytorch/pytorch/pull/40299

Test Plan: Imported from OSS

Differential Revision: D22143156

Pulled By: mrshenli

fbshipit-source-id: c11848ebfe8863d59509a0fbc042eed71a58e514
2020-06-19 15:34:53 -07:00
Shen Li
d6d579397d Improve docs for init_rpc (#40298)
Summary: Pull Request resolved: https://github.com/pytorch/pytorch/pull/40298

Test Plan: Imported from OSS

Differential Revision: D22143155

Pulled By: mrshenli

fbshipit-source-id: deadcc29eda157b401ca6a091c3ba17455acb6b5
2020-06-19 15:34:51 -07:00
Shen Li
3ca05500fa Improve RPC documents (#40296)
Summary:
Pull Request resolved: https://github.com/pytorch/pytorch/pull/40296

1. Added a link to parameter server tutorial
2. Explained current states for TorchScript support

Test Plan: Imported from OSS

Differential Revision: D22142647

Pulled By: mrshenli

fbshipit-source-id: ffd697dd64a3aa874cf3f3488122ed805903370d
2020-06-19 15:34:49 -07:00
Shen Li
4463f59c2c Let torch.futures.wait_all re-throw errors (#40291)
Summary: Pull Request resolved: https://github.com/pytorch/pytorch/pull/40291

Test Plan: Imported from OSS

Differential Revision: D22141702

Pulled By: mrshenli

fbshipit-source-id: 50b5e5c687e87930aef3a50cc40839729a4eb9c6
2020-06-19 15:32:56 -07:00
Jiakai Liu
f92089b8ca [pytorch] tweak code analyzer script to handle new namespaces (#40276)
Summary:
Pull Request resolved: https://github.com/pytorch/pytorch/pull/40276

- add a couple new namespaces;
- handle the case where both the contextual namespace and the operator namespace
  are set (BackendSelectRegister.cpp and #39401);
- improve error message;

Test Plan: Imported from OSS

Differential Revision: D22135686

Pulled By: ljk53

fbshipit-source-id: 14d359c93573349b8fe1e05d7e44d875295a5f6d
2020-06-19 14:54:21 -07:00
Nikita Shulga
6df97c20c2 Make test case precision property (#40057)
Summary:
Make `common_utils.TestCase.precision` a property, because it is overridden as such in `common_device_type`.
Pull Request resolved: https://github.com/pytorch/pytorch/pull/40057

Differential Revision: D22138385

Pulled By: malfet

fbshipit-source-id: 0e7c14654bf60f18f585efc61f96fdd0af23346f
2020-06-19 14:24:55 -07:00
James Reed
c73095e78f Add note to serialization docs about zipfile format (#40288)
Summary: Pull Request resolved: https://github.com/pytorch/pytorch/pull/40288

Test Plan: Imported from OSS

Differential Revision: D22140324

Pulled By: jamesr66a

fbshipit-source-id: 01d7aa642ed2f4e4bdac4b7f3223bf4d7e62fd4d
2020-06-19 13:40:08 -07:00
Negin Raoof
73a156e81f [ONNX] Update pytorch/onnx docs for new export API args (#39802)
Summary:
Update pytorch/onnx docs for new export API args:
Use external data format and Training args.
Pull Request resolved: https://github.com/pytorch/pytorch/pull/39802

Reviewed By: hl475

Differential Revision: D22139664

Pulled By: houseroad

fbshipit-source-id: 7d6dcf75129cb88987f8c37b7d9d48ca594c0f38
2020-06-19 13:38:47 -07:00
neginraoof
41865d8f19 [ONNX] Update black_listed_operators for opset 12 (#39414)
Summary:
Remove black_listed_operators for opset 12 as we now support these ops.
Pull Request resolved: https://github.com/pytorch/pytorch/pull/39414

Reviewed By: hl475

Differential Revision: D21915584

Pulled By: houseroad

fbshipit-source-id: 37ec7bdd2b5a845484535054026d6613d0921b7a
2020-06-19 13:33:25 -07:00
Hector Yuen
65f67bbe92 improvements to sls 4bit
Summary: enhance the sls test to reflect the shapes and values

Test Plan: ran sls tests on device and emulator

Reviewed By: amylittleyang

Differential Revision: D22094433

fbshipit-source-id: 610a79433ae6c58f626b5984a3d89d9e1bbf4668
2020-06-19 13:30:53 -07:00
Luca Wehrstedt
c3ce35e67b Update TensorPipe submodule
Summary:
This is to import a few features:
- a fix to a race condition happening in SHM's use of epoll
- a new XTH channel, that uses a memcpy to transfer between threads of the same process
- a new MPT channel, that chunks and multiplexes tensors over multiple transport event loops

Test Plan: Run in CircleCI

Reviewed By: patricklabatut

Differential Revision: D22140736

fbshipit-source-id: a3cee8a3839d98a42b8438844a9fd24fd85b2744
2020-06-19 13:22:06 -07:00
Jeff Daily
b48742322a move ROCm 3.5 thunk upgrade from build.sh into test.sh (#40286)
Summary:
https://github.com/pytorch/pytorch/issues/40181 incorrectly placed the thunk work-around into the build.sh scripts.  It needed to be in test.sh.
Pull Request resolved: https://github.com/pytorch/pytorch/pull/40286

Differential Revision: D22140366

Pulled By: xw285cornell

fbshipit-source-id: 2a3d73594d1963c8c80cd8c45d06f1c963b9cbee
2020-06-19 12:30:30 -07:00
Rohan Varma
ca0540a7eb Remove variable shadowing from tensorpipe lambda (#39126)
Summary:
Pull Request resolved: https://github.com/pytorch/pytorch/pull/39126
futureResponseMessage is shadowed in the pipeWrite lambda, which
creates some confusion, since it is used in the initial error handling but then
a future of the same name is created when marking the future as completed. This
change removes the shadowing by getting rid of the futureResponseMessage capture
and capturing the message id instead. This also means we no longer
need to copy the future into the lambda.
ghstack-source-id: 106211353

Test Plan: CI

Differential Revision: D22127398

fbshipit-source-id: c98a53b5630ce487461e4ca9cd72fbd34788298d
2020-06-19 12:25:42 -07:00
Ilia Cherniavskii
cdbf78fba0 Revert D22118945: [android] test_app example linking to pytorch_android aar content
Test Plan: revert-hammer

Differential Revision:
D22118945 (52a2adb3f4)

Original commit changeset: 31c54b49b1f2

fbshipit-source-id: 0c4929d4441572debbbc49f8674b9fc49b726599
2020-06-19 12:16:18 -07:00
Edmund Williams Jr
465138ec39 refactoring TestQuantizeScript (#39677)
Summary: Pull Request resolved: https://github.com/pytorch/pytorch/pull/39677

Test Plan:
Moved a test class suite between files. Since this is a simple code refactor, the functionality should be unchanged, so I verified that the test output was the same before and after the refactor.
The image below shows the output of TestGraphModePostTrainingStatic before the refactor:

{F239676498}

This image shows the output of TestQuantizeScript (renamed version that is in test_quantize_script.py instead of test_quantize.py)

{F239676509}

Differential Revision: D21940638

Pulled By: edmundw314

fbshipit-source-id: 54160a5151aadf3a34bdac2bcaeb52904e6653ed
2020-06-19 11:47:00 -07:00
Gemfield
3684dfafc2 Fix typos in RPC examples (#40280)
Summary:
There is a missing '=' in an rpc_sync call in the RPC examples.
Pull Request resolved: https://github.com/pytorch/pytorch/pull/40280

Differential Revision: D22137619

Pulled By: mrshenli

fbshipit-source-id: f4e4b85f68fd68d29834e199416176454b6bbcc2
2020-06-19 11:43:11 -07:00
Nikita Shulga
b670ff2d3a Add typing for _CudaStreamBase and _CudaEventBase classes (#40256)
Summary: Pull Request resolved: https://github.com/pytorch/pytorch/pull/40256

Differential Revision: D22139369

Pulled By: malfet

fbshipit-source-id: c7f4f8709700eb85d971ad504dd3552e311cb58d
2020-06-19 11:29:41 -07:00
Omkar Salpekar
52e4e3a9b8 NCCL Comment typo fix (#40242)
Summary:
Pull Request resolved: https://github.com/pytorch/pytorch/pull/40242

Comment Typo in ProcessGroupNCCL
ghstack-source-id: 106088379

Test Plan: CI

Differential Revision: D22099219

fbshipit-source-id: ddce91e640d4eea54e0698166c6276aeffedeb1e
2020-06-19 11:24:52 -07:00
Haixin Liu
d9c804ce22 [PyTorch Numeric Suite] Add support for dynamic quantization of linear module (#39024)
Summary:
Pull Request resolved: https://github.com/pytorch/pytorch/pull/39024

Add support for dynamic quantization of linear module.
ghstack-source-id: 106205450

Test Plan:
buck test mode/dev caffe2/test:quantization -- 'test_compare_weights_conv_static'

buck test mode/dev caffe2/test:quantization -- 'test_compare_weights_linear_static'

buck test mode/dev caffe2/test:quantization -- 'test_compare_weights_linear_dynamic'

buck test mode/dev caffe2/test:quantization -- 'test_compare_model_stub_conv_static'

buck test mode/dev caffe2/test:quantization -- 'test_compare_model_stub_linear_static'

buck test mode/dev caffe2/test:quantization -- 'test_compare_model_stub_submodule_static'

buck test mode/dev caffe2/test:quantization -- 'test_compare_model_stub_functional_static'

buck test mode/dev caffe2/test:quantization -- 'test_compare_model_stub_linear_dynamic'

buck test mode/dev caffe2/test:quantization -- 'test_compare_model_outputs_conv_static'

buck test mode/dev caffe2/test:quantization -- 'test_compare_model_outputs_linear_static'

buck test mode/dev caffe2/test:quantization -- 'test_compare_model_outputs_functional_static'

buck test mode/dev caffe2/test:quantization -- 'test_compare_model_outputs_linear_dynamic'

Differential Revision: D21675971

fbshipit-source-id: c9562744dc59b61cf47f2787a934e6a5a53e12fd
2020-06-19 10:58:56 -07:00
Yinghai Lu
07e581d639 Remove useless name check for inputs (#4618)
Summary:
Pull Request resolved: https://github.com/pytorch/glow/pull/4618

`onnxInputNames_` originated from positional name binding. This is inherited from C2, where inputs are bound by position. So it's useless to check the name here as long as `onnxInputNames_` is filled. This should save cycles on string comparison.

Test Plan: run it.

Reviewed By: jackm321

Differential Revision: D22104338

fbshipit-source-id: 250463744aa37ed291aebd337e26d573048583ff
2020-06-19 10:05:26 -07:00
Gregory Chanan
96057c0080 Fix missing deprecation warning for Tensor.nonzero(). (#40187)
Summary:
Pull Request resolved: https://github.com/pytorch/pytorch/pull/40187

There were two issues:
1) The hand-written definition included an ambiguous default, which made the deprecated signature not be selected. This didn't match the handwritten torch.nonzero; now they match.
2) A parsing bug for empty argument lists meant the signature wasn't being marked as deprecated.

Test Plan: Imported from OSS

Differential Revision: D22118236

Pulled By: gchanan

fbshipit-source-id: a433ce9069fef28aea97cbd76f2adf5a285abd73
2020-06-19 09:24:48 -07:00
Kimish Patel
ece8ef2fc6 Run canonical graph optimizations in optimize_for_mobile. (#38840)
Summary:
Pull Request resolved: https://github.com/pytorch/pytorch/pull/38840

The JIT graph executor runs some canonical optimizations, such as CSE and dead code
elimination, before constructing the code that the interpreter executes.
Since we do not have the full JIT in the lite interpreter, any such graph optimizations
must happen AOT.
This diff applies those canonical optimizations to the graph.

Test Plan: CI's test_mobile_optimizer.

Reviewed By: dreiss

Differential Revision: D21675855

fbshipit-source-id: 5dd898088ef8250103ccbbb6aa2bbce156a8d61d
2020-06-19 09:19:29 -07:00
Nikita Shulga
a11870b45d Revert D22118971: [android] gradle version update
Test Plan: revert-hammer

Differential Revision:
D22118971 (262ad8e6ab)

Original commit changeset: 566e45e8f6f7

fbshipit-source-id: 74cfec0c978b724d84460a6d0c98f97b389811f7
2020-06-19 08:48:21 -07:00
Edmund Williams Jr
b0324a97f5 _jit_pass_fold_convbn wrapped with fuse_conv_bn_script (#40224)
Summary: Pull Request resolved: https://github.com/pytorch/pytorch/pull/40224

Test Plan: Imported from OSS

Differential Revision: D22117111

Pulled By: edmundw314

fbshipit-source-id: 9252674bd770ba6669d50090849d9f9bc13edaa3
2020-06-19 08:19:40 -07:00
Alexander Mols
b7bfdcbe3e [caffe2/torch] Use logger in jit instantiator
Summary:
Previously the module would log some data using `print()`. This can be
a problem when used in contexts where the process expects to write data to
stdout itself. This diff changes the log statements to use `logger` instead.
This makes it similar to other log statements in the same module.
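The change amounts to swapping print() for a module-level logger, roughly as below. This is a generic sketch, not the actual jit instantiator code; the function name is hypothetical:

```python
import logging

# Module-level logger, as is conventional elsewhere in the codebase.
logger = logging.getLogger(__name__)


def report_instantiation(template_name):
    # Before: print(f"Instantiating {template_name}") wrote directly to
    # stdout, which breaks host processes that need stdout for their own
    # output. Routing through the logging framework avoids that.
    logger.info("Instantiating generated module from %s", template_name)
```

Callers that embed this module can then redirect or silence the output via standard logging configuration instead of capturing stdout.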

Test Plan:
Confirmed no unexpected test behavior showed up when running:

buck test caffe2/test/distributed/nn/api:remote_module_fork

Differential Revision: D22136172

fbshipit-source-id: a3d144eba6c75925ed684981793c84b36eb45a5d
2020-06-19 07:49:15 -07:00
Luca Wehrstedt
2393bab036 [TensorPipe] Update documentation (#40222)
Summary:
Pull Request resolved: https://github.com/pytorch/pytorch/pull/40222

Mention the TensorPipe agent in the RPC docs and give users the information they need to choose which agent to use.
ghstack-source-id: 106225711

Test Plan: Export to GitHub, build locally and try out the docs.

Differential Revision: D22116494

fbshipit-source-id: 30703ba8410c40f64e785f60d71dfd9faa8de4a1
2020-06-19 04:26:49 -07:00
Lu Fang
8315bb2359 Back out "[pytorch][PR] [JIT] Infer NamedTuple type attributes of nn.Modules correctly" (#40270)
Summary:
Pull Request resolved: https://github.com/pytorch/pytorch/pull/40270

Original commit changeset: 1227e243ab94

D22082806 (1e03d603c6) broke the model generation of PyPer models. We trace the namedtuple as input. To unblock the development of the PyPer project, let's revert the diff first.

Sorry about the inconvenience, SplitInfinity
ghstack-source-id: 106217609

Test Plan: buck run dper3/dper3_models/experimental/pytorch/feed:feed_generation_script -- --model_files_dir=/tmp/

Reviewed By: alyssawangqq

Differential Revision: D22132960

fbshipit-source-id: ce9278c8462602a341e231ea890e46f74e743ddf
2020-06-19 02:58:31 -07:00