Commit graph

39098 commits

Zhengxu Chen
e62189ad69 [jit] Better checking for overload function declarations. (#59956)
Summary:
Pull Request resolved: https://github.com/pytorch/pytorch/pull/59956

Issue #50175. Basically, two things need to be checked that are currently missing:
1. Overload declarations should always have a single `pass` statement as the body.
2. There should always be an implementation provided for the declarations, i.e. a def
   without the torch.jit._overload decorator. So in this case we need to check
   whether the function body being compiled has decorated overload declarations ahead of it (a minimal sketch follows below).
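
The pattern being enforced looks roughly like the following (a hedged sketch using the private `torch.jit._overload` decorator; the function names are illustrative, not from the PR):

```
from typing import List
import torch

@torch.jit._overload
def double_it(x: int) -> int:            # overload declaration: body is a single `pass`
    pass

@torch.jit._overload
def double_it(x: List[int]) -> List[int]:
    pass

def double_it(x):                        # the implementation carries no decorator
    if isinstance(x, int):
        return x * 2
    return [v * 2 for v in x]

@torch.jit.script
def caller(x: int) -> int:
    return double_it(x)
```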

Test Plan:
python test/test_jit.py TestScript.test_function_overloads

Imported from OSS

Reviewed By: gmagogsfm

Differential Revision: D29106555

fbshipit-source-id: 2d9d7df2fb51ab6db0e1b726f9644e4cfbf733d6
2021-08-05 14:21:48 -07:00
Will Constable
63fa53d37a Add batched model to torchdeploy examples (#62836)
Summary:
Pull Request resolved: https://github.com/pytorch/pytorch/pull/62836

Used for an upcoming diff that adds support for batching to torchdeploy

Test Plan: Models are used by later diffs, but generation script is verified by CI now and locally.

Reviewed By: gunchu

Differential Revision: D30135938

fbshipit-source-id: 566a32a3ede56833e41712025e9d47191dfc5f39
2021-08-05 14:01:40 -07:00
mattip
c8eda919a4 test, fix sparse * dense exceptions and corner case (#61723)
Summary:
Fixes https://github.com/pytorch/pytorch/issues/59916

This fixes two problems with sparse multiplication:
- 0d-dense * sparse was creating a non-sparse output and failing.
- dense * sparse or sparse * dense is not supported, but it emitted an unhelpful error message:
<details>
<summary> unhelpful error message </summary>
Traceback (most recent call last):
  File "<stdin>", line 1, in <module>
NotImplementedError: Could not run 'aten::_nnz' with arguments from the 'CPU' backend. This could be because the operator doesn't exist for this backend, or was omitted during the selective/custom build process (if using custom build). If you are a Facebook employee using PyTorch on mobile, please visit https://fburl.com/ptmfixes for possible resolutions. 'aten::_nnz' is only available for these backends: [SparseCPU, SparseCUDA, SparseCsrCPU, SparseCsrCUDA, BackendSelect, Python, Named, Conjugate, ADInplaceOrView, AutogradOther, AutogradCPU, AutogradCUDA, AutogradXLA, AutogradXPU, AutogradMLC, AutogradHPU, AutogradNestedTensor, AutogradPrivateUse1, AutogradPrivateUse2, AutogradPrivateUse3, Tracer, UNKNOWN_TENSOR_TYPE_ID, Autocast, Batched, VmapMode].

SparseCPU: registered at aten/src/ATen/RegisterSparseCPU.cpp:961 [kernel]
SparseCUDA: registered at aten/src/ATen/RegisterSparseCUDA.cpp:1092 [kernel]
SparseCsrCPU: registered at aten/src/ATen/RegisterSparseCsrCPU.cpp:202 [kernel]
SparseCsrCUDA: registered at aten/src/ATen/RegisterSparseCsrCUDA.cpp:229 [kernel]
BackendSelect: fallthrough registered at ../aten/src/ATen/core/BackendSelectFallbackKernel.cpp:3 [backend fallback]
Python: registered at ../aten/src/ATen/core/PythonFallbackKernel.cpp:38 [backend fallback]
Named: registered at ../aten/src/ATen/core/NamedRegistrations.cpp:7 [backend fallback]
Conjugate: registered at ../aten/src/ATen/ConjugateFallback.cpp:118 [backend fallback]
ADInplaceOrView: fallthrough registered at ../aten/src/ATen/core/VariableFallbackKernel.cpp:60 [backend fallback]
AutogradOther: registered at ../torch/csrc/autograd/generated/VariableType_2.cpp:11202 [autograd kernel]
AutogradCPU: registered at ../torch/csrc/autograd/generated/VariableType_2.cpp:11202 [autograd kernel]
AutogradCUDA: registered at ../torch/csrc/autograd/generated/VariableType_2.cpp:11202 [autograd kernel]
AutogradXLA: registered at ../torch/csrc/autograd/generated/VariableType_2.cpp:11202 [autograd kernel]
AutogradXPU: registered at ../torch/csrc/autograd/generated/VariableType_2.cpp:11202 [autograd kernel]
AutogradMLC: registered at ../torch/csrc/autograd/generated/VariableType_2.cpp:11202 [autograd kernel]
AutogradHPU: registered at ../torch/csrc/autograd/generated/VariableType_2.cpp:11202 [autograd kernel]
AutogradNestedTensor: registered at ../torch/csrc/autograd/generated/VariableType_2.cpp:11202 [autograd kernel]
AutogradPrivateUse1: registered at ../torch/csrc/autograd/generated/VariableType_2.cpp:11202 [autograd kernel]
AutogradPrivateUse2: registered at ../torch/csrc/autograd/generated/VariableType_2.cpp:11202 [autograd kernel]
AutogradPrivateUse3: registered at ../torch/csrc/autograd/generated/VariableType_2.cpp:11202 [autograd kernel]
Tracer: registered at ../torch/csrc/autograd/generated/TraceType_2.cpp:10254 [kernel]
UNKNOWN_TENSOR_TYPE_ID: fallthrough registered at ../aten/src/ATen/autocast_mode.cpp:446 [backend fallback]
Autocast: fallthrough registered at ../aten/src/ATen/autocast_mode.cpp:285 [backend fallback]
Batched: registered at ../aten/src/ATen/BatchingRegistrations.cpp:1016 [backend fallback]
VmapMode: fallthrough registered at ../aten/src/ATen/VmapModeRegistrations.cpp:33 [backend fallback]
</details>

Also added tests.
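
For reference, a hedged repro of the 0-dim corner case (the values are arbitrary; the behavior after the fix is as described above, not quoted from the PR):

```
import torch

i = torch.tensor([[0, 1], [1, 0]])
v = torch.tensor([2.0, 3.0])
s = torch.sparse_coo_tensor(i, v, (2, 2))

scalar = torch.tensor(4.0)   # 0-dim dense tensor
out = s * scalar             # previously produced a non-sparse output and failed
print(out.is_sparse)         # expected: True, with values scaled by 4
```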

Pull Request resolved: https://github.com/pytorch/pytorch/pull/61723

Reviewed By: ezyang

Differential Revision: D29962639

Pulled By: cpuhrsch

fbshipit-source-id: 5455680ddfa91d5cc9925174d0fd3107c40f5b06
2021-08-05 11:27:12 -07:00
Peter Lin
8d7786ada6 Simplify hardswish ONNX export graph. (#60080)
Summary:
Fixes https://github.com/pytorch/pytorch/issues/58301

Pull Request resolved: https://github.com/pytorch/pytorch/pull/60080

Reviewed By: suo

Differential Revision: D30002939

Pulled By: SplitInfinity

fbshipit-source-id: 8b4ca6f62d51b72e9d86534592e3c82ed6608c9d
2021-08-05 11:15:14 -07:00
Philip Meier
7630f407cc add OpInfo for torch.nn.functional.grid_sample (#62311)
Summary:
Addresses facebookresearch/functorch#78.

Pull Request resolved: https://github.com/pytorch/pytorch/pull/62311

Reviewed By: malfet

Differential Revision: D30013388

Pulled By: zou3519

fbshipit-source-id: 0887ae9935923d928bfeb59054afe1aab954b40b
2021-08-05 10:43:54 -07:00
Kushashwa Ravi Shrimali
5dbcd5638b OpInfo for nn.functional.avg_pool2d (#62455)
Summary:
Please see https://github.com/facebookresearch/functorch/issues/78

cc: mruberry zou3519

Pull Request resolved: https://github.com/pytorch/pytorch/pull/62455

Reviewed By: soulitzer

Differential Revision: D30096146

Pulled By: heitorschueroff

fbshipit-source-id: ef09abee9baa5a9aab403201226d1d9db5af100a
2021-08-05 10:28:52 -07:00
Eddie Yan
878943c64f Preserve memory layout when aten batchnorm is used (#62773)
Summary:
https://github.com/pytorch/pytorch/issues/62594

CC cpuhrsch

Pull Request resolved: https://github.com/pytorch/pytorch/pull/62773

Reviewed By: H-Huang

Differential Revision: D30118658

Pulled By: cpuhrsch

fbshipit-source-id: bce9e92f5f8710c876a33cccbd1625155496ddea
2021-08-05 10:21:44 -07:00
Karen Zhou
d45291613c [pruner] generalize bias hook for conv2d (#62430)
Summary:
Pull Request resolved: https://github.com/pytorch/pytorch/pull/62430

The bias hook is a forward hook that is part of the pruning parametrization; it is attached after the activation reconstruction forward hook, so adding the bias occurs after zeros are reinserted into the pruned activation.

This diff/PR amends the bias hook to work for Conv2d layers in addition to Linear layers. Reshaping the ._bias parameter ensures that it is added along the right dimension of the output.
ghstack-source-id: 135097700
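
As an illustration only (not the pruner's actual code), a forward hook that adds a stored `_bias` has to reshape it for Conv2d outputs so it broadcasts over the channel dimension:

```
import torch

def bias_hook(module, inputs, output):
    bias = module._bias
    if output.dim() == 4:                 # Conv2d output: (N, C, H, W)
        bias = bias.reshape(1, -1, 1, 1)  # broadcast over the channel dimension
    return output + bias

conv = torch.nn.Conv2d(3, 8, 3, bias=False)
conv.register_parameter("_bias", torch.nn.Parameter(torch.zeros(8)))
conv.register_forward_hook(bias_hook)
out = conv(torch.randn(1, 3, 16, 16))
```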

Test Plan:
Added tests for `Conv2dB()`, a model with Conv2d layers that have `bias=True`.

`buck test mode/dev-nosan //caffe2/test:ao -- TestBasePruner`

https://pxl.cl/1MfgL

Reviewed By: jerryzh168

Differential Revision: D29979571

fbshipit-source-id: c1a7e9fabc8b3c9d0050bd6b6c6a631ddfdf2a68
2021-08-05 09:27:17 -07:00
Vasiliy Kuznetsov
b524a1101a ns for fx: add ref_node_target_type (#62685)
Summary:
Pull Request resolved: https://github.com/pytorch/pytorch/pull/62685

Adds a `ref_node_target_type` field to hold the string type
of the base node. This is needed because in some cases
the previous node does not match ref_node (if we have observers,
or if we are logging inputs), and it is useful to know the type
of ref_node.

Test Plan:
```
python test/test_quantization.py TestFXNumericSuiteCoreAPIs
```

Imported from OSS

Reviewed By: hx89

Differential Revision: D30082947

fbshipit-source-id: 98ded7b25a5d8d5ea820e0ef62c3799b65c3fc77
2021-08-05 09:26:10 -07:00
Jane Xu
b96acb7591 Allow disabled tests to be re-enabled with IGNORE_DISABLED_ISSUES (#62686)
Summary:
Part 1 of fixing https://github.com/pytorch/pytorch/issues/62359

Pull Request resolved: https://github.com/pytorch/pytorch/pull/62686

Test Plan:
1. Check out this PR and run `python setup.py install`.
2. The test we will be running requires CUDA. If you don't have CUDA, you can try this on another device or simply comment out the skipIf statement before the `test_jit_cuda_extension` test in `test_cpp_extensions_jit.py`
3. Run: `IN_CI=1 python test/run_test.py -i test_cpp_extensions_jit -- -k test_jit_cuda_extension` and notice that it should skip. If it doesn't skip, edit test/.pytorch-disabled-tests.json: modify the platforms list of the first issue (61655) to include whatever platform you are on (macos or linux), and just run `python test/test_cpp_extensions_jit.py -v -k test_jit_cuda_extension --import-disabled-tests` to make sure it skips.
4. Now `export PYTORCH_IGNORE_DISABLED_ISSUES=61655` or `export PYTORCH_IGNORE_DISABLED_ISSUES=34952,61655`.
5. `rm test/.pytorch-*` to clear the cached files.
6. Run the same command as in step 3 and note that it SHOULDN'T skip. It should run.

Reviewed By: walterddr, samestep

Differential Revision: D30108773

Pulled By: janeyx99

fbshipit-source-id: dbf015a266f57577dc9283b0cdff720083b5c0cb
2021-08-05 09:05:40 -07:00
Nikita Shulga
24a2681358 Revert D30094460: [profiler] Re-enable test on Windows
Test Plan: revert-hammer

Differential Revision:
D30094460 (5a1017be97)

Original commit changeset: 80521f6bc136

fbshipit-source-id: 7c01493ad078be7df1bbb81c08be6364d6ffaa4d
2021-08-05 08:34:15 -07:00
Pavel Belevich
0c8ed042f2 Revert D30095246: [pytorch][PR] Enable ncclAvg for reductions
Test Plan: revert-hammer

Differential Revision:
D30095246 (a749180e4e)

Original commit changeset: d3a3475345fa

fbshipit-source-id: 34b5100b925859461296cae5a717a70e5eca6af6
2021-08-05 07:56:40 -07:00
cpatru
6d896cb545 Update faq.rst so OOM section mentions checkpoint (#62709)
Summary:
This FAQ has a section for CUDA OOMs with lots of don'ts, which limits modeling choices. Deep nets can blow up memory during training due to output caching.
It's a known problem with a known solution: trade off compute for memory via checkpointing.
The FAQ should mention it.
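
For context, checkpointing in PyTorch looks roughly like this (a minimal sketch; module sizes are arbitrary):

```
import torch
from torch.utils.checkpoint import checkpoint

block = torch.nn.Sequential(
    torch.nn.Linear(1024, 1024), torch.nn.ReLU(),
    torch.nn.Linear(1024, 1024), torch.nn.ReLU(),
)
x = torch.randn(8, 1024, requires_grad=True)

y = checkpoint(block, x)   # activations inside `block` are recomputed on backward
y.sum().backward()
```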

Pull Request resolved: https://github.com/pytorch/pytorch/pull/62709

Reviewed By: nairbv

Differential Revision: D30103326

Pulled By: ezyang

fbshipit-source-id: 3a8b465a7fbe19aae88f83cc50fe82ebafcb56c9
2021-08-05 07:40:08 -07:00
Edward Yang
b84885cc8b Add support for boxed functors (#62658)
Summary:
Pull Request resolved: https://github.com/pytorch/pytorch/pull/62658

Boxed functors, like their unboxed brethren, support operators which
aren't just a function pointer, but a function pointer with some
associated global state that is allocated at registration time.

The use case I have in mind with this implementation is "dispatcher
API from Python", where the extra state kernel registrations need is
the PyObject callable we will invoke to do the actual invocation.
See next PR in this stack.

Signed-off-by: Edward Z. Yang <ezyang@fb.com>

Test Plan: Imported from OSS

Reviewed By: bhosmer

Differential Revision: D30074925

Pulled By: ezyang

fbshipit-source-id: ee040edbbec1e607486d338d1ea78bb5c6b2ece9
2021-08-05 07:26:09 -07:00
Alban Desmaison
e6a227465b Add serialization support for slots and subclass getstate/setstate (#62745)
Summary: Pull Request resolved: https://github.com/pytorch/pytorch/pull/62745

Test Plan: Imported from OSS

Reviewed By: ezyang

Differential Revision: D30113112

Pulled By: albanD

fbshipit-source-id: 6c562d0c060fb0280e5e3d432bb42fb833e6d500
2021-08-05 06:49:44 -07:00
Alban Desmaison
056b147e10 clean torch_function handling in serialization (#62744)
Summary:
Pull Request resolved: https://github.com/pytorch/pytorch/pull/62744

The `Tensor._reduce_ex_internal` function can only be called via the `Tensor.__reduce_ex__` function,
and that second function already properly handles `__torch_function__` overrides. So there is no need to handle them again in `Tensor._reduce_ex_internal`.

This PR also updates `Tensor.__reduce_ex__` to use the specialized unary API for `__torch_function__` that makes it nicer to read.
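
A small sketch of why one entry point suffices: pickling always flows through `Tensor.__reduce_ex__` (the subclass here is made up; the expected round-trip type relies on the subclass serialization support from the adjacent commit):

```
import io, pickle, torch

class MyTensor(torch.Tensor):
    pass

t = torch.randn(2).as_subclass(MyTensor)
buf = io.BytesIO()
pickle.dump(t, buf)                     # dispatches through Tensor.__reduce_ex__ once
restored = pickle.loads(buf.getvalue())
print(type(restored))                   # expected: MyTensor
```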

Test Plan: Imported from OSS

Reviewed By: H-Huang

Differential Revision: D30113113

Pulled By: albanD

fbshipit-source-id: c94f5d2597ee3afe799d9de991f75615c3c172d6
2021-08-05 06:48:26 -07:00
Sean Lawlor
ee82e7a14e [DDP Communication Hook] Renaming C++ calls to match python API closer (#62735)
Summary:
Pull Request resolved: https://github.com/pytorch/pytorch/pull/62735

Renamed the following
1. getTensor -> getBuffer
2. getTensorRef -> getBufferRef
3. setTensor -> setBuffer
and all associated private variables as well

Reviewed By: SciPioneer

Differential Revision: D30069124

fbshipit-source-id: fa8f1f8a7f3255e6242973bc37b3f7b2731af55d
2021-08-05 05:06:29 -07:00
Jiewen Tan
64b3ab6407 Improve IMethod::getArgumentNames to deal with empty argument names list (#62782)
Summary:
Pull Request resolved: https://github.com/pytorch/pytorch/pull/62782

This diff improved IMethod::getArgumentNames to deal with an empty argument names list.

Test Plan:
buck test mode/dev caffe2/caffe2/fb/predictor:pytorch_predictor_test -- PyTorchDeployPredictor.GetEmptyArgumentNamesValidationMode
buck test mode/dev caffe2/caffe2/fb/predictor:pytorch_predictor_test -- PyTorchDeployPredictor.GetEmptyArgumentNamesRealMode

Reviewed By: wconstab

Differential Revision: D30038175

fbshipit-source-id: 46f08dda94187160b4d6ee87600d1b46fe934222
2021-08-05 01:32:00 -07:00
Dhruv Matani
019048b3b6 [PyTorch Edge] Simplify Exception Handling (Take-2) (module.cpp) (#62634)
Summary:
Pull Request resolved: https://github.com/pytorch/pytorch/pull/62634

Apply the same set of changes as in D27688352 (d728491fc1) to `module.cpp` as instructed by xcheng16.

Basically, this simplifies exception handling and allows propagation of the original message undisturbed to the caller so that we can figure out the lineage of the exception in crash tasks such as t96812652
ghstack-source-id: 134877012

Test Plan: Build/Sandcastle

Reviewed By: raziel

Differential Revision: D30038867

fbshipit-source-id: 8dfd415c510bcd0ab49814f4eb559ec6fc8f72e5
2021-08-04 23:25:30 -07:00
Jiewen Tan
4b68801c69 Enable test_api IMethodTest in OSS (#62521)
Summary:
Pull Request resolved: https://github.com/pytorch/pytorch/pull/62521

This diff did the following few things to enable the tests:
1. Exposed IMethod as TORCH_API.
2. Linked torch_deploy to test_api if USE_DEPLOY == 1.

Test Plan:
./build/bin/test_api --gtest_filter=IMethodTest.*

To be noted, one needs to run `python torch/csrc/deploy/example/generate_examples.py` before the above command.

Reviewed By: ezyang

Differential Revision: D30055372

Pulled By: alanwaketan

fbshipit-source-id: 50eb3689cf84ed0f48be58cd109afcf61ecca508
2021-08-04 21:14:20 -07:00
Michael Carilli
a749180e4e Enable ncclAvg for reductions (#62303)
Summary:
[ncclAvg](https://docs.nvidia.com/deeplearning/nccl/user-guide/docs/api/types.html?highlight=ncclavg#c.ncclAvg) is a new `ncclRedOp_t` that fuses a div-by-world-size with ncclAllReduce, Reduce, or ReduceScatter. This PR adds support.

This PR and https://github.com/pytorch/pytorch/pull/62140 lay the foundation for DDP to allreduce and average grad tensors in place with a single NCCL call, without additional memory passes to flatten, average, or unflatten. I'll write the necessary DDP changes once this PR and https://github.com/pytorch/pytorch/pull/62140 land.
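
For reference, the pattern the fused op replaces looks roughly like this today (a hedged sketch, not the DDP internals):

```
import torch
import torch.distributed as dist

def allreduce_average(grad: torch.Tensor) -> None:
    dist.all_reduce(grad, op=dist.ReduceOp.SUM)   # one NCCL call
    grad.div_(dist.get_world_size())              # extra pass ncclAvg would fold in
```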

Pull Request resolved: https://github.com/pytorch/pytorch/pull/62303

Reviewed By: soulitzer

Differential Revision: D30095246

Pulled By: rohan-varma

fbshipit-source-id: d3a3475345fafb0ab265c11d36db74d7c5613a0a
2021-08-04 19:43:50 -07:00
Zeina Migeed
4bd54cebe0 Refinement types and unification for symbolic shape inference (#61776)
Summary: Pull Request resolved: https://github.com/pytorch/pytorch/pull/61776

Test Plan: Imported from OSS

Reviewed By: iramazanli

Differential Revision: D29772537

Pulled By: migeed-z

fbshipit-source-id: 3555d43152a213087c64faa326432f1628eb3bb1
2021-08-04 17:34:29 -07:00
Hao Lu
a27a0b1ef5 [SR] Disable NNC temporarily (#62746)
Summary:
Pull Request resolved: https://github.com/pytorch/pytorch/pull/62746

Disable NNC temporarily until a code cache is implemented to reduce the compilation time.

Reviewed By: ajyu

Differential Revision: D30080326

fbshipit-source-id: ef8bb3ac3a6947614f4a03a3d52774b6933d3ea8
2021-08-04 17:33:07 -07:00
Nikita Shulga
afc1d1b3d6 Fix lint errors in cuda_ReportMemoryUsage tests (#62778)
Summary:
Introduced in 8bbcef5096

Pull Request resolved: https://github.com/pytorch/pytorch/pull/62778

Reviewed By: chaekit, driazati

Differential Revision: D30120245

Pulled By: malfet

fbshipit-source-id: 2cb5755b870182dd147a6685c74f7defcc10030a
2021-08-04 17:26:23 -07:00
Matti Picus
658540f43f remove deprecated is_deterministic and set_deterministic (#62158)
Summary:
Fixes https://github.com/pytorch/pytorch/issues/58096

Pull Request resolved: https://github.com/pytorch/pytorch/pull/62158

Reviewed By: mruberry

Differential Revision: D29909634

Pulled By: ezyang

fbshipit-source-id: ccffbcf8f378e39bd2c7fbeace7ed1cbbe003981
2021-08-04 16:45:23 -07:00
Kushashwa Ravi Shrimali
a705b8f08f OpInfo for nn.functional.relu (#62076)
Summary:
See https://github.com/facebookresearch/functorch/issues/78 and https://github.com/pytorch/pytorch/issues/54261.

cc: mruberry

Pull Request resolved: https://github.com/pytorch/pytorch/pull/62076

Reviewed By: soulitzer

Differential Revision: D30013262

Pulled By: zou3519

fbshipit-source-id: 7df5e930d1588146e09cf58c53c8860392da7348
2021-08-04 15:50:18 -07:00
Yukio Siraichi
123be6b261 Port addcdiv to structured kernels. (#62319)
Summary:
Pull Request resolved: https://github.com/pytorch/pytorch/pull/62319

Tracking issue: #55070

Test Plan: Imported from OSS

Reviewed By: ejguan

Differential Revision: D29961996

Pulled By: bdhirsh

fbshipit-source-id: d38141476b41dbfd4bf029d631f81a32aff82a5e
2021-08-04 15:35:25 -07:00
Yukio Siraichi
693b0af996 Port addcmul kernels to structured kernels. (#62318)
Summary:
Pull Request resolved: https://github.com/pytorch/pytorch/pull/62318

Tracking issue: #55070

This PR introduces the method `TensorIteratorBase::build_ternary_op` for building a
`TensorIteratorBase` for 3-input, 1-output kernels.
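
`addcmul` itself is the canonical 3-input, 1-output op; a quick functional check of what the ported kernel computes:

```
import torch

a, t1, t2 = torch.randn(3), torch.randn(3), torch.randn(3)
out = torch.addcmul(a, t1, t2, value=0.5)        # a + 0.5 * t1 * t2
assert torch.allclose(out, a + 0.5 * t1 * t2)
```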

Test Plan: Imported from OSS

Reviewed By: ejguan

Differential Revision: D29961997

Pulled By: bdhirsh

fbshipit-source-id: 2208d24823bad6e74c8d508f363716d8125b8619
2021-08-04 15:34:01 -07:00
Han Guangyun
8bbcef5096 Report more information for memory profiling (#61282)
Summary:
Report the pointed-to memory size, total allocated memory, and total reserved size all in one report.

`ptr` and `alloc_size` will be used to associate the allocation with the op trace.
`allocated_size` and `reserved_size` will be used for the memory trace.
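
A minimal way to exercise memory reporting from the public profiler API (the field names above are internal to the report; this snippet only shows the user-facing path):

```
import torch
from torch.profiler import profile, ProfilerActivity

with profile(activities=[ProfilerActivity.CPU], profile_memory=True) as prof:
    x = torch.randn(1024, 1024)
    y = x @ x

print(prof.key_averages().table(sort_by="self_cpu_memory_usage", row_limit=5))
```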

Pull Request resolved: https://github.com/pytorch/pytorch/pull/61282

Reviewed By: ejguan

Differential Revision: D29796282

Pulled By: chaekit

fbshipit-source-id: 5314c867632d3af1fa9a3811b35eaa5e931a5d87
2021-08-04 15:03:14 -07:00
CodemodService FBSourceClangFormatLinterBot
0aee9c0ef8 [AutoAccept][Codemod][FBSourceClangFormatLinter] Daily arc lint --take CLANGFORMAT
Reviewed By: zertosh

Differential Revision: D30097148

fbshipit-source-id: 514c22ea52f048bb048a53fa6b5ea57f3ac12250
2021-08-04 14:58:29 -07:00
Will Constable
aed01a991d Add hasattr to torch::deploy interface and hasMethod to PredictorContainer (#62669)
Summary:
Pull Request resolved: https://github.com/pytorch/pytorch/pull/62669

Useful to avoid having to implement null checking on the application side.

Test Plan: Add unit tests

Reviewed By: suo, houseroad

Differential Revision: D30074406

fbshipit-source-id: 881aec735953b43cb24786c1a2d79e8e724928b8
2021-08-04 14:48:34 -07:00
Qing Hu
281737ea6f [DDP Communication Hook] Rename 4 Methods of GradBucket Class
Summary:
1. getPerParameterTensors -> getGradients
2. getModelParamsForBucket -> getParameters
3. isTheLastBucketToAllreduce -> IsLast

Test Plan:
Test results for "buck test mode/dev-nosan caffe2/test/distributed:c10d":
https://pxl.cl/1Mrq8

Test results for "buck test mode/dev-nosan caffe2/test/distributed:distributed_nccl_fork":
https://pxl.cl/1MrtP

Reviewed By: SciPioneer

Differential Revision: D30076436

fbshipit-source-id: 0bd1e410186a318ea6328f4c1e830ea5632f8a47
2021-08-04 14:37:23 -07:00
Rong Rong (AI Infra)
7f1b672b7a Revert D29952381: [Static Runtime] Ensure that unittests only use out variants or native ops
Test Plan: revert-hammer

Differential Revision:
D29952381 (8737e17af2)

Original commit changeset: e60e70b80ccf

fbshipit-source-id: 59dc2f920b7ceaf94ba8f5f36024e7cc710f6645
2021-08-04 14:25:11 -07:00
Eli Uriegas
491d89da1b .github: Fix --no-build-suffix (#62739)
Summary:
Pull Request resolved: https://github.com/pytorch/pytorch/pull/62739

The original flag didn't initially work correctly, so this change makes it actually output the right thing.

Signed-off-by: Eli Uriegas <eliuriegas@fb.com>

Test Plan: Imported from OSS

Reviewed By: janeyx99

Differential Revision: D30107694

Pulled By: seemethere

fbshipit-source-id: 5ff28d6820b9cf7145dbb617b86a941bf7686b5c
2021-08-04 14:19:38 -07:00
Kyle Matoba
de94034328 Fixes #62636 (#62670)
Summary:
Fixes https://github.com/pytorch/pytorch/issues/62636.

Pull Request resolved: https://github.com/pytorch/pytorch/pull/62670

Reviewed By: ezyang

Differential Revision: D30102179

Pulled By: soulitzer

fbshipit-source-id: 38480463ef354f2c12ed83e6678aed26b0b96efe
2021-08-04 13:58:21 -07:00
Nikita Vedeneev
8e35df0bf3 det_backward: return svd path for double backward (so that all ci tests pass) (#62570)
Summary:
Potentially fixes https://github.com/pytorch/pytorch/issues/62327 and fixes https://github.com/pytorch/pytorch/issues/62328.
This PR switches the double backward of det from eig to svd. The latter is slower but should be more stable.
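
A double-backward check of the kind this change targets (not the actual CI test):

```
import torch

a = torch.randn(3, 3, dtype=torch.double, requires_grad=True)
torch.autograd.gradgradcheck(torch.linalg.det, (a,))
```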

CC anjali411

Pull Request resolved: https://github.com/pytorch/pytorch/pull/62570

Reviewed By: pbelevich

Differential Revision: D30072876

Pulled By: anjali411

fbshipit-source-id: c91b507dbfd6a3ec47dc6d0b0dcfa5f8c8228c30
2021-08-04 13:43:51 -07:00
kshitij12345
6f0abba04c [fix] manual_seed{_all}: mem leak (#62534)
Summary:
Fixes: https://github.com/pytorch/pytorch/issues/55768

Pull Request resolved: https://github.com/pytorch/pytorch/pull/62534

Reviewed By: nairbv

Differential Revision: D30103294

Pulled By: ezyang

fbshipit-source-id: d871ae869314dfd2d27544a51107ab752abfe452
2021-08-04 13:03:12 -07:00
aeioaeu
89f898ebb5 Fix wrong command in README.md (#62472)
Summary:
If the range is `[15^,16^)`, 16.10 is not included.
https://github.com/Microsoft/vswhere/wiki/Examples

Fixes #{issue number}

Pull Request resolved: https://github.com/pytorch/pytorch/pull/62472

Reviewed By: nairbv

Differential Revision: D30103199

Pulled By: ezyang

fbshipit-source-id: 82085627ca53cd5a4e666848d27d4ab062de8352
2021-08-04 12:55:18 -07:00
Karol Sputo
b454275f47 Support eager mode use of torch.jit.isinstance with multiple types (#60465)
Summary:
Fixes https://github.com/pytorch/pytorch/issues/60095
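
Roughly what "multiple types" means here (a hedged sketch; the PR's actual test case isn't shown in this log):

```
from typing import List, Tuple
import torch

def describe(x):
    # usable both eagerly and when scripted
    if torch.jit.isinstance(x, (List[int], Tuple[int, int])):
        return "ints"
    return "other"

print(describe([1, 2, 3]))   # "ints"
print(describe("hello"))     # "other"
```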

Pull Request resolved: https://github.com/pytorch/pytorch/pull/60465

Reviewed By: soulitzer

Differential Revision: D30093110

Pulled By: ansley

fbshipit-source-id: ee9c654bdb031e9eff4837f9f1d489c81e47cc06
2021-08-04 12:45:24 -07:00
Ilia Cherniavskii
5a1017be97 [profiler] Re-enable test on Windows (#62703)
Summary:
Pull Request resolved: https://github.com/pytorch/pytorch/pull/62703

Re-enable test on Windows

Test Plan: CI

Reviewed By: ezyang

Differential Revision: D30094460

Pulled By: ilia-cher

fbshipit-source-id: 80521f6bc1365d2c252f20b5d0485fc062c8d9c3
2021-08-04 12:32:24 -07:00
Don Jang
8737e17af2 [Static Runtime] Ensure that unittests only use out variants or native ops (#62335)
Summary:
Pull Request resolved: https://github.com/pytorch/pytorch/pull/62335

This change ensures that unittests only use out variants or native ops.

- Our unittests currently assume that a graph fed to the static runtime correctly replaces interpreter ops with their corresponding out variants / native ops, but this is not actually checked by the unittests. This change ensures that it is.

- We relied on manual inspection of log messages to see if an out variant is used for a specific workload even for unittesting. This change frees us from doing that.

- `aten::add` is excluded from this check since it's only enabled for an internal workload. Also, some unittests are excluded by using `expect_interpreter_op = true` since they are written to use interpreter ops by design.

Test Plan: Ran `buck run //caffe2/benchmarks/static_runtime:static_runtime_cpptest` successfully.

Reviewed By: mikeiovine, hlu1

Differential Revision: D29952381

fbshipit-source-id: e60e70b80ccf45e91c6654b4ad53f92ffd5ab702
2021-08-04 11:37:15 -07:00
Rong Rong (AI Infra)
de77c6a0eb [BE] fix bc check (#62687)
Summary:
A bug was discovered in https://github.com/pytorch/pytorch/issues/62434: for some reason, comparing only the schema name didn't match the allow_list item. So:
1. remove the duplicate regex compile
2. use the full schema string instead of just the name

Pull Request resolved: https://github.com/pytorch/pytorch/pull/62687

Reviewed By: ezyang

Differential Revision: D30102437

Pulled By: walterddr

fbshipit-source-id: 541b2ed77948f24daebb08623cadabb034a241e0
2021-08-04 11:00:22 -07:00
Jane Xu
0a66416767 Rename master to main for test-infra references (#62728)
Summary:
Reacting to the main->master switch in test-infra

Pull Request resolved: https://github.com/pytorch/pytorch/pull/62728

Reviewed By: samestep

Differential Revision: D30104777

Pulled By: janeyx99

fbshipit-source-id: a7af7dfc69fd6e02c30ad6c15808a5b32a68c587
2021-08-04 10:45:47 -07:00
Facebook Community Bot
90ba71f841 Automated submodule update: FBGEMM (#62688)
Summary:
This is an automated pull request to update the first-party submodule for [pytorch/FBGEMM](https://github.com/pytorch/FBGEMM).

New submodule commit: 10ec0d3388

Pull Request resolved: https://github.com/pytorch/pytorch/pull/62688

Test Plan: Ensure that CI jobs succeed on GitHub before landing.

Reviewed By: dskhudia

Differential Revision: D30088109

fbshipit-source-id: da8a1e6232e489eac0384faadb71c2dfac5927f7
2021-08-04 10:40:50 -07:00
Jagadish Krishnamoorthy
8bcf01631a [ROCm] update magma (#62502)
Summary:
Update magma to point to the magma_ctrl_launch_bounds branch.
When the upstream magma branch is used, cholesky tests in test_ops.py and test_linalg.py
fail due to "Intel MKL ERROR: Parameter 4 was incorrect on entry to DPOTRF."
Suspect commit: [35325212b15c5baadd7493d61b19b2db2635cb68](35325212b1) in magma master.

Signed-off-by: Jagadish Krishnamoorthy <jagdish.krishna@gmail.com>

Pull Request resolved: https://github.com/pytorch/pytorch/pull/62502

Reviewed By: malfet

Differential Revision: D30089171

Pulled By: seemethere

fbshipit-source-id: b07234ce66d48e3af113640995f923ee586b3cd9
2021-08-04 10:19:55 -07:00
Rong Rong (AI Infra)
dfdc3069e7 Revert D30072994: [pytorch][PR] [6/n Update test rpc path
Test Plan: revert-hammer

Differential Revision:
D30072994 (ad4e1f1132)

Original commit changeset: 3217e764bd85

fbshipit-source-id: cf89df78a4e04ef03b04ec3c253c5cbb1a1f5f63
2021-08-04 10:14:31 -07:00
Sean Lawlor
34c9f5a8da [DDP Communication Hook] Update get_tensor and set_tensor to be cleaner naming conventions (buffer() and set_buffer()) (#62662)
Summary:
Pull Request resolved: https://github.com/pytorch/pytorch/pull/62662

Replaced the methods get_tensor() and set_tensor(.) in the Python-exposed API from the C++ logic with buffer() and set_buffer(.) to provide a cleaner interface.

Reviewed By: SciPioneer

Differential Revision: D30012869

fbshipit-source-id: bd8efab583dd89c96f9aeb3dd48a12073f0b1482
2021-08-04 09:27:31 -07:00
Kevin Tse
4b47ea9446 adding a skip for ROCm for a flaky test (#62664)
Summary:
Pull Request resolved: https://github.com/pytorch/pytorch/pull/62664

Skipping a test for ROCm because of issue #62602

Test Plan: Imported from OSS

Reviewed By: soulitzer

Differential Revision: D30079534

Pulled By: NivekT

fbshipit-source-id: a9cf35e5d3a8d218edc9c5a704d1f9599d2f38a6
2021-08-04 07:29:06 -07:00
Nikita Shulga
d1c85d2c06 Move ASAN tests to clang-7 (#62663)
Summary:
This should avoid following false positives:
```
[ RUN      ] ProtoTest.Basic
/var/lib/jenkins/workspace/build/third_party/onnx/onnx/onnx_onnx_torch-ml.pb.h:7060:15: runtime error: member call on address 0x7fffffffdd80 which does not point to an object of type 'google::protobuf::MessageLite'
0x7fffffffdd80: note: object is of type 'onnx_torch::ModelProto'
 00 00 00 00  b0 b9 05 ef ff 7f 00 00  00 00 00 00 00 00 00 00  01 00 00 00 00 00 00 00  00 00 00 00
              ^~~~~~~~~~~~~~~~~~~~~~~
              vptr for 'onnx_torch::ModelProto'
 UndefinedBehaviorSanitizer: undefined-behavior /var/lib/jenkins/workspace/build/third_party/onnx/onnx/onnx_onnx_torch-ml.pb.h:7060:15 in
```

Pull Request resolved: https://github.com/pytorch/pytorch/pull/62663

Reviewed By: tktrungna

Differential Revision: D30076315

Pulled By: malfet

fbshipit-source-id: 7bfc2c4b417307195e3c3379e4874eaceb4f3134
2021-08-04 06:26:03 -07:00
Ilia Cherniavskii
773a8eede4 [profiler][refactor] Refactor the usage of legacy profiler implementation (#61931)
Summary:
Pull Request resolved: https://github.com/pytorch/pytorch/pull/61931

This PR consolidates the profiling code around a new C++ implementation
(profiler_kineto.h/cpp) and uses it unconditionally from
torch.autograd.profiler/torch.profiler:
1. Always use profiler_kineto.h/cpp as the C++ implementation
2. Simplify profiler.py to remove unneeded parts depending on legacy
impl
3. Move some of the legacy logic into profiler_legacy.py (to be fully
deleted later)

Test Plan:
USE_KINETO=1 USE_CUDA=1 USE_MKLDNN=1 BLAS=MKL BUILD_BINARY=1 python setup.py develop install --cmake
python test/test_profiler.py -v
USE_KINETO=0 USE_CUDA=1 USE_MKLDNN=1 BLAS=MKL BUILD_BINARY=1 python setup.py develop install --cmake
python test/test_profiler.py -v

Imported from OSS

Reviewed By: gdankel

Differential Revision: D29801599

fbshipit-source-id: 9794d29f2af38dddbcd90dbce4481fc8575fa29e
2021-08-03 18:51:29 -07:00