Commit graph

47509 commits

Author SHA1 Message Date
PyTorch MergeBot
b9bb52d97b Revert "Put symint overloads on a different name"
This reverts commit 213a8fc992.

Reverted https://github.com/pytorch/pytorch/pull/79281 on behalf of https://github.com/bigfootjon due to Diff reverted internally
2022-06-15 17:15:21 +00:00
dzdang
e20c6a89f8 [quant][core][improvement] Added warnings to quantized dynamic conv and linear ops when reduce_range=true
Summary:
Previously, the backend code silently ignored reduce_range=true when
using the qnnpack backend (which does not require a reduction in range).
We evaluated two options: 1) respect the reduction in range to conform with
other backends (e.g., fbgemm) even though qnnpack supports the full
range, and emit a warning letting the user know that reduce_range
should be set to false for the qnnpack backend; or 2) emit a warning letting the user know that the
reduce_range=true setting is being ignored.

Option 1 would halve the range, which could have negative
implications for accuracy and would be a bc-breaking change. Option 2 is also not ideal because it ignores the user's
reduce_range=true setting when using the qnnpack backend with dynamic conv and
linear quantized ops. We decided to go with option 2 as it is not
bc-breaking.
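The behavior of option 2 can be sketched in a few lines of plain Python (the function name, signature, and message here are illustrative, not the actual PyTorch internals):

```python
import warnings

def dynamic_linear_qnnpack(input_range, reduce_range):
    """Sketch of option 2: qnnpack supports the full range, so a
    reduce_range=True request is warned about and ignored rather
    than honored (which would halve the range)."""
    if reduce_range:
        warnings.warn(
            "reduce_range=True is not needed for the qnnpack backend and "
            "is being ignored; please set reduce_range=False."
        )
    # The qnnpack path always uses the full range (no halving).
    return input_range
```

Note that the returned range is identical regardless of the flag; only the warning differs.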

Fixes #68278

Test plan:

Pull Request resolved: https://github.com/pytorch/pytorch/pull/79273

Approved by: https://github.com/jerryzh168, https://github.com/vkuzo
2022-06-15 17:14:57 +00:00
Nikita Shulga
09df27fe45 Revert "Revert "[distributed] Handle object collectives and NCCL. (#79034)""
This reverts commit 279634f384.
2022-06-15 10:04:37 -07:00
Nikita Shulga
d79f99c4b4 Revert "Revert "[ci] convert empty s3 artifacts from error to warning""
This reverts commit 5de9f42486.
2022-06-15 10:04:23 -07:00
Nikita Shulga
a083199d2e Revert "Revert "[ci] remove remaining RDS dependency""
This reverts commit 21e32d5a0b.
2022-06-15 10:04:13 -07:00
Jane Xu
f2f4cdc9e5 [bc test] pull nightly from before the base commit (#79570)
Fixes #79146

Test plan:
https://github.com/pytorch/pytorch/runs/6902235940?check_suite_focus=true passes and installs yesterday's nightly

new test also passes: https://github.com/pytorch/pytorch/runs/6903723789?check_suite_focus=true#step:9:640
Pull Request resolved: https://github.com/pytorch/pytorch/pull/79570
Approved by: https://github.com/malfet, https://github.com/albanD
2022-06-15 16:56:57 +00:00
Michael Suo
bc82a5f79c [ci] turn sccache stats error into warning
This unbreaks CI.

Pull Request resolved: https://github.com/pytorch/pytorch/pull/79443

Approved by: https://github.com/janeyx99
2022-06-15 16:30:13 +00:00
PyTorch MergeBot
21e32d5a0b Revert "[ci] remove remaining RDS dependency"
This reverts commit 964d505958.

Reverted https://github.com/pytorch/pytorch/pull/79370 on behalf of https://github.com/facebook-github-bot due to Diff reverted internally
2022-06-15 16:29:46 +00:00
PyTorch MergeBot
5de9f42486 Revert "[ci] convert empty s3 artifacts from error to warning"
This reverts commit d42d5fe778.

Reverted https://github.com/pytorch/pytorch/pull/79397 on behalf of https://github.com/facebook-github-bot due to Diff reverted internally
2022-06-15 16:26:35 +00:00
PyTorch MergeBot
2579b3ed77 Revert "[ci] turn sccache stats error into warning"
This reverts commit c31398653c.

Reverted https://github.com/pytorch/pytorch/pull/79443 on behalf of https://github.com/facebook-github-bot due to Diff reverted internally
2022-06-15 16:23:39 +00:00
PyTorch MergeBot
279634f384 Revert "[distributed] Handle object collectives and NCCL. (#79034)"
This reverts commit 4ebb326b75.

Reverted https://github.com/pytorch/pytorch/pull/79034 on behalf of https://github.com/facebook-github-bot due to Diff reverted internally
2022-06-15 16:16:21 +00:00
zengk95
dcf381e982 Fix BC Test for SymInt (#79612)
see title
```
2022-06-15T15:20:12.5183743Z The PR is introducing backward incompatible changes to the operator library. Please contact PyTorch team to confirm whether this change is wanted or not.
2022-06-15T15:20:12.5183765Z
2022-06-15T15:20:12.5183880Z Broken ops: [
2022-06-15T15:20:12.5184275Z 	aten::empty.SymInt(SymInt[] size, *, int? dtype=None, int? layout=None, Device? device=None, bool? pin_memory=None, int? memory_format=None) -> Tensor
2022-06-15T15:20:12.5184342Z ]
2022-06-15T15:20:12.6303306Z + cleanup
2022-06-15T15:20:12.6303395Z + retcode=1
2022-06-15T15:20:12.6303463Z + set +x
2022-06-15T15:20:12.6345211Z ##[error]Process completed with exit code 1.
2022-06-15T15:20:12.6377724Z Prepare all required actions
2022-06-15T15:20:12.6377844Z Getting action download info

```
Pull Request resolved: https://github.com/pytorch/pytorch/pull/79612
Approved by: https://github.com/malfet
2022-06-15 15:54:40 +00:00
swang392
6015987dc3 Added edge case checking in isGreen (#79565)
Relates to #76700

**Overview**: One edge case not accounted for in the original logic of `isGreen` was commits with no workflow checks. Similarly, if any of the required checks are not present (e.g., if all of the pull checks are skipped), the commit should not be promotable. A commit should only be promotable if there is at least one workflow check from each required group present (i.e., none of them are skipped).

**Test Plan:** Verify that commits on the HUD with no workflow checks are not considered promotable. Added a test case with no workflows in `test_print_latest_commits.py`
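The rule described above can be sketched as follows (the group names and the prefix-matching logic are hypothetical simplifications of the real `isGreen`):

```python
# Illustrative required groups; the real list lives in the promotion script.
REQUIRED_GROUPS = ["pull", "trunk", "lint"]

def is_green(check_names):
    """check_names: names of workflow checks that actually ran (not skipped).
    A commit is promotable only if every required group contributed at
    least one check; a commit with no checks at all is never promotable."""
    for group in REQUIRED_GROUPS:
        if not any(name.startswith(group) for name in check_names):
            return False  # an entire required group was skipped or missing
    return True
```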
Pull Request resolved: https://github.com/pytorch/pytorch/pull/79565
Approved by: https://github.com/seemethere
2022-06-15 15:49:16 +00:00
Jane Xu
d2fbfe7fce [ONNX] subscribe onnx to our custom test infra (#79546)
Remove as many references to unittest as can easily be done, in favor of our custom infra.

Left a todo where I could not easily replace unittest.main with run_tests()
Pull Request resolved: https://github.com/pytorch/pytorch/pull/79546
Approved by: https://github.com/seemethere
2022-06-15 15:00:04 +00:00
Nikita Shulga
6a96bda445 [BE] Add clang-format changes to blame-ignore-revs (#79593)
Includes 30fb2c4aba and 95b15c266b

Pull Request resolved: https://github.com/pytorch/pytorch/pull/79593
Approved by: https://github.com/janeyx99
2022-06-15 14:59:47 +00:00
Joel Benjamin Schlosser
5953fd9133 Revert behavior of Dropout2d on 3D inputs to 1D channel-wise dropout behavior & warn
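The restored 1D channel-wise behavior on (N, C, L) inputs can be sketched in plain Python (illustrative only; the real implementation lives in ATen and operates on tensors):

```python
import random

def dropout1d(x, p=0.5, seed=None):
    """Channel-wise dropout on a nested list shaped (N, C, L):
    each (n, c) channel is zeroed as a unit with probability p,
    and surviving channels are scaled by 1/(1-p) to preserve the
    expected value."""
    rng = random.Random(seed)
    scale = 1.0 / (1.0 - p)
    out = []
    for sample in x:
        out.append([
            [0.0] * len(channel) if rng.random() < p
            else [v * scale for v in channel]
            for channel in sample
        ])
    return out
```

The key property is that values within one channel are either all dropped or all kept, unlike elementwise dropout.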
Pull Request resolved: https://github.com/pytorch/pytorch/pull/79549

Approved by: https://github.com/ngimel, https://github.com/albanD
2022-06-15 14:56:43 +00:00
Joel Benjamin Schlosser
2d73c8e6e0 Add Dropout1d module
Pull Request resolved: https://github.com/pytorch/pytorch/pull/79545

Approved by: https://github.com/ngimel, https://github.com/albanD
2022-06-15 14:39:07 +00:00
Alex Zhuang
081ff9602a Correct torch.nn.CrossEntropyLoss output shape specification (#79568)
Fixes #79531

Pull Request resolved: https://github.com/pytorch/pytorch/pull/79568
Approved by: https://github.com/jbschlosser
2022-06-15 14:28:02 +00:00
swang392
69971dd111 [fixed] Make GHA workflow to retrieve latest promote-able SHA from master (#79559)
Relates to #76700

**Overview**: Wrote GHA to get the latest commit SHA. Another component of the script is pushing this SHA to the viable/strict branch, which I will test on pytorch/pytorch-canary.

Todo in the next PR: add comment explaining cron, replace package installation statements with txt file

**Test Plan:** Monitor github actions results to see if the SHA printed is correct by running GHA on pytorch/pytorch-canary. The successful test workflow is [here](https://github.com/pytorch/pytorch-canary/runs/6888486129?check_suite_focus=true).
Pull Request resolved: https://github.com/pytorch/pytorch/pull/79559
Approved by: https://github.com/janeyx99
2022-06-15 14:20:04 +00:00
PyTorch MergeBot
25e3331d7c Revert "Add support for multiply on direct SymInt"
This reverts commit 6b015af729.

Reverted https://github.com/pytorch/pytorch/pull/79493 on behalf of https://github.com/ezyang due to this land races with a revert
2022-06-15 14:06:45 +00:00
PyTorch MergeBot
b8db0a0475 Revert "Python Bindings for SymInts (#78135)"
This reverts commit d332724071.

Reverted https://github.com/pytorch/pytorch/pull/78135 on behalf of https://github.com/ezyang due to broke torchvision tests
2022-06-15 13:52:14 +00:00
PyTorch MergeBot
aa9d25efc0 Revert "Add support for directly passing symint to empty"
This reverts commit 05664a957e.

Reverted https://github.com/pytorch/pytorch/pull/79494 on behalf of https://github.com/ezyang due to conflicts with earlier diff that needs revert
2022-06-15 13:49:56 +00:00
Nikolay Korovaiko
83e575c510 have a common interface to extract metadata from SizeNodes (#78088)
Fixes #ISSUE_NUMBER

Pull Request resolved: https://github.com/pytorch/pytorch/pull/78088
Approved by: https://github.com/JackCaoG, https://github.com/wconstab
2022-06-15 04:59:08 +00:00
Nirav Mehta
e81ab046bd Skip stale check when facebook-github-bot is merging (#79572)
# Summary

ShipIt jobs triggered by co-development workflows are failing to merge PRs due to stale checks.  This diff skips the stale check when merge is triggered by `facebook-github-bot`.

Sample merge failure: https://github.com/pytorch/pytorch/pull/78654#issuecomment-1155607617
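A minimal sketch of the skip condition, with a hypothetical function name (the real check lives in the merge workflow scripts):

```python
def should_run_stale_check(triggering_actor):
    """Sketch: the stale check is skipped when the merge is triggered
    by the internal co-development bot; all other actors still get
    the stale check. Illustrative logic only."""
    return triggering_actor != "facebook-github-bot"
```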
Pull Request resolved: https://github.com/pytorch/pytorch/pull/79572
Approved by: https://github.com/bigfootjon, https://github.com/seemethere, https://github.com/malfet
2022-06-15 03:24:51 +00:00
vspenubarthi
38952d9350 [ao] Added function to inform dynamic vs static appropriate
Summary: The _detect_dynamic_vs_static function was added. It takes in a
prepared fx graph model that already has ModelReportObservers built into
it and uses the collected information to determine whether the input and
output distributions are stationary or non-stationary, providing feedback on whether
to make linear modules static or dynamic based on this information.
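As a loose illustration of the stationary/non-stationary idea (the actual ModelReportObserver logic is more involved; this heuristic and its threshold are invented for the sketch):

```python
def suggest_quantization(batch_mins, batch_maxs, tolerance=0.2):
    """Illustrative heuristic: if the per-batch observed ranges stay
    within `tolerance` of their average, call the distribution
    stationary and suggest static quantization; otherwise the ranges
    drift batch to batch, so suggest dynamic quantization."""
    ranges = [mx - mn for mn, mx in zip(batch_mins, batch_maxs)]
    avg = sum(ranges) / len(ranges)
    stationary = all(abs(r - avg) <= tolerance * avg for r in ranges)
    return "static" if stationary else "dynamic"
```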

This PR will be followed up soon by another PR that more
rigorously tests the whole end-to-end behavior of this system, which is
primarily how the function in this PR will be validated; that is
why this one has only one test.

Test Plan: python test/quantization/fx/test_model_report_fx.py TestModelReportDetectDynamicStatic

Pull Request resolved: https://github.com/pytorch/pytorch/pull/79326

Approved by: https://github.com/HDCharles
2022-06-15 02:51:27 +00:00
Ivan Yashchuk
e10b762537 Enable torch._refs.var for nvFuser executor (#79517)
This PR adds variance function with correction argument to nvFuser.

Now it's possible to run
```py
import torch
import torch._refs
from torch._prims.executor import make_traced

def foo1(a):
    return torch._refs.var(a, keepdim=False, unbiased=False)

def foo2(a):
    return torch._refs.var(a, keepdim=False, correction=2)

a = torch.randn(3, 3, device='cuda')
make_traced(foo1)(a, executor="nvfuser")
make_traced(foo2)(a, executor="nvfuser")
```

Pull Request resolved: https://github.com/pytorch/pytorch/pull/79517
Approved by: https://github.com/mruberry, https://github.com/jjsjann123
2022-06-14 23:08:53 +00:00
Akshay Parashar
20675977bc [Static Runtime] Performance optimization for fork operation (#79482)
Summary:
- A StaticModule was being created at runtime, which added overhead to the forked operation
- Move StaticModule creation outside of the runtime path so that each StaticRuntime instance can be created on top of the same StaticModule, which is created only once
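The optimization is the classic hoist-expensive-construction-out-of-the-hot-path pattern; a Python sketch under the assumption that module construction is the costly step (the real code is C++, and these class stand-ins are hypothetical):

```python
class StaticModule:
    """Stand-in for an expensive-to-build compiled module."""
    def __init__(self, graph):
        self.graph = graph  # imagine heavy graph preparation here

class StaticRuntime:
    """Lightweight per-invocation runtime sharing one StaticModule."""
    def __init__(self, module):
        self.module = module

# Before: building a StaticModule inside every fork (per-call overhead).
def fork_slow(graph):
    return StaticRuntime(StaticModule(graph))

# After: build the StaticModule once, share it across forked runtimes.
shared = StaticModule(graph="my_graph")

def fork_fast():
    return StaticRuntime(shared)
```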

Differential Revision: D37126923

Pull Request resolved: https://github.com/pytorch/pytorch/pull/79482
Approved by: https://github.com/tenpercent
2022-06-14 22:31:15 +00:00
Parth Savla
35b130ec0f [Vulkan] added vulkan threshold op (#78654)
Summary: implemented the threshold op for Vulkan

Test Plan: buck run //xplat/caffe2:pt_vulkan_api_test_binAppleMac

Differential Revision: D36681867

Pull Request resolved: https://github.com/pytorch/pytorch/pull/78654
Approved by: https://github.com/SS-JIA
2022-06-14 21:58:04 +00:00
Kevin Tse
22c7b1ddb5 [DataPipe] Fix error message coming from single iterator constraint
Pull Request resolved: https://github.com/pytorch/pytorch/pull/79547

Approved by: https://github.com/ejguan
2022-06-14 21:38:36 +00:00
PyTorch MergeBot
bad7720dde [xla hash update] update the pinned xla hash (#79172)
This PR is auto-generated nightly by [this action](https://github.com/pytorch/pytorch/blob/master/.github/workflows/_update-commit-hash.yml).
Update the pinned xla hash.
Pull Request resolved: https://github.com/pytorch/pytorch/pull/79172
Approved by: https://github.com/zengk95
2022-06-14 21:37:33 +00:00
Michael Suo
6e2f9ece4c [ci/docs] add some documentation about the stats uploading process
This process is pretty confusing, so I wrote it down.

[skip ci]

Pull Request resolved: https://github.com/pytorch/pytorch/pull/79504

Approved by: https://github.com/janeyx99
2022-06-14 21:35:04 +00:00
jpvillam
aff7eef476 [ROCm] Enable some sparse tests on ROCm (#77877)
Enabling:
test_sampled_addmm_errors_cuda_complex128
test_sampled_addmm_errors_cuda_complex64
test_sampled_addmm_errors_cuda_float32
test_sampled_addmm_errors_cuda_float64
test_sparse_add_cuda_complex128
test_sparse_add_cuda_complex64

Pull Request resolved: https://github.com/pytorch/pytorch/pull/77877
Approved by: https://github.com/pruthvistony, https://github.com/malfet
2022-06-14 21:11:35 +00:00
Jeff Daily
20d56d2b32 increase sleep for TestCuda.test_caching_pinned_memory_multi_gpu (#76601)
Fixes #68299.  Fixes #70875.

Test is flaky on ROCm because the HIP runtime occasionally copies asynchronously too quickly for the current sleep value of 50ms.  This is not a bug.  Increasing the sleep value to 1s to avoid flakiness.
Pull Request resolved: https://github.com/pytorch/pytorch/pull/76601
Approved by: https://github.com/pruthvistony, https://github.com/malfet
2022-06-14 21:10:35 +00:00
Jeff Daily
adaafeedb1 do not write sccache stats to json if missing OUR_GITHUB_JOB_ID (#79541)
Allows use of .jenkins/pytorch/build.sh without assuming OUR_GITHUB_JOB_ID is set.  This is a regression caused by #79366.
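A sketch of the guard, with a hypothetical helper name (the real change is in the shell scripts under .jenkins/pytorch):

```python
import json
import os

def maybe_write_sccache_stats(stats, path="sccache-stats.json"):
    """Sketch: only emit the per-job stats file when OUR_GITHUB_JOB_ID
    is set (i.e. we are running under the GitHub Actions workflow);
    plain invocations of build.sh skip the write instead of failing.
    Returns True when the file was written."""
    job_id = os.environ.get("OUR_GITHUB_JOB_ID")
    if job_id is None:
        return False
    with open(path, "w") as f:
        json.dump({"job_id": job_id, **stats}, f)
    return True
```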
Pull Request resolved: https://github.com/pytorch/pytorch/pull/79541
Approved by: https://github.com/suo, https://github.com/seemethere
2022-06-14 20:35:34 +00:00
Edward Z. Yang
05664a957e Add support for directly passing symint to empty
Signed-off-by: Edward Z. Yang <ezyang@fb.com>

Pull Request resolved: https://github.com/pytorch/pytorch/pull/79494

Approved by: https://github.com/albanD
2022-06-14 20:34:20 +00:00
Edward Z. Yang
6b015af729 Add support for multiply on direct SymInt
Signed-off-by: Edward Z. Yang <ezyang@fb.com>

Pull Request resolved: https://github.com/pytorch/pytorch/pull/79493

Approved by: https://github.com/albanD
2022-06-14 20:34:20 +00:00
zengk95
28d1216bd5 Skip Flaky ONNX Test (#79556)
See title

Addresses https://github.com/pytorch/pytorch/issues/79540

Error it's causing:
```
2022-06-14T16:29:53.6335274Z Results (1120.92s):
2022-06-14T16:29:53.6335495Z      393 passed
2022-06-14T16:29:53.6335710Z        1 failed
2022-06-14T16:29:53.6336041Z          - test/onnx/test_models.py:155 TestModels_new_jit_API.test_inception
2022-06-14T16:29:53.6336326Z       60 skipped
2022-06-14T16:29:54.4670969Z ##[error]Process completed with exit code 1.
2022-06-14T16:29:54.4730658Z Prepare all required actions
2022-06-14T16:29:54.4730993Z Getting action download info
```
Pull Request resolved: https://github.com/pytorch/pytorch/pull/79556
Approved by: https://github.com/janeyx99, https://github.com/seemethere
2022-06-14 20:23:52 +00:00
Nikita Shulga
e895672b35 Followup fix after #78828 (#79554)
Will be skipped when imported internally, for more details see https://www.internalfb.com/diff/D37114156?src_version_fbid=3331368873807344

Pull Request resolved: https://github.com/pytorch/pytorch/pull/79554
Approved by: https://github.com/albanD
2022-06-14 20:20:18 +00:00
lezcano
549a597c00 Port linalg_eigh and linalg_eigvalsh to structured
This follows the structure of linalg.svd.

Pull Request resolved: https://github.com/pytorch/pytorch/pull/79072

Approved by: https://github.com/IvanYashchuk, https://github.com/albanD
2022-06-14 20:17:01 +00:00
Ivan Yashchuk
4fc7832d72 Reference implementations for softmax, log_softmax, logsumexp (#79423)
This PR adds references for:

- `torch.softmax`
- `torch.log_softmax`
- `torch.logsumexp`

Unfortunately, none of them currently pass `test_python_ref_executor` even with `"aten"` executor.
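Such references typically reduce to the log-sum-exp trick; a pure-Python sketch of the underlying math (not the actual torch._refs implementations, which operate on tensors and support dim/keepdim arguments):

```python
import math

def logsumexp(xs):
    """Numerically stable log(sum(exp(x))): shift by the max so the
    exponentials cannot overflow."""
    m = max(xs)
    return m + math.log(sum(math.exp(x - m) for x in xs))

def softmax(xs):
    """softmax(x)_i = exp(x_i - logsumexp(x))."""
    lse = logsumexp(xs)
    return [math.exp(x - lse) for x in xs]

def log_softmax(xs):
    """log_softmax(x)_i = x_i - logsumexp(x)."""
    lse = logsumexp(xs)
    return [x - lse for x in xs]
```

Shifting by the max is what lets `logsumexp([1000.0, 1000.0])` return a finite value where a naive `log(sum(exp(x)))` would overflow.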
Pull Request resolved: https://github.com/pytorch/pytorch/pull/79423
Approved by: https://github.com/mruberry
2022-06-14 19:43:51 +00:00
Ivan Yashchuk
8895862744 Enable torch._refs.mean for nvFuser executor (#79444)
This PR fixes a bug with `broadcast_in_dim` leading to the situation when reduction ops were not allowed to be used before `broadcast_in_dim`.

With this PR it's possible to run
```py
import torch
import torch._refs
from torch._prims.executor import make_traced

def foo(a):
    return torch._refs.mean(a, keepdim=False)

a = torch.randn(3, 3, device='cuda')
make_traced(foo)(a, executor="nvfuser")
```

Pull Request resolved: https://github.com/pytorch/pytorch/pull/79444
Approved by: https://github.com/mruberry, https://github.com/jjsjann123
2022-06-14 19:42:07 +00:00
Jane Xu
0005ad8801 Skip extremely long chebyshev/legendre tests introduced in #78304 (#79529)
See https://github.com/pytorch/pytorch/issues/79528

Pull Request resolved: https://github.com/pytorch/pytorch/pull/79529
Approved by: https://github.com/mruberry, https://github.com/malfet
2022-06-14 19:29:59 +00:00
PyTorch MergeBot
dde81c20f9 Revert "Make GHA workflow to retrieve latest promote-able SHA from master (#79548)"
This reverts commit e479daed78.

Reverted https://github.com/pytorch/pytorch/pull/79548 on behalf of https://github.com/malfet due to Broke on trunk, see e479daed78
2022-06-14 19:15:20 +00:00
vspenubarthi
8e05513152 [ao] Added ModelReportObserver to inform on dynamic vs static
Summary: The purpose of this is to add to the model report functionality
by creating an observer that takes a prepared fx module and suggests
whether static or dynamic quantization is more appropriate. The tests
for this have been written and are included in the location indicated by the
Test Plan.

Test Plan: python test/quantization/fx/test_model_report_fx.py TestModelReportObserver

Pull Request resolved: https://github.com/pytorch/pytorch/pull/79243

Approved by: https://github.com/jerryzh168, https://github.com/andrewor14
2022-06-14 19:08:40 +00:00
erjia
04f87f2ab9 [DataLoader] Fix the world_size when distributed sharding MapDataPipe (#79524)
Fixes #79449

Pull Request resolved: https://github.com/pytorch/pytorch/pull/79524
Approved by: https://github.com/NivekT, https://github.com/VitalyFedyunin
2022-06-14 19:03:57 +00:00
swang392
e479daed78 Make GHA workflow to retrieve latest promote-able SHA from master (#79548)
Relates to #76700

**Overview:** Wrote GHA to get the latest commit SHA. Another component of the script is pushing this SHA to the viable/strict branch but I'm planning to test that locally after verifying that this part is correct.

**Test Plan:** Monitor github actions results to see if the SHA printed is correct -- I wasn't able to check this on my personal fork.
Pull Request resolved: https://github.com/pytorch/pytorch/pull/79548
Approved by: https://github.com/seemethere
2022-06-14 19:02:04 +00:00
Jordan Fix
f614f66acf [const_fold] Set requires_grad based on the folded tensor; add device_for_folding option (#79067)
Summary: att

Test Plan: Added unit test coverage for tensor_meta part.

Reviewed By: wushirong

Differential Revision: D36975932

Pull Request resolved: https://github.com/pytorch/pytorch/pull/79067
Approved by: https://github.com/dborkovic
2022-06-14 19:00:05 +00:00
Han Qi (qihqi)
577f87bbff Make flatbuffer loads faster if loading as mobile module. (#78998)
BC/FC check: verified that a flatbuffer file created at this commit can
be loaded at HEAD, and a file created at HEAD can be loaded at this commit

Fixes #ISSUE_NUMBER

Pull Request resolved: https://github.com/pytorch/pytorch/pull/78998
Approved by: https://github.com/zhxchen17
2022-06-14 18:57:01 +00:00
Nikita Shulga
81cd276d61 [MPS] Support stride of stride
Fixes https://github.com/pytorch/pytorch/issues/79181

Pull Request resolved: https://github.com/pytorch/pytorch/pull/79521

Approved by: https://github.com/kulinseth
2022-06-14 18:49:44 +00:00
Antonio Kim
51b65cd765 Fix warning: cast from type const char* to type char* casts away qualifiers (#79520)
Stop casting `__FILE__` to `(char*)`, which eliminates the prevalent `-Wcast-qual` warnings.

Fixes #79519

Pull Request resolved: https://github.com/pytorch/pytorch/pull/79520
Approved by: https://github.com/malfet
2022-06-14 18:04:00 +00:00