Mirror of https://github.com/saymrwulf/pytorch.git, synced 2026-05-14 20:57:59 +00:00
Use fabi-version=11 to ensure compatibility between gcc7 and gcc9 binaries (#81058)
Fixes: #80489

Test using cuda 11.3 manywheel binary:

```python
import torch
print(torch.__version__)
print(torch._C._PYBIND11_BUILD_ABI)
```

Output:

```
1.13.0.dev20220707+cu113
_cxxabi1011
```

Functorch test (torch: 1.13.0.dev20220707+cu113, functorch with cu102):

```python
import torch
print(torch.__version__)
print(torch._C._PYBIND11_BUILD_ABI)
from functorch import vmap
x = torch.randn(2, 3, 5)
vmap(lambda x: x, out_dims=3)(x)
```

Output:

```
1.13.0.dev20220707+cu113
_cxxabi1011
/home/atalman/temp/testc1.py:5: UserWarning: Failed to initialize NumPy: No module named 'numpy' (Triggered internally at ../torch/csrc/utils/tensor_numpy.cpp:73.)
  x = torch.randn(2, 3, 5)
Traceback (most recent call last):
  File "/home/atalman/temp/testc1.py", line 6, in <module>
    vmap(lambda x: x, out_dims=3)(x)
  File "/home/atalman/conda/lib/python3.9/site-packages/functorch/_src/vmap.py", line 361, in wrapped
    return _flat_vmap(
  File "/home/atalman/conda/lib/python3.9/site-packages/functorch/_src/vmap.py", line 488, in _flat_vmap
    return _unwrap_batched(batched_outputs, out_dims, vmap_level, batch_size, func)
  File "/home/atalman/conda/lib/python3.9/site-packages/functorch/_src/vmap.py", line 165, in _unwrap_batched
    flat_outputs = [
  File "/home/atalman/conda/lib/python3.9/site-packages/functorch/_src/vmap.py", line 166, in <listcomp>
    _remove_batch_dim(batched_output, vmap_level, batch_size, out_dim)
IndexError: Dimension out of range (expected to be in range of [-3, 2], but got 3)
```

Related Builder PR: https://github.com/pytorch/builder/pull/1083
Test PR: https://github.com/pytorch/pytorch/pull/81232
Pull Request resolved: https://github.com/pytorch/pytorch/pull/81058
Approved by: https://github.com/zou3519, https://github.com/malfet
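The `IndexError` in the functorch test follows from a standard dimension range check: the vmapped output here is 3-D, so valid `out_dims` values are `[-3, 2]`, and `out_dims=3` is rejected. A minimal sketch of that kind of check (a hypothetical helper for illustration, not functorch's actual implementation):

```python
def check_out_dim(out_dim: int, ndim: int) -> int:
    # A tensor with `ndim` dimensions accepts dimension indices in
    # [-ndim, ndim - 1]; anything outside raises, as in the traceback above.
    if not -ndim <= out_dim <= ndim - 1:
        raise IndexError(
            f"Dimension out of range (expected to be in range of "
            f"[{-ndim}, {ndim - 1}], but got {out_dim})"
        )
    return out_dim % ndim  # normalize negative dims to positive indices

print(check_out_dim(2, 3))   # valid: prints 2
try:
    check_out_dim(3, 3)      # out of range for a 3-D tensor
except IndexError as e:
    print(e)
```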
This commit is contained in:
parent d5bda29207
commit d552ba3b4f

1 changed file with 4 additions and 0 deletions
```diff
@@ -44,6 +44,10 @@ if(DEFINED GLIBCXX_USE_CXX11_ABI)
   if(${GLIBCXX_USE_CXX11_ABI} EQUAL 1)
     set(CXX_STANDARD_REQUIRED ON)
     set(CMAKE_CXX_FLAGS "${CMAKE_CXX_FLAGS} -D_GLIBCXX_USE_CXX11_ABI=1")
+  else()
+    # Please note this is required in order to ensure compatibility between gcc 9 and gcc 7
+    # This could be removed when all Linux PyTorch binary builds are compiled by the same toolchain again
+    string(APPEND CMAKE_CXX_FLAGS " -fabi-version=11")
   endif()
 endif()
```
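The effect of this change can be screened for at runtime by comparing the pybind11 build-ABI tags the two wheels report, as the test above does with `torch._C._PYBIND11_BUILD_ABI`. A hedged sketch with a hypothetical helper name (the tag strings follow the `_cxxabi1011` shape printed above; binaries can safely exchange pybind11 objects only when their tags match):

```python
def pybind11_abi_compatible(tag_a: str, tag_b: str) -> bool:
    # Tags look like "_cxxabi1011"; a mismatch means the two extensions were
    # built against different C++ ABI versions and must not be mixed.
    return tag_a == tag_b

# e.g. a torch wheel pinned to -fabi-version=11 vs. a wheel from a newer toolchain:
print(pybind11_abi_compatible("_cxxabi1011", "_cxxabi1011"))  # True
print(pybind11_abi_compatible("_cxxabi1011", "_cxxabi1013"))  # False
```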