# pytorch/test/cpp_extensions/setup.py
import os
import sys

from setuptools import setup

import torch.cuda
from torch.testing._internal.common_utils import IS_WINDOWS
from torch.utils.cpp_extension import (
    BuildExtension,
    CppExtension,
    CUDA_HOME,
    CUDAExtension,
    ROCM_HOME,
)

if sys.platform == "win32":
    # "/sdl" enables MSVC's additional security checks; "/permissive-" enforces
    # standards conformance, but is only passed on toolchains newer than VC 14.16.
    vc_version = os.getenv("VCToolsVersion", "")
    if vc_version.startswith("14.16."):
        CXX_FLAGS = ["/sdl"]
    else:
        CXX_FLAGS = ["/sdl", "/permissive-"]
else:
    # "-g" emits debug symbols on non-Windows toolchains.
    CXX_FLAGS = ["-g"]

USE_NINJA = os.getenv("USE_NINJA") == "1"

ext_modules = [
    CppExtension(
        "torch_test_cpp_extension.cpp", ["extension.cpp"], extra_compile_args=CXX_FLAGS
    ),
    CppExtension(
        "torch_test_cpp_extension.maia",
        ["maia_extension.cpp"],
        extra_compile_args=CXX_FLAGS,
    ),
    CppExtension(
        "torch_test_cpp_extension.rng",
        ["rng_extension.cpp"],
        extra_compile_args=CXX_FLAGS,
    ),
]
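
# A minimal sketch of how one of these entries behaves (assumed, for
# illustration only; not used by this script): CppExtension is a convenience
# wrapper that returns a setuptools.Extension preconfigured with the torch
# include directories and C++ language settings, so the entries above are
# ordinary setuptools extensions, roughly:
#
#     ext = CppExtension("demo", ["demo.cpp"])
#     isinstance(ext, setuptools.Extension)  # True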

if torch.cuda.is_available() and (CUDA_HOME is not None or ROCM_HOME is not None):
    extension = CUDAExtension(
        "torch_test_cpp_extension.cuda",
        [
            "cuda_extension.cpp",
            "cuda_extension_kernel.cu",
            "cuda_extension_kernel2.cu",
        ],
        extra_compile_args={"cxx": CXX_FLAGS, "nvcc": ["-O2"]},
    )
    ext_modules.append(extension)

if torch.cuda.is_available() and (CUDA_HOME is not None or ROCM_HOME is not None):
    extension = CUDAExtension(
        "torch_test_cpp_extension.torch_library",
        ["torch_library.cu"],
        extra_compile_args={"cxx": CXX_FLAGS, "nvcc": ["-O2"]},
    )
    ext_modules.append(extension)

if torch.backends.mps.is_available():
    extension = CppExtension(
        "torch_test_cpp_extension.mps",
        ["mps_extension.mm"],
        extra_compile_args=CXX_FLAGS,
    )
    ext_modules.append(extension)

# todo(mkozuki): Figure out the root cause
if (not IS_WINDOWS) and torch.cuda.is_available() and CUDA_HOME is not None:
    # malfet: One should not assume that PyTorch re-exports CUDA dependencies
    cublas_extension = CUDAExtension(
        name="torch_test_cpp_extension.cublas_extension",
        sources=["cublas_extension.cpp"],
        libraries=["cublas"] if torch.version.hip is None else [],
    )
    ext_modules.append(cublas_extension)

    cusolver_extension = CUDAExtension(
        name="torch_test_cpp_extension.cusolver_extension",
        sources=["cusolver_extension.cpp"],
        libraries=["cusolver"] if torch.version.hip is None else [],
    )
    ext_modules.append(cusolver_extension)

if (
    USE_NINJA
    and (not IS_WINDOWS)
    and torch.cuda.is_available()
    and CUDA_HOME is not None
):
    # "-dc" compiles relocatable device code; dlink=True requests the separate
    # device-link step needed to resolve cross-file device symbols.
    extension = CUDAExtension(
        name="torch_test_cpp_extension.cuda_dlink",
        sources=[
            "cuda_dlink_extension.cpp",
            "cuda_dlink_extension_kernel.cu",
            "cuda_dlink_extension_add.cu",
        ],
        dlink=True,
        extra_compile_args={"cxx": CXX_FLAGS, "nvcc": ["-O2", "-dc"]},
    )
    ext_modules.append(extension)

setup(
    name="torch_test_cpp_extension",
    packages=["torch_test_cpp_extension"],
    ext_modules=ext_modules,
    include_dirs="self_compiler_include_dirs_test",
    cmdclass={"build_ext": BuildExtension.with_options(use_ninja=USE_NINJA)},
    entry_points={
        "torch.backends": [
            "device_backend = torch_test_cpp_extension:_autoload",
        ],
    },
)
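
# Typical usage sketch (assumed workflow; not executed by this script): build
# the extensions in place, optionally with ninja, then import one of them:
#
#     USE_NINJA=1 python setup.py build_ext --inplace
#     python -c "import torch_test_cpp_extension.cpp"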