build --cxxopt=--std=c++17

build --copt=-I.

# Bazel does not support including its cc_library targets as system
# headers. We work around this for generated code
# (e.g. c10/macros/cmake_macros.h) by making the generated directory a
# system include path.
build --copt=-isystem --copt bazel-out/k8-fastbuild/bin
build --copt=-isystem --copt bazel-out/darwin-fastbuild/bin

build --experimental_ui_max_stdouterr_bytes=2048576

# Configuration to disable tty features for environments like CI
build:no-tty --curses no
build:no-tty --progress_report_interval 10
build:no-tty --show_progress_rate_limit 10
[bazel] GPU-support: add @local_config_cuda and @cuda (#63604)
Summary:
## Context
We take a first step at tackling GPU bazel support by adding the bazel external workspaces `local_config_cuda` and `cuda`. The first has some hardcoded values and lists of files; the second provides a nicer, high-level wrapper that maps onto the bazel targets pytorch already expects, which are guarded by the `if_cuda` macro.
The prefix `local_config_` signifies that we are breaking the bazel hermeticity philosophy by explicitly relying on the CUDA installation present on the machine.
## Testing
Note an important scenario unlocked by this change: compilation of cpp code that depends on cuda libraries (i.e. cuda.h and so on).
Before:
```
sergei.vorobev@cs-sv7xn77uoy-gpu-1628706590:~/src/pytorch4$ bazelisk build --define=cuda=true //:c10
ERROR: /home/sergei.vorobev/src/pytorch4/tools/config/BUILD:12:1: no such package 'tools/toolchain': BUILD file not found in any of the following directories. Add a BUILD file to a directory to mark it as a package.
 - /home/sergei.vorobev/src/pytorch4/tools/toolchain and referenced by '//tools/config:cuda_enabled_and_capable'
ERROR: While resolving configuration keys for //:c10: Analysis failed
ERROR: Analysis of target '//:c10' failed; build aborted: Analysis failed
INFO: Elapsed time: 0.259s
INFO: 0 processes.
FAILED: Build did NOT complete successfully (2 packages loaded, 2 targets configured)
```
After:
```
sergei.vorobev@cs-sv7xn77uoy-gpu-1628706590:~/src/pytorch4$ bazelisk build --define=cuda=true //:c10
INFO: Analyzed target //:c10 (6 packages loaded, 246 targets configured).
INFO: Found 1 target...
Target //:c10 up-to-date:
  bazel-bin/libc10.lo
  bazel-bin/libc10.so
INFO: Elapsed time: 0.617s, Critical Path: 0.04s
INFO: 0 processes.
INFO: Build completed successfully, 1 total action
```
The `//:c10` target is a good test case here because its [glob differs](https://github.com/pytorch/pytorch/blob/075024b9a34904ec3ecdab3704c3bcaa329bdfea/BUILD.bazel#L76-L81) based on whether we compile for CUDA or not.
## What is out of scope of this PR
This PR is the first in a series providing comprehensive GPU bazel build support. Namely, we don't tackle the [cu_library](https://github.com/pytorch/pytorch/blob/11a40ad915d4d3d8551588e303204810887fcf8d/tools/rules/cu.bzl#L2) implementation here; that would be a separate large chunk of work.
Pull Request resolved: https://github.com/pytorch/pytorch/pull/63604
Reviewed By: soulitzer
Differential Revision: D30442083
Pulled By: malfet
fbshipit-source-id: b2a8e4f7e5a25a69b960a82d9e36ba568eb64595

# Build with GPU support by default.
build --define=cuda=true

# rules_cuda configuration
build --@rules_cuda//cuda:enable_cuda
build --@rules_cuda//cuda:cuda_targets=sm_52
build --@rules_cuda//cuda:compiler=nvcc
build --repo_env=CUDA_PATH=/usr/local/cuda

# Configuration to build without GPU support
build:cpu-only --define=cuda=false
# define a separate build folder for faster switching between configs
build:cpu-only --platform_suffix=-cpu-only
# See the note on the config-less build for details about why we are
# doing this. We must also do it for the "-cpu-only" platform suffix.
build --copt=-isystem --copt=bazel-out/k8-fastbuild-cpu-only/bin
# rules_cuda configuration
build:cpu-only --@rules_cuda//cuda:enable_cuda=False

# Definition of --config=shell: wrap the command so that it drops into an
# interactive shell immediately before execution
build:shell --run_under="//tools/bazel_tools:shellwrap"
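
The named config groups defined in this file (`no-tty`, `cpu-only`, `shell`) are selected with `--config=<name>` on the command line. A sketch of typical invocations; `//:c10` is a target referenced elsewhere in this file, while `//:some_binary` is a hypothetical placeholder:

```shell
# Select a config group from this .bazelrc by name; the "build:<name>"
# flags are applied on top of the unconditional "build" lines.
bazel build --config=no-tty //:c10        # non-interactive output for CI
bazel build --config=cpu-only //:c10      # build without CUDA support
bazel run --config=shell //:some_binary   # open a shell just before execution
```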

# Disable all warnings for external repositories. We don't care about
# their warnings.
build --per_file_copt=^external/@-w

# Set additional warnings to error level.
#
# Implementation notes:
#   * we use file extensions to determine if we are using the C++
#     compiler or the cuda compiler
#   * we use ^// at the start of the regex to only permit matching
#     PyTorch files. This excludes external repos.
#
# Note that because this is logically a command-line flag, it is
# considered the final word on which warnings are enabled. This has the
# unfortunate consequence of preventing us from disabling an error at
# the target level, because those flags will come before these flags in
# the action invocation. Instead we provide per-file exceptions after
# this.
#
# On the bright side, this means we don't have to more broadly apply
# the exceptions to an entire target.
#
# Looking for CUDA flags? We have a cu_library macro that we can edit
# directly. Look in //tools/rules:cu.bzl for details. Editing the
# macro over this has the following advantages:
#   * making changes does not require discarding the Bazel analysis
#     cache
#   * it allows for selective overrides on individual targets, since the
#     macro-level opts will come earlier than target-level overrides
build --per_file_copt='^//.*\.(cpp|cc)$'@-Werror=all
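
A quick sketch of how the `^//.*\.(cpp|cc)$` filter behaves, using `grep -E` as a stand-in for Bazel's regex matching. The example paths are illustrative: the first mirrors labels used later in this file, while the `c10`, `gtest`, and `.cu` paths are hypothetical:

```shell
# Check candidate file paths against the per_file_copt filter above.
matches() { printf '%s' "$1" | grep -Eq '^//.*\.(cpp|cc)$'; }

for f in '//:aten/src/ATen/RegisterSparseCPU.cpp' \
         '//:c10/util/Logging.cc' \
         'external/gtest/src/gtest-all.cc' \
         '//:aten/src/ATen/native/cuda/Loops.cu'; do
  if matches "$f"; then echo "match:    $f"; else echo "no match: $f"; fi
done
```

The first two paths match (PyTorch C++ sources, so the warning flags apply); the external repo path is excluded by the `^//` anchor, and the `.cu` file is excluded by the extension alternation.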

# The following warnings come from -Wall. We downgrade them from error
# to warnings here.
#
# We intentionally use #pragma unroll, which is compiler specific.
build --per_file_copt='^//.*\.(cpp|cc)$'@-Wno-error=unknown-pragmas

build --per_file_copt='^//.*\.(cpp|cc)$'@-Werror=extra

# The following warnings come from -Wextra. We downgrade them from error
# to warnings here.
#
# unused-parameter has a tremendous number of violations in the
# codebase. It would be a lot of work to fix them all, so just disable
# it for now.
build --per_file_copt='^//.*\.(cpp|cc)$'@-Wno-unused-parameter

# missing-field-initializers has a large number of violations in the
# codebase, and it is also used pervasively in the Python C API. There
# are a couple of catches though:
#   * we use multiple versions of the Python API and hence have
#     potentially multiple different versions of each relevant
#     struct. They may have different numbers of fields. It would be
#     unwieldy to support multiple versions in the same source file.
#   * for many of these structs, Python itself recommends initializing
#     only a subset of the fields. We should respect the API usage
#     conventions of our dependencies.
#
# Hence, we just disable this warning altogether. We may want to clean
# up some of the clear-cut cases that could be risky, but we still
# likely want to have this disabled for the most part.
build --per_file_copt='^//.*\.(cpp|cc)$'@-Wno-missing-field-initializers

build --per_file_copt='^//.*\.(cpp|cc)$'@-Wno-unused-function
build --per_file_copt='^//.*\.(cpp|cc)$'@-Wno-unused-variable

# Generated registration files can contain unused functions; keep
# unused-function as a warning rather than an error for them.
build --per_file_copt='//:aten/src/ATen/RegisterCompositeExplicitAutograd\.cpp$'@-Wno-error=unused-function
build --per_file_copt='//:aten/src/ATen/RegisterCompositeImplicitAutograd\.cpp$'@-Wno-error=unused-function
build --per_file_copt='//:aten/src/ATen/RegisterMkldnnCPU\.cpp$'@-Wno-error=unused-function
build --per_file_copt='//:aten/src/ATen/RegisterNestedTensorCPU\.cpp$'@-Wno-error=unused-function
build --per_file_copt='//:aten/src/ATen/RegisterQuantizedCPU\.cpp$'@-Wno-error=unused-function
build --per_file_copt='//:aten/src/ATen/RegisterSparseCPU\.cpp$'@-Wno-error=unused-function
build --per_file_copt='//:aten/src/ATen/RegisterSparseCsrCPU\.cpp$'@-Wno-error=unused-function
build --per_file_copt='//:aten/src/ATen/RegisterNestedTensorMeta\.cpp$'@-Wno-error=unused-function
build --per_file_copt='//:aten/src/ATen/RegisterSparseMeta\.cpp$'@-Wno-error=unused-function
build --per_file_copt='//:aten/src/ATen/RegisterQuantizedMeta\.cpp$'@-Wno-error=unused-function
build --per_file_copt='//:aten/src/ATen/RegisterZeroTensor\.cpp$'@-Wno-error=unused-function
build --per_file_copt='//:torch/csrc/lazy/generated/RegisterAutogradLazy\.cpp$'@-Wno-error=unused-function
build --per_file_copt='//:torch/csrc/lazy/generated/RegisterLazy\.cpp$'@-Wno-error=unused-function