pytorch/tools
peterjc123 ebed008dd4 Correct /MP usage in MSVC (#33120)
Summary:
## Several flags
`/MP[M]`: A flag for the compiler `cl`. It enables object-level multiprocessing. If M is omitted, it defaults to the number of cores on the machine.
`/maxcpucount:[M]`: A flag for the generator `msbuild`. It enables project-level multiprocessing. If M is omitted, it defaults to the number of cores on the machine.
`/p:CL_MPCount=[M]`: A flag for the generator `msbuild`. It makes the generator pass `/MP[M]` on to the compiler.
`/j[M]`: A flag for the generator `ninja`. It enables object-level multiprocessing. If M is omitted, it defaults to the number of cores on the machine.
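The flag catalog above can be summarized in a small sketch. `parallel_flags` is a hypothetical helper written for illustration; it is not part of PyTorch's build scripts:

```python
import multiprocessing

def parallel_flags(tool, jobs=None):
    """Return the multiprocessing flag(s) for a given build tool.

    Illustrative only: the flag spellings come from the summary above,
    the helper itself is an assumption.
    """
    m = jobs if jobs is not None else multiprocessing.cpu_count()
    if tool == "cl":
        return [f"/MP{m}"]                 # object-level, in the compiler itself
    if tool == "msbuild":
        return [f"/maxcpucount:{m}",       # project-level parallelism
                f"/p:CL_MPCount={m}"]      # forwards /MP[M] to cl
    if tool == "ninja":
        return [f"-j{m}"]                  # object-level, in the generator
    raise ValueError(f"unknown tool: {tool}")
```

For example, `parallel_flags("msbuild", 4)` yields both the project-level switch and the property that forwards `/MP4` to `cl`.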

## Reason for the change
1. Object-level multiprocessing is preferred over project-level multiprocessing.
2. ~For ninja, we don't need to set `/MP`, otherwise M * M processes will be spawned.~ Actually, this is not correct: in ninja builds, each compile command contains only one source file, so the `/MP` switch has no effect.
3. For msbuild, if it is called through the Python configuration scripts, `/p:CL_MPCount=[M]` is added; otherwise, we add `/MP` to `CMAKE_CXX_FLAGS`.
4. ~It may be a possible fix for https://github.com/pytorch/pytorch/issues/28271, https://github.com/pytorch/pytorch/issues/27463 and https://github.com/pytorch/pytorch/issues/25393, because `/MP` is also passed to `nvcc`.~ This is probably not the case, because `/MP` has no effect when there is only one source file per command.
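The decision in point 3 can be sketched as follows. All names here are hypothetical and for illustration only; the real logic lives in the CMake files and the Python configuration scripts:

```python
def msvc_parallel_args(generator, invoked_from_python_scripts, jobs):
    """Sketch of the /MP vs. /p:CL_MPCount decision described above.

    Returns (extra msbuild arguments, extra CMAKE_CXX_FLAGS entries).
    Illustrative helper, not PyTorch's actual API.
    """
    extra_build_args = []
    extra_cxx_flags = []
    if generator == "Visual Studio":  # msbuild-based generator
        if invoked_from_python_scripts:
            extra_build_args.append(f"/p:CL_MPCount={jobs}")
        else:
            extra_cxx_flags.append("/MP")  # unnumbered: cl uses the core count
    # For Ninja, /MP is useless: each cl command compiles one source file.
    return extra_build_args, extra_cxx_flags
```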

## Reference
1. https://docs.microsoft.com/en-us/cpp/build/reference/mp-build-with-multiple-processes?view=vs-2019
2. https://github.com/Microsoft/checkedc-clang/wiki/Parallel-builds-of-clang-on-Windows
3. https://blog.kitware.com/cmake-building-with-all-your-cores/
Pull Request resolved: https://github.com/pytorch/pytorch/pull/33120

Differential Revision: D19817227

Pulled By: ezyang

fbshipit-source-id: f8d01f835016971729c7a8d8a0d1cb8a8c2c6a5f
2020-02-10 11:29:25 -08:00
amd_build Move torch.cuda's atfork handler into C++ (#29101) 2019-11-11 07:34:27 -08:00
autograd Backward operation of torch.eig for real eigenvalues (#33090) 2020-02-10 09:52:56 -08:00
code_analyzer [pytorch] change op dependency output to use double-quoted strings (#32464) 2020-01-24 15:27:28 -08:00
docker
jit Fix/simplify alias annotation handling in op codegen. (#32574) 2020-01-30 00:31:03 -08:00
pyi add missing method annotations to torch.Tensor (#30576) 2020-02-03 09:59:14 -08:00
setup_helpers Correct /MP usage in MSVC (#33120) 2020-02-10 11:29:25 -08:00
shared Ignore F401 in all __init__.py without putting noqa (#25823) 2019-10-23 15:28:13 -07:00
__init__.py
aten_mirror.sh
build_libtorch.py
build_pytorch_libs.py Remove tools/setup_helpers/cuda.py. (#28617) 2019-11-06 07:12:01 -08:00
build_variables.py Remove Python dependency (toPyTuple/fromPyTuple, jitCompilationUnit, deserialize) in rref_impl.h/cpp (#32753) 2020-01-30 17:52:48 -08:00
clang_format.py Enable EXE001 flake8 check. (#27560) 2019-10-09 09:15:29 -07:00
clang_tidy.py Fix typos (#30606) 2019-12-02 20:17:42 -08:00
download_mnist.py
flake8_hook.py
generated_dirs.txt
git-pre-commit
git_add_generated_dirs.sh
git_reset_generated_dirs.sh
pytorch.version
README.md no more build_pytorch_libs.sh/.bat (#32319) 2020-01-23 14:45:54 -08:00
update_disabled_tests.sh we should have a config-based way to skip flaky tests (#30978) 2019-12-17 11:58:43 -08:00

This folder contains a number of scripts which are used as part of the PyTorch build process. This directory also doubles as a Python module hierarchy (hence the __init__.py).

Overview

Modern infrastructure:

  • autograd - Code generation for autograd. This includes definitions of all our derivatives.
  • jit - Code generation for the JIT.
  • shared - Generic infrastructure that scripts in tools may find useful.
    • module_loader.py - Makes it easier to import arbitrary Python files in a script, without having to add them to the PYTHONPATH first.
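A minimal sketch of what such a loader typically does, using only the standard library. The name `import_file` and its signature are assumptions for illustration, not module_loader.py's actual API:

```python
import importlib.util
import sys

def import_file(module_name, file_path):
    """Import an arbitrary Python file without adding it to PYTHONPATH."""
    spec = importlib.util.spec_from_file_location(module_name, file_path)
    module = importlib.util.module_from_spec(spec)
    sys.modules[module_name] = module  # register so nested imports resolve
    spec.loader.exec_module(module)
    return module
```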

Legacy infrastructure (we should kill this):

  • cwrap - Implementation of legacy code generation for THNN/THCUNN. This is used by nnwrap.

Build system pieces:

  • setup_helpers - Helper code for searching for third-party dependencies on the user system.
  • build_pytorch_libs.py - Cross-platform script that builds all of the constituent libraries of PyTorch, but not the PyTorch Python extension itself.
  • build_libtorch.py - Script for building libtorch, a standalone C++ library without Python support. This build script is tested in CI.

Developer tools which you might find useful:

Important if you want to run on AMD GPU:

  • amd_build - HIPify scripts, for transpiling CUDA into AMD HIP. Right now, PyTorch and Caffe2 share logic for how to do this transpilation, but have separate entry-points for transpiling either PyTorch or Caffe2 code.
    • build_amd.py - Top-level entry point for HIPifying our codebase.
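At its core, HIPify is a source-to-source rewrite driven by a CUDA-to-HIP identifier mapping. A toy sketch follows; the mapping table here is a tiny illustrative subset, not the real one, and the function is not the actual hipify API:

```python
import re

# Tiny illustrative subset of the CUDA -> HIP identifier mapping.
CUDA_TO_HIP = {
    "cudaMalloc": "hipMalloc",
    "cudaFree": "hipFree",
    "cudaMemcpy": "hipMemcpy",
    "cuda_runtime.h": "hip/hip_runtime.h",
}

def hipify(source: str) -> str:
    """Replace every known CUDA identifier with its HIP counterpart."""
    pattern = re.compile("|".join(re.escape(k) for k in CUDA_TO_HIP))
    return pattern.sub(lambda m: CUDA_TO_HIP[m.group(0)], source)
```

The real tool handles far more than a token table (headers, kernel launch syntax, per-project exclusions), which is why PyTorch and Caffe2 keep separate entry points around the shared logic.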

Tools which are only situationally useful: