This folder contains a number of scripts which are used as
part of the PyTorch build process. This directory also doubles
as a Python module hierarchy (thus the `__init__.py`).
## Overview
Modern infrastructure:
- autograd - Code generation for autograd. This includes definitions of all our derivatives.
- jit - Code generation for JIT.
- shared - Generic infrastructure that scripts in tools may find useful.
  - module_loader.py - Makes it easier to import arbitrary Python files in a script without having to add them to the PYTHONPATH first. (A minimal import sketch follows this list.)
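For orientation, here is a minimal sketch of the standard importlib pattern that such a loader can use. The function name below is hypothetical and is not the actual module_loader.py API:

```python
import importlib.util

def import_module_from_path(module_name, path):
    # Build a module spec directly from a file path, bypassing PYTHONPATH.
    spec = importlib.util.spec_from_file_location(module_name, path)
    module = importlib.util.module_from_spec(spec)
    # Execute the module's code to populate its namespace.
    spec.loader.exec_module(module)
    return module

# Hypothetical usage: load a script from this tree without installing anything.
# mod = import_module_from_path("my_codegen", "tools/autograd/gen_autograd.py")
```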
Legacy infrastructure (we should kill this):
- cwrap - Implementation of legacy code generation for THNN/THCUNN. This is used by nnwrap.
Build system pieces:
- setup_helpers - Helper code for searching for third-party dependencies on the user system.
- build_pytorch_libs.py - Cross-platform script that builds all of the constituent libraries of PyTorch, but not the PyTorch Python extension itself.
- build_libtorch.py - Script for building libtorch, a standalone C++ library without Python support. This build script is tested in CI. (A usage sketch follows this list.)
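As a usage sketch, and assuming you are at the root of a PyTorch checkout (supported flags and environment variables vary by platform and release, so check the script itself for what it currently accepts):

```python
import subprocess

# Hedged sketch: build libtorch (C++ only, no Python extension) by invoking
# the in-tree entry point from the repository root.
subprocess.check_call(["python", "tools/build_libtorch.py"])
```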
Developer tools which you might find useful:
- clang_tidy.py - Script for running clang-tidy on only the lines of code which you changed.
- git_add_generated_dirs.sh and git_reset_generated_dirs.sh - Use these to force add generated files to your Git index, so that you can conveniently run diffs on them when working on code generation. (See also generated_dirs.txt, which specifies the list of directories with generated files; a sketch of the force-add step follows this list.)
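To make the force-add step concrete, here is a minimal Python sketch of what it amounts to, assuming generated_dirs.txt lists one directory per line and that you run from the repository root (the shell scripts above are the actual entry points; this is only an illustration):

```python
import subprocess

# Read the list of directories that contain generated files.
with open("tools/generated_dirs.txt") as f:
    dirs = [line.strip() for line in f if line.strip()]

# Force-add generated files so you can diff them after re-running codegen.
for d in dirs:
    subprocess.check_call(["git", "add", "-f", d])
```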
Important if you want to run on an AMD GPU:
- amd_build - HIPify scripts, for transpiling CUDA
into AMD HIP. Right now, PyTorch and Caffe2 share logic for how to
do this transpilation, but have separate entry points for transpiling
either PyTorch or Caffe2 code.
- build_amd.py - Top-level entry point for HIPifying our codebase. (A toy transpilation sketch follows this list.)
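To give a flavor of the transpilation, here is a deliberately toy sketch of CUDA-to-HIP identifier substitution. The mapping table below is a tiny illustrative subset; the real HIPify scripts use far larger tables and handle many edge cases this sketch ignores:

```python
import re

# Tiny illustrative subset of CUDA -> HIP renames; not the real mapping table.
CUDA_TO_HIP = {
    "cudaMalloc": "hipMalloc",
    "cudaFree": "hipFree",
    "cudaMemcpy": "hipMemcpy",
    "cuda_runtime.h": "hip/hip_runtime.h",
}

def hipify_source(text):
    # Match whole tokens so a short key cannot clobber a longer identifier
    # (e.g. "cudaMalloc" will not rewrite part of "cudaMallocHost").
    for cuda_name, hip_name in CUDA_TO_HIP.items():
        text = re.sub(r"\b" + re.escape(cuda_name) + r"\b", hip_name, text)
    return text

print(hipify_source("cudaMalloc(&ptr, size);"))  # hipMalloc(&ptr, size);
```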
Tools which are only situationally useful:
- aten_mirror.sh - Mirroring script responsible for keeping https://github.com/zdevito/ATen up-to-date.
- docker - Dockerfile for running (but not developing) PyTorch, using the official conda binary distribution. Context: https://github.com/pytorch/pytorch/issues/1619
- download_mnist.py - Download the MNIST dataset; this is necessary if you want to run the C++ API tests. (A download sketch follows this list.)
- run-clang-tidy-in-ci.sh - Responsible for checking that C++ code is clang-tidy clean in CI on Travis.
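A hedged sketch of the kind of download the MNIST script performs, using the classic IDX archive names; the actual mirror, destination paths, and decompression are handled by download_mnist.py itself, so check the script for the current source:

```python
import urllib.request

# Classic MNIST IDX archives; the hosting mirror may differ from what
# download_mnist.py actually uses.
BASE = "http://yann.lecun.com/exdb/mnist/"
FILES = [
    "train-images-idx3-ubyte.gz",
    "train-labels-idx1-ubyte.gz",
    "t10k-images-idx3-ubyte.gz",
    "t10k-labels-idx1-ubyte.gz",
]

for name in FILES:
    urllib.request.urlretrieve(BASE + name, name)
```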