
This folder contains a number of scripts which are used as part of the PyTorch build process. This directory also doubles as a Python module hierarchy (thus the __init__.py).

Overview

Modern infrastructure:

  • autograd - Code generation for autograd. This includes definitions of all our derivatives.
  • jit - Code generation for the JIT.
  • shared - Generic infrastructure that scripts in tools may find useful.
    • module_loader.py - Makes it easier to import arbitrary Python files in a script, without having to add them to the PYTHONPATH first.
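Such a loader is typically a thin wrapper over the standard library's importlib machinery; a minimal sketch of that approach (the function name and paths here are illustrative, not the actual module_loader.py API):

```python
import importlib.util


def import_file(module_name, file_path):
    """Import a Python file by its path, without adding it to PYTHONPATH."""
    spec = importlib.util.spec_from_file_location(module_name, file_path)
    module = importlib.util.module_from_spec(spec)
    spec.loader.exec_module(module)  # runs the file's top-level code
    return module
```

A caller can then do `gen = import_file("gen", "tools/autograd/gen_autograd.py")` and use `gen` like any imported module.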

Legacy infrastructure (we should kill this):

  • cwrap - Implementation of legacy code generation for THNN/THCUNN. This is used by nnwrap.

Build system pieces:

  • setup_helpers - Helper code for searching for third-party dependencies on the user system.
  • build_pytorch_libs.py - Cross-platform script that builds all of the constituent libraries of PyTorch, but not the PyTorch Python extension itself.
  • build_libtorch.py - Script for building libtorch, a standalone C++ library without Python support. This build script is tested in CI.

Developer tools which you might find useful:

Important if you want to run on AMD GPU:

  • amd_build - HIPify scripts, for transpiling CUDA into AMD HIP. Right now, PyTorch and Caffe2 share logic for how to do this transpilation, but have separate entry-points for transpiling either PyTorch or Caffe2 code.
    • build_amd.py - Top-level entry point for HIPifying our codebase.
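At its core, HIPify is source-to-source translation driven by large CUDA-to-HIP substitution tables. A toy sketch of the idea follows; the mapping below is a tiny illustrative subset invented for this example, not the real tables shipped in amd_build:

```python
import re

# Illustrative subset of CUDA -> HIP renames (the real HIPify tables
# contain thousands of entries covering APIs, types, and headers).
CUDA_TO_HIP = {
    "cuda_runtime.h": "hip/hip_runtime.h",
    "cudaMalloc": "hipMalloc",
    "cudaMemcpy": "hipMemcpy",
    "cudaFree": "hipFree",
}


def hipify(source):
    """Rewrite CUDA identifiers in a source string to their HIP equivalents."""
    pattern = re.compile("|".join(re.escape(k) for k in CUDA_TO_HIP))
    return pattern.sub(lambda m: CUDA_TO_HIP[m.group(0)], source)
```

The real scripts operate on whole source trees and also rewrite file names and build rules, but the substitution step above is the essence of the transpilation.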

Tools which are only situationally useful: