Add README to tools, delete defunct scripts. (#13621)

Summary:
Some extra documentation for other bits too.

Signed-off-by: Edward Z. Yang <ezyang@fb.com>
Pull Request resolved: https://github.com/pytorch/pytorch/pull/13621

Differential Revision: D12943416

Pulled By: ezyang

fbshipit-source-id: c922995e420d38c2698ce59c5bf4ffa9eb68da83
This commit is contained in:
Edward Yang 2018-11-06 11:18:48 -08:00 committed by Facebook Github Bot
parent 6aee5488b5
commit 464dc31532
6 changed files with 101 additions and 82 deletions

@@ -129,12 +129,8 @@ and `python setup.py clean`. Then you can install in `build develop` mode again.
 * [api](torch/csrc/api) - The PyTorch C++ frontend.
 * [distributed](torch/csrc/distributed) - Distributed training
   support for PyTorch.
-* [tools](tools) - Code generation scripts for the PyTorch library
-  * [autograd](tools/autograd) - Code generation for autograd. This
-    includes definitions of all our derivatives.
-  * [jit](tools/jit) - Code generation for JIT
-  * [amd_build](tools/amd_build) - HIPify scripts, for transpiling CUDA
-    into AMD HIP.
+* [tools](tools) - Code generation scripts for the PyTorch library.
+  See README of this directory for more details.
 * [test](test) - Python unit tests for PyTorch Python frontend
 * [test_torch.py](test/test_torch.py) - Basic tests for PyTorch
   functionality

tools/README.md

@@ -0,0 +1,72 @@
This folder contains a number of scripts which are used as
part of the PyTorch build process. This directory also doubles
as a Python module hierarchy (thus the `__init__.py`).

## Overview

Modern infrastructure:
* [autograd](tools/autograd) - Code generation for autograd. This
includes definitions of all our derivatives.
* [jit](tools/jit) - Code generation for JIT
* [shared](tools/shared) - Generic infrastructure that scripts in
tools may find useful.
* [module_loader.py](tools/shared/module_loader.py) - Makes it easier
to import arbitrary Python files in a script, without having to add
them to the PYTHONPATH first.
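The general technique behind this can be sketched with the standard `importlib` machinery; this is a minimal illustration of importing a Python file by path, not the actual `module_loader.py` API:

```python
# Sketch: import a Python file from an arbitrary path without adding
# its directory to PYTHONPATH. Details of the real module_loader.py
# may differ; this only shows the underlying importlib idiom.
import importlib.util


def import_module(name, path):
    # Build a module spec directly from the file's location...
    spec = importlib.util.spec_from_file_location(name, path)
    module = importlib.util.module_from_spec(spec)
    # ...and execute the file's code in the fresh module's namespace.
    spec.loader.exec_module(module)
    return module
```

After `mod = import_module("helpers", "/some/dir/helpers.py")`, the file's top-level names are available as attributes of `mod`.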

Legacy infrastructure (we should kill this):
* [nnwrap](tools/nnwrap) - Generates the THNN/THCUNN wrappers which make
legacy functionality available. (TODO: What exactly does this
implement?)
* [cwrap](tools/cwrap) - Implementation of legacy code generation for THNN/THCUNN.
This is used by nnwrap.

Build system pieces:
* [setup_helpers](tools/setup_helpers) - Helper code for searching for
third-party dependencies on the user system.
* [build_pytorch_libs.sh](tools/build_pytorch_libs.sh) - Script that
builds all of the constituent libraries of PyTorch, but not the
PyTorch Python extension itself. We are working on eliminating this
script in favor of a unified cmake build.
* [build_pytorch_libs.bat](tools/build_pytorch_libs.bat) - Same as
above, but for Windows.
* [build_libtorch.py](tools/build_libtorch.py) - Script for building
libtorch, a standalone C++ library without Python support. This
build script is tested in CI.

Developer tools which you might find useful:
* [clang_tidy.py](tools/clang_tidy.py) - Script for running clang-tidy
on lines of your script which you changed.
* [git_add_generated_dirs.sh](tools/git_add_generated_dirs.sh) and
[git_reset_generated_dirs.sh](tools/git_reset_generated_dirs.sh) -
Use this to force add generated files to your Git index, so that you
can conveniently run diffs on them when working on code-generation.
(See also [generated_dirs.txt](tools/generated_dirs.txt) which
specifies the list of directories with generated files.)

Important if you want to run on AMD GPU:
* [amd_build](tools/amd_build) - HIPify scripts, for transpiling CUDA
into AMD HIP. Right now, PyTorch and Caffe2 share logic for how to
do this transpilation, but have separate entry-points for transpiling
either PyTorch or Caffe2 code.
* [build_caffe2_amd.py](tools/amd_build/build_caffe2_amd.py) - Script
for HIPifying the Caffe2 codebase.
* [build_pytorch_amd.py](tools/amd_build/build_pytorch_amd.py) - Script
for HIPifying the PyTorch codebase.
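At its core, HIPify is a source-to-source rewrite driven by a table of CUDA-to-HIP name mappings. A toy sketch of the idea (the three mappings shown are real HIP API names, but the actual scripts use far larger tables and handle headers, kernel launches, and more):

```python
import re

# Toy CUDA -> HIP identifier table; illustrative only.
CUDA_TO_HIP = {
    "cudaMalloc": "hipMalloc",
    "cudaMemcpy": "hipMemcpy",
    "cudaFree": "hipFree",
}

# Match whole identifiers only, so e.g. "mycudaMallocWrapper" is untouched.
_PATTERN = re.compile(r"\b(" + "|".join(map(re.escape, CUDA_TO_HIP)) + r")\b")


def hipify(source):
    # Substitute each CUDA identifier with its HIP equivalent.
    return _PATTERN.sub(lambda m: CUDA_TO_HIP[m.group(1)], source)
```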

Tools which are only situationally useful:
* [aten_mirror.sh](tools/aten_mirror.sh) - Mirroring script responsible
for keeping https://github.com/zdevito/ATen up-to-date.
* [docker](tools/docker) - Dockerfile for running (but not developing)
PyTorch, using the official conda binary distribution. Context:
https://github.com/pytorch/pytorch/issues/1619
* [download_mnist.py](tools/download_mnist.py) - Download the MNIST
dataset; this is necessary if you want to run the C++ API tests.
* [run-clang-tidy-in-ci.sh](tools/run-clang-tidy-in-ci.sh) - Responsible
  for checking that C++ code is clang-tidy clean in CI on Travis.

@@ -1,3 +1,16 @@
"""
To run this file by hand from the root of the PyTorch
repository, run:
python -m tools.autograd.gen_autograd \
build/aten/src/ATen/Declarations.yaml \
$OUTPUT_DIR
Where $OUTPUT_DIR is where you would like the files to be
generated. In the full build system, OUTPUT_DIR is
torch/csrc/autograd/generated/
"""
# gen_autograd.py generates C++ autograd functions and Python bindings.
#
# It delegates to the following scripts:

@@ -1,52 +0,0 @@
"Slightly adjust indentation
%s/^ / /g
" # -> len
%s/#\(\S*\) /len(\1)/g
" for loops
%s/for\( \)\{-\}\(\S*\)\( \)\{-\}=\( \)\{-\}\(\S*\),\( \)\{-\}\(\S*\)\( \)\{-\}do/for \2 in range(\5, \7+1)/g
" Change comments
%s/--\[\[/"""/g
%s/]]/"""/g
%s/--/#/g
" Add spacing between commas
%s/\(\S\),\(\S\)/\1, \2/g
%s/local //g
%s/ then/:/g
%s/ do/:/g
%s/end//g
%s/elseif/elif/g
%s/else/else:/g
%s/true/True/g
%s/false/False/g
%s/\~=/!=/g
%s/math\.min/min/g
%s/math\.max/max/g
%s/math\.abs/abs/g
%s/__init/__init__/g
" Rewrite function declarations
%s/function \w*:\(\w*\)/ def \1/g
%s/def \(.*\)$/def \1:/g
" class declaration
%s/\(\w*\), parent = torch\.class.*$/import torch\rfrom torch.legacy import nn\r\rclass \1(nn.Module):/g
%s/input\.THNN/self._backend/g
%s/\(self\.backend\w*$\)/\1\r self._backend.library_state,/g
%s/def \(\w*\)(/def \1(self, /g
%s/__init__(self)/__init__()/g
%s/:\(\S\)/.\1/g
%s/\.cdata()//g
%s/THNN\.optionalTensor(\(.*\))/\1/g
%s/parent\./super(##, self)./g

@@ -1,24 +0,0 @@
#!/bin/sh
# This script should be executed in pytorch root folder.
TEMP_DIR=tools/temp
set -ex
# Assumed to be run like tools/gen_onnx.sh
(cd torch/lib/nanopb/generator/proto && make)
# It always searches the same dir as the proto, so
# we have got to copy the option file over
mkdir -p $TEMP_DIR
cp torch/csrc/onnx/onnx.options $TEMP_DIR/onnx.options
wget https://raw.githubusercontent.com/onnx/onnx/master/onnx/onnx.proto -O $TEMP_DIR/onnx.proto
protoc --plugin=protoc-gen-nanopb=$PWD/torch/lib/nanopb/generator/protoc-gen-nanopb \
$TEMP_DIR/onnx.proto \
--nanopb_out=-T:.
# NB: -T suppresses timestamp. See https://github.com/nanopb/nanopb/issues/274
# nanopb generated C files are valid CPP! Yay!
cp $TEMP_DIR/onnx.pb.c torch/csrc/onnx/onnx.npb.cpp
sed -i s'/\(#include.*onnx\).pb.h/\1.npb.h/' torch/csrc/onnx/onnx.npb.cpp
cp $TEMP_DIR/onnx.pb.h torch/csrc/onnx/onnx.npb.h
rm -r $TEMP_DIR

@@ -1,3 +1,17 @@
"""
To run this file by hand from the root of the PyTorch
repository, run:
python -m tools.jit.gen_jit_dispatch \
build/aten/src/ATen/Declarations.yaml \
$OUTPUT_DIR \
tools/jit/templates
Where $OUTPUT_DIR is where you would like the files to be
generated. In the full build system, OUTPUT_DIR is
torch/csrc/jit/generated/
"""
import os
import argparse
import re