This folder contains a number of scripts that are used as part of the PyTorch build process. This directory also doubles as a Python module hierarchy (hence the __init__.py).

Overview

Modern infrastructure:

  • autograd - Code generation for autograd. This includes definitions of all our derivatives.
  • jit - Code generation for the JIT.
  • shared - Generic infrastructure that scripts in tools may find useful.
    • module_loader.py - Makes it easier to import arbitrary Python files in a script, without having to add them to the PYTHONPATH first.
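To illustrate the idea (this is a minimal sketch using the standard importlib machinery, not the actual module_loader.py API, which may differ), a Python file can be imported by its path without touching PYTHONPATH. The names import_file, demo_module, and ANSWER below are hypothetical:

```python
import importlib.util
import os
import tempfile

def import_file(module_name, file_path):
    # Load a Python source file by path; no PYTHONPATH changes needed.
    spec = importlib.util.spec_from_file_location(module_name, file_path)
    module = importlib.util.module_from_spec(spec)
    spec.loader.exec_module(module)
    return module

# Demo: write a throwaway module to disk, then import it by path.
with tempfile.NamedTemporaryFile("w", suffix=".py", delete=False) as f:
    f.write("ANSWER = 42\n")
    path = f.name
mod = import_file("demo_module", path)
os.unlink(path)
```

This is the same mechanism a script can use to pull in a generated file or a sibling tool without any sys.path manipulation.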

Legacy infrastructure (we should kill this):

  • cwrap - Implementation of legacy code generation for THNN/THCUNN. This is used by nnwrap.

Build system pieces:

  • setup_helpers - Helper code for searching for third-party dependencies on the user system.
  • build_pytorch_libs.py - Cross-platform script that builds all of the constituent libraries of PyTorch, but not the PyTorch Python extension itself.
  • build_libtorch.py - Script for building libtorch, a standalone C++ library without Python support. This build script is tested in CI.

Developer tools which you might find useful:

Important if you want to run on AMD GPU:

  • amd_build - HIPify scripts, for transpiling CUDA into AMD HIP. Right now, PyTorch and Caffe2 share logic for how to do this transpilation, but have separate entry-points for transpiling either PyTorch or Caffe2 code.
    • build_amd.py - Top-level entry point for HIPifying our codebase.
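As a rough sketch of what "HIPifying" means at the string level: CUDA API identifiers are rewritten to their HIP equivalents. The real tooling uses much larger mapping tables and also handles headers, kernel launch syntax, and more; the small CUDA_TO_HIP table and hipify function below are illustrative only:

```python
import re

# Three example entries; the actual HIPify scripts carry far larger tables.
CUDA_TO_HIP = {
    "cudaMalloc": "hipMalloc",
    "cudaMemcpy": "hipMemcpy",
    "cudaFree": "hipFree",
}

def hipify(source: str) -> str:
    # Replace every known CUDA identifier with its HIP counterpart.
    pattern = re.compile("|".join(re.escape(k) for k in CUDA_TO_HIP))
    return pattern.sub(lambda m: CUDA_TO_HIP[m.group(0)], source)
```

Running this over a CUDA snippet rewrites the API calls while leaving everything else untouched, which is why the transpilation can be applied mechanically across a large codebase.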

Tools which are only situationally useful: