pytorch/tools

This folder contains a number of scripts that are used as part of the PyTorch build process. The directory also doubles as a Python module hierarchy (hence the __init__.py).

Overview

Modern infrastructure:

  • autograd - Code generation for autograd. This includes definitions of all our derivatives.
  • jit - Code generation for the JIT.
  • shared - Generic infrastructure that scripts in tools may find useful.
    • module_loader.py - Makes it easier to import arbitrary Python files in a script, without having to add them to the PYTHONPATH first.
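The kind of path-based import that module_loader.py provides can be sketched with the standard importlib machinery. This is an illustrative sketch of the technique, not the actual implementation in tools/shared, whose function names and signature may differ:

```python
import importlib.util
import sys

def import_module(name, path):
    # Load a Python source file from an arbitrary filesystem path,
    # without requiring the file's directory to be on PYTHONPATH.
    # (Hypothetical helper name; the real module_loader.py may differ.)
    spec = importlib.util.spec_from_file_location(name, path)
    module = importlib.util.module_from_spec(spec)
    # Register before executing so the module can import itself reentrantly.
    sys.modules[name] = module
    spec.loader.exec_module(module)
    return module
```

A caller hands this a module name and a file path and gets back a fully executed module object, the same way a normal import would.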

Build system pieces:

  • setup_helpers - Helper code for searching for third-party dependencies on the user system.
  • build_pytorch_libs.py - Cross-platform script that builds all of the constituent libraries of PyTorch, but not the PyTorch Python extension itself.
  • build_libtorch.py - Script for building libtorch, a standalone C++ library without Python support. This build script is tested in CI.

Developer tools which you might find useful:

Important if you want to run on AMD GPU:

  • amd_build - HIPify scripts, for transpiling CUDA into AMD HIP. Right now, PyTorch and Caffe2 share logic for how to do this transpilation, but have separate entry-points for transpiling either PyTorch or Caffe2 code.
    • build_amd.py - Top-level entry point for HIPifying our codebase.
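At its core, HIPify is a source-to-source rename of CUDA identifiers to their HIP equivalents. The sketch below shows the idea with a tiny hand-picked mapping; the real tables in amd_build cover thousands of APIs, headers, and kernel-launch constructs, so this is illustrative only:

```python
import re

# Illustrative subset of the CUDA -> HIP mapping (the real HIPify
# tables in tools/amd_build are far larger).
CUDA_TO_HIP = {
    "cudaMalloc": "hipMalloc",
    "cudaFree": "hipFree",
    "cudaMemcpy": "hipMemcpy",
    "cuda_runtime.h": "hip/hip_runtime.h",
}

def hipify(source: str) -> str:
    # Build one alternation, longest names first, so a short name
    # never clobbers a longer name that contains it as a prefix.
    pattern = re.compile(
        "|".join(re.escape(k) for k in sorted(CUDA_TO_HIP, key=len, reverse=True))
    )
    return pattern.sub(lambda m: CUDA_TO_HIP[m.group(0)], source)
```

For example, hipify("cudaMalloc(&p, n);") yields "hipMalloc(&p, n);".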

Tools which are only situationally useful: