This folder contains a number of scripts which are used as part of the PyTorch build process. This directory also doubles as a Python module hierarchy (thus the `__init__.py`).

## Overview
Modern infrastructure:

- `autograd` - Code generation for autograd. This includes definitions of all our derivatives.
- `jit` - Code generation for JIT.
- `shared` - Generic infrastructure that scripts in `tools` may find useful.
  - `module_loader.py` - Makes it easier to import arbitrary Python files in a script, without having to add them to the `PYTHONPATH` first.
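The idea behind a module loader like the one above can be sketched with the standard `importlib` machinery. This is a minimal illustration of the technique (importing a Python file by path, without touching `PYTHONPATH`), not the actual API of `module_loader.py`; the helper name `import_path` is an assumption for this example.

```python
# Sketch: import a Python source file by filesystem path, bypassing
# PYTHONPATH. The helper name `import_path` is hypothetical.
import importlib.util
import os
import tempfile


def import_path(name: str, path: str):
    # Build a module spec directly from the file location, create the
    # module object from it, and execute the module's code.
    spec = importlib.util.spec_from_file_location(name, path)
    module = importlib.util.module_from_spec(spec)
    spec.loader.exec_module(module)
    return module


if __name__ == "__main__":
    # Demonstrate with a throwaway file that lives outside any package.
    with tempfile.TemporaryDirectory() as d:
        path = os.path.join(d, "standalone.py")
        with open(path, "w") as f:
            f.write("ANSWER = 42\n")
        mod = import_path("standalone", path)
        print(mod.ANSWER)  # 42
```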
Legacy infrastructure (we should kill this):
- `cwrap` - Implementation of legacy code generation for THNN/THCUNN. This is used by `nnwrap`.
Build system pieces:

- `setup_helpers` - Helper code for searching for third-party dependencies on the user's system.
- `build_pytorch_libs.py` - Cross-platform script that builds all of the constituent libraries of PyTorch, but not the PyTorch Python extension itself.
- `build_libtorch.py` - Script for building libtorch, a standalone C++ library without Python support. This build script is tested in CI.
- `fast_nvcc` - Mostly-transparent wrapper over nvcc that parallelizes compilation when used to build CUDA files for multiple architectures at once.
  - `fast_nvcc.py` - Python script, entry point to the fast nvcc wrapper.
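The core observation behind such a wrapper is that when nvcc targets several GPU architectures, the per-architecture device compilations are independent and can run concurrently. The sketch below illustrates only that parallelization idea; the commands are placeholders, not real nvcc invocations, and `compile_for_arch`/`parallel_compile` are hypothetical names, not the wrapper's actual interface.

```python
# Sketch of the parallelization idea: run independent per-architecture
# "compile" steps concurrently instead of one after another. The commands
# here are placeholders; a real wrapper would invoke nvcc per arch.
import subprocess
import sys
from concurrent.futures import ThreadPoolExecutor


def compile_for_arch(arch: str) -> int:
    # Placeholder for one per-architecture device compilation step.
    cmd = [sys.executable, "-c", f"print('compiled for {arch}')"]
    return subprocess.run(cmd, check=False).returncode


def parallel_compile(archs):
    # Launch every per-arch step at once and collect return codes.
    with ThreadPoolExecutor(max_workers=len(archs)) as pool:
        return list(pool.map(compile_for_arch, archs))


if __name__ == "__main__":
    codes = parallel_compile(["sm_60", "sm_70", "sm_80"])
    print(codes)  # [0, 0, 0] when every step succeeds
```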
Developer tools which you might find useful:

- `clang_tidy.py` - Script for running clang-tidy on the lines of your changes.
- `git_add_generated_dirs.sh` and `git_reset_generated_dirs.sh` - Use these to force add generated files to your Git index, so that you can conveniently run diffs on them when working on code generation. (See also `generated_dirs.txt`, which specifies the list of directories containing generated files.)
- `test_history.py` - Query S3 to display the history of a single test across multiple jobs over time.
Important if you want to run on AMD GPU:

- `amd_build` - HIPify scripts, for transpiling CUDA into AMD HIP. Right now, PyTorch and Caffe2 share logic for how to do this transpilation, but have separate entry points for transpiling either PyTorch or Caffe2 code.
  - `build_amd.py` - Top-level entry point for HIPifying our codebase.
Tools which are only situationally useful:

- `docker` - Dockerfile for running (but not developing) PyTorch, using the official conda binary distribution. Context: https://github.com/pytorch/pytorch/issues/1619
- `download_mnist.py` - Download the MNIST dataset; this is necessary if you want to run the C++ API tests.
- `run-clang-tidy-in-ci.sh` - Responsible for checking in CI (on Travis) that C++ code is clang-tidy clean.