pytorch/benchmarks
Latest commit b21a6ff639 (jjsjann123): [NVFuser] Upstream push 0811 (#83239)
Syncing nvfuser devel branch to upstream master. https://github.com/csarofeen/pytorch/

Code changes include:

- codegen improvements:
  1. double support in expression evaluator
- bug fixes:
  1. dropout fix - rework RNG to support broadcasted dropout (Fixes #82784)
  2. expand fix - Patch expand+reduction, expand+view, rework view analysis and guard
- scheduler:
  1. manual transpose schedule example
  2. WIP transpose scheduler

Commits in this PR from the devel branch:

```
b7435afcd22c917713c2f41a7237bc26e1183f14 Transpose scheduler, step 1 (#1854)
8a45dbf72034684eb8e18b1835b533e90b68f184 Add an example on how to manually schedule transpose (#1889)
83dbf56a9554b2efbd5416461d938fff477b0b27 Patch dropout fix (#1898)
69d3519a532250719b1aa8341b50e067b181b42d Expand+Reduction, Expand+View support, rework View analysis and guards (#1883)
15091c488e96343bdc49e3990acbf238a3b3da51 Rework RNG to correctly support broadcasted dropout (#1888)
aafe2d048aaac596e503596a41303423619f3954 Make ExpressionEvaluator support Double (#1885)
```

RUN_TORCHBENCH: nvfuser

Differential Revision: [D38657074](https://our.internmc.facebook.com/intern/diff/D38657074)
Pull Request resolved: https://github.com/pytorch/pytorch/pull/83239
Approved by: https://github.com/davidberard98
2022-08-25 02:23:22 +00:00
Directory contents:

- cpp
- distributed
- fastrnns
- framework_overhead_benchmark
- functional_autograd_benchmark
- fuser
- instruction_counts
- operator_benchmark
- overrides_benchmark
- profiler_benchmark
- record_function_benchmark
- serialization
- sparse
- static_runtime
- tensorexpr
- compare-fastrnn-results.py
- compare.sh
- README.md
- upload_scribe.py

PyTorch Benchmarks

This folder contains scripts that produce reproducible timings of various PyTorch features.

It also provides mechanisms to compare PyTorch with other frameworks.

Setup environment

Make sure you're on a machine with CUDA available. Install torchvision and PyTorch in the following order:

```
# Install torchvision. It comes with the PyTorch stable release binary.
conda install pytorch torchvision -c pytorch

# Build and install the latest PyTorch master from source.
# It should supersede the installation from the release binary.
cd $PYTORch_HOME
python setup.py build develop

# Check the installed PyTorch version.
python -c "import torch; print(torch.__version__)"
```
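To confirm which build the version check picked up, you can inspect the version string itself. As a small illustrative sketch (the `is_source_build` helper is hypothetical, and it assumes PyTorch's usual convention that source builds report versions like `1.13.0a0+gitb21a6ff` while release binaries report plain versions like `1.12.1`):

```python
# Hypothetical helper: distinguish a from-source PyTorch build from a
# release binary by inspecting torch.__version__. Source builds
# conventionally carry an "a0" pre-release marker and/or a "+git<sha>"
# local version suffix; release binaries use a plain X.Y.Z version.
def is_source_build(version: str) -> bool:
    return "a0" in version or "+git" in version

# Examples using literal version strings (no torch import needed):
print(is_source_build("1.13.0a0+gitb21a6ff"))  # source build -> True
print(is_source_build("1.12.1"))               # release binary -> False
```

If the check reports a release version after building from source, the `develop` install likely did not take precedence on `PYTHONPATH`.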

Benchmark List

Please refer to each subfolder for details on its benchmark suite.