# PyTorch Benchmarks

This folder contains scripts that produce reproducible timings of various PyTorch features. It also provides mechanisms to compare PyTorch with other frameworks.

## Setup environment

Make sure you're on a machine with CUDA, torchvision, and PyTorch installed. Install in the following order:

```bash
# Install torchvision. It comes with the pytorch stable release binary.
conda install pytorch torchvision -c pytorch

# Build and install the latest pytorch master from source.
# It should supersede the installation from the release binary.
cd $PYTORCH_HOME
python setup.py build develop

# Check the pytorch installation version.
python -c "import torch; print(torch.__version__)"
```

## Benchmark List

Please refer to each subfolder to discover its benchmark suite. Links are provided where descriptions exist:
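Though each suite times different features, "reproducible timings" generally means the same discipline throughout: warm up first, time many repeated runs, and report a robust statistic rather than a single measurement. A minimal stdlib-only sketch of that pattern (the `work` function here is a hypothetical placeholder, not part of any suite; the real suites time PyTorch operations):

```python
import statistics
import timeit


def work():
    # Placeholder workload; a real benchmark would run a PyTorch op here.
    sum(i * i for i in range(10_000))


def measure(fn, warmup=3, repeats=10, number=100):
    """Warm up, then return the median per-call time in seconds.

    The warmup runs absorb one-time costs (caches, lazy initialization),
    and taking the median over several repeats damps outliers from
    scheduler noise, making runs more comparable across machines.
    """
    for _ in range(warmup):
        fn()
    times = timeit.repeat(fn, number=number, repeat=repeats)
    return statistics.median(t / number for t in times)


print(f"median per-call time: {measure(work) * 1e6:.1f} us")
```

The suites themselves typically use `torch.utils.benchmark` or custom harnesses for this, but the warmup-then-median structure above is the common core.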