
# PyTorch Benchmarks

NOTE: This folder is currently work in progress.

This folder contains scripts that produce reproducible timings of various PyTorch features.

It also provides mechanisms to compare PyTorch with other frameworks.

## Setup environment

These benchmarks require a machine with CUDA available. Install PyTorch and torchvision in the following order:

```shell
# Install torchvision; it comes with the PyTorch stable release binary.
conda install pytorch torchvision -c pytorch

# Install the latest PyTorch master from source.
# It should supersede the installation from the release binary.
cd $PYTORCH_HOME
python setup.py build develop

# Check the installed PyTorch version.
python -c "import torch; print(torch.__version__)"
```
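To double-check which PyTorch installation Python resolves (the source build should shadow the conda binary), you can inspect the module's origin. This is a stdlib-only sketch, not part of the benchmark scripts; it works even if `torch` is not installed:

```python
import importlib.util

# Locate the "torch" module Python would import, without importing it.
# If the source build is active, the origin path should live under
# $PYTORCH_HOME rather than the conda environment's site-packages.
spec = importlib.util.find_spec("torch")
if spec is None:
    print("torch is not installed")
else:
    print(f"torch resolves to: {spec.origin}")
```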

## Benchmark List

Please refer to each subfolder to discover each benchmark suite.
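The suites in this folder all follow the same basic pattern for producing reproducible timings: warm up the operation, time many iterations, and report a robust summary statistic. The following is a minimal stdlib-only sketch of that pattern; the `benchmark` helper and its parameters are illustrative, not the actual API of any suite here:

```python
import time
import statistics

def benchmark(fn, warmup=5, iters=20):
    """Time fn with warmup runs, then return the median latency in microseconds."""
    # Warmup runs let caches, allocators, and JITs settle before measuring.
    for _ in range(warmup):
        fn()
    samples = []
    for _ in range(iters):
        start = time.perf_counter()
        fn()
        samples.append((time.perf_counter() - start) * 1e6)
    # The median is less sensitive to scheduling noise than the mean.
    return statistics.median(samples)

# Example: time a pure-Python workload (a stand-in for a PyTorch op).
median_us = benchmark(lambda: sum(range(10_000)))
print(f"median latency (us): {median_us:.3f}")
```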