Latest commit — Pull Request resolved: https://github.com/pytorch/pytorch/pull/27010

Summary: Setting `OMP_NUM_THREADS` programmatically doesn't do the right thing because OpenMP initialization is already done by that point. This is fixed by calling `torch.set_num_threads` explicitly. Passing `--omp_num_threads` works as expected now.

In dir `benchmarks/operator_benchmark/`:

```
python -m pt.qconv_test --tag_filter resnext101_32x4 --wipe_cache --test_name QConv2d_N1_IC64_OC128_H56_W56_G1_kernel1_stride1_pad0 --omp_num_threads 1
```

```
# ----------------------------------------
# PyTorch/Caffe2 Operator Micro-benchmarks
# ----------------------------------------
# Tag : None

# Benchmarking PyTorch: QConv2d
# Mode: Eager
# Name: QConv2d_N1_IC64_OC128_H56_W56_G1_kernel1_stride1_pad0
# Input: N: 1, IC: 64, OC: 128, H: 56, W: 56, G: 1, kernel: 1, stride: 1, pad: 0
Forward Execution Time (us) : 509.965

# Benchmarking PyTorch: QConv2d
# Mode: Eager
# Name: QConv2d_N1_IC64_OC128_H56_W56_G1_kernel1_stride1_pad0
# Input: N: 1, IC: 64, OC: 128, H: 56, W: 56, G: 1, kernel: 1, stride: 1, pad: 0
Forward Execution Time (us) : 576.007
```

```
python -m pt.qconv_test --tag_filter resnext101_32x4 --wipe_cache --test_name QConv2d_N1_IC64_OC128_H56_W56_G1_kernel1_stride1_pad0 --omp_num_threads 4
```

```
# ----------------------------------------
# PyTorch/Caffe2 Operator Micro-benchmarks
# ----------------------------------------
# Tag : None

# Benchmarking PyTorch: QConv2d
# Mode: Eager
# Name: QConv2d_N1_IC64_OC128_H56_W56_G1_kernel1_stride1_pad0
# Input: N: 1, IC: 64, OC: 128, H: 56, W: 56, G: 1, kernel: 1, stride: 1, pad: 0
Forward Execution Time (us) : 195.002

# Benchmarking PyTorch: QConv2d
# Mode: Eager
# Name: QConv2d_N1_IC64_OC128_H56_W56_G1_kernel1_stride1_pad0
# Input: N: 1, IC: 64, OC: 128, H: 56, W: 56, G: 1, kernel: 1, stride: 1, pad: 0
Forward Execution Time (us) : 189.788
```

ghstack-source-id: 91050434
Test Plan: See summary
Differential Revision: D17647391
fbshipit-source-id: e00de1151902291ed94fd34446995ea1f0199d14
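The pattern behind the fix described above can be sketched as follows. This is a minimal illustration, not code from the repository: `apply_thread_setting` and the recording setter are hypothetical stand-ins for the benchmark harness calling `torch.set_num_threads`.

```python
# Sketch of the fix above: once a process is running, the OpenMP runtime has
# typically already read OMP_NUM_THREADS, so mutating os.environ is too late.
# The reliable route is an explicit API call (torch.set_num_threads in
# PyTorch). apply_thread_setting below is a hypothetical helper, not the
# actual harness code.

def apply_thread_setting(num_threads, setter):
    """Apply a --omp_num_threads-style flag via an explicit setter call."""
    if num_threads is not None and num_threads > 0:
        setter(num_threads)
        return num_threads
    return None  # leave the runtime's default thread count untouched

# Example with a recording setter standing in for torch.set_num_threads:
calls = []
applied = apply_thread_setting(4, calls.append)
print(applied, calls)  # prints: 4 [4]
```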
Folder contents:

- fastrnns
- framework_overhead_benchmark
- operator_benchmark
- README.md
# PyTorch Benchmarks

NOTE: This folder is currently a work in progress.

This folder contains scripts that produce reproducible timings of various PyTorch features. It also provides mechanisms to compare PyTorch with other frameworks.
## Setup environment

Make sure you are on a machine with CUDA available, then install PyTorch and torchvision in the following order:
```bash
# Install torchvision. It comes with the pytorch stable release binary.
conda install pytorch torchvision -c pytorch

# Install the latest pytorch master from source.
# It should supersede the installation from the release binary.
cd $PYTORCH_HOME
python setup.py build develop

# Check the pytorch installation version.
python -c "import torch; print(torch.__version__)"
```
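After the steps above, you can sanity-check that the required packages resolve from the current environment without actually importing them (importing `torch` pulls in the full runtime). This is a sketch; the package list is an assumption based on this README.

```python
import importlib.util

def check_installed(packages):
    """Return {package_name: importable?} without importing the packages."""
    return {name: importlib.util.find_spec(name) is not None for name in packages}

# Packages this README assumes (torchvision comes with the conda install above):
status = check_installed(["torch", "torchvision"])
for name, ok in status.items():
    print(f"{name}: {'found' if ok else 'MISSING'}")
```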
## Benchmark List

Please refer to each subfolder to discover each benchmark suite.