pytorch/.github/ci_commit_pins
Simon Fan 54c5f474a7 Forward rank and world size info to Torchbench models when using dynamo runner (#108438)
Adds support for passing rank and world_size to the torchbench model via its extra_args parameter: https://github.com/pytorch/benchmark/blob/main/torchbenchmark/util/model.py#L83C80-L83C90
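
A minimal sketch of what forwarding rank and world_size through an extra_args list could look like. The helper name and flag spellings below are illustrative assumptions, not the runner's actual implementation:

```python
# Hypothetical sketch: append distributed-run info to a torchbench
# extra_args list. Flag names here are assumptions, not the actual
# dynamo runner code.
def build_extra_args(rank, world_size, extra_args=None):
    """Return a new extra_args list with rank/world_size flags appended."""
    args = list(extra_args or [])
    args += ["--rank", str(rank), "--world_size", str(world_size)]
    return args

# Example: a model process running as rank 1 of a 4-GPU job.
print(build_extra_args(1, 4))
```

The model would then parse these flags out of extra_args when it sets up its distributed process group.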

This is used for models that are distributed across multiple GPUs, e.g. simple_gpt: https://github.com/pytorch/benchmark/pull/1867

Also adds an option to skip multiprocess-only GPU models.
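
The skip logic can be sketched roughly as follows; the function and attribute names are hypothetical, chosen only to illustrate the idea:

```python
# Hypothetical sketch of the skip decision: a model that only runs
# under multiprocess is skipped unless the runner was launched with
# the --multiprocess flag. Names are illustrative assumptions.
def should_skip_model(model_requires_multiprocess, multiprocess_enabled):
    """True when a multiprocess-only model can't run in this configuration."""
    return model_requires_multiprocess and not multiprocess_enabled

# A multiprocess-only model without --multiprocess gets skipped;
# with the flag set, it runs.
print(should_skip_model(True, False), should_skip_model(True, True))
```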

Tested via `python benchmarks/dynamo/torchbench.py -d cuda --output=benchmark_logs/performance.csv --inference --performance --timing --print-memory --multiprocess --only simple_gpt`

Pull Request resolved: https://github.com/pytorch/pytorch/pull/108438
Approved by: https://github.com/Chillee
2023-09-14 21:01:20 +00:00
audio.txt
data.txt
fbgemm.txt Install torchrec/fbgemm from source in CI (#106808) 2023-08-12 02:08:44 +00:00
multipy.txt upgrade multipy to latest master there (#105344) 2023-07-17 22:15:03 +00:00
numpy_pytorch_interop.txt [dynamo][numpy] Install numpy_pytorch_interop in ci jobs (#103447) 2023-06-13 01:14:19 +00:00
text.txt
torchbench.txt Forward rank and world size info to Torchbench models when using dynamo runner (#108438) 2023-09-14 21:01:20 +00:00
torchrec.txt Install torchrec/fbgemm from source in CI (#106808) 2023-08-12 02:08:44 +00:00
triton.txt
vision.txt [vision hash update] update the pinned vision hash (#108818) 2023-09-08 04:04:06 +00:00
xla.txt Minor fixs to make torchbench runable on torch/xla (#107919) 2023-09-06 22:35:53 +00:00