Mirror of https://github.com/saymrwulf/pytorch.git (synced 2026-05-15 21:00:47 +00:00)
Summary:

* Improve Docker packages: install OpenBLAS for at-compile-time LAPACK functionality, with optimizations for both Intel and AMD CPUs.
* Integrate rocFFT (i.e., enable Fourier functionality).
* Fix bugs in ROCm caused by a wrong warp size.
* Enable more test sets; skip the tests that don't work on ROCm yet.
* No longer disable asserts during hipification.
* Small improvements.

Pull Request resolved: https://github.com/pytorch/pytorch/pull/10893
Differential Revision: D9615053
Pulled By: ezyang
fbshipit-source-id: 864b4d27bf089421f7dfd8065e5017f9ea2f7b3b
Directory contents:

- jenkins/
- ubuntu-14.04-cpu-all-options/
- ubuntu-14.04-cpu-minimal/
- ubuntu-16.04-cpu-all-options/
- ubuntu-16.04-cpu-minimal/
- ubuntu-16.04-cuda8-cudnn6-all-options/
- ubuntu-16.04-cuda8-cudnn7-all-options/
- ubuntu-16.04-gpu-tutorial/
- readme.md
# Docker & Caffe2
Note: use nvidia-docker to run all GPU builds.
To get the latest source, rerun the Docker builds using the Dockerfiles in this directory.
The Docker images at https://hub.docker.com/r/caffe2ai/caffe2/ are a few months old, but will be refreshed soon.
Build like:

```
docker build -t caffe2:cuda8-cudnn6-all-options .
```

Run like:

```
nvidia-docker run --rm -it caffe2:cuda8-cudnn6-all-options python -m caffe2.python.operator_test.relu_op_test
```
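As an alternative to building locally, you can pull a prebuilt image from the Docker Hub repository linked above and run the same sanity-check test. This is a sketch: the `:latest` tag is an assumption, and the tags actually published may differ, so check the repository's Tags page first.

```shell
# Pull a prebuilt Caffe2 image from Docker Hub (image name taken from the
# hub URL above; the ":latest" tag is an assumption, verify before use).
docker pull caffe2ai/caffe2:latest

# Sanity-check the install by running a single operator test inside the
# container. For GPU images, substitute nvidia-docker for docker.
docker run --rm -it caffe2ai/caffe2:latest \
    python -m caffe2.python.operator_test.relu_op_test
```

If the test passes, the container's Caffe2 installation and its Python bindings are working.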
For instructions on running Docker from a USB drive, see the gh-pages branch of this repository.