With https://github.com/pytorch/xla/pull/5148 and https://github.com/pytorch/xla/pull/4740:

- XLA:GPU users should use `torch.cuda.amp.autocast()` for AMP with float16.
- XLA:TPU users should use `torch.amp.autocast('xla')` for AMP with bfloat16.

Pull Request resolved: https://github.com/pytorch/pytorch/pull/96370
Approved by: https://github.com/bdhirsh, https://github.com/malfet
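A minimal sketch of the API shape the commit message describes. Since an XLA device is not assumed to be available here, the TPU-style call `torch.amp.autocast('xla')` is stood in for by the analogous CPU autocast (`torch.amp.autocast('cpu')`, whose default low-precision dtype is also bfloat16); the device string and surrounding code are illustrative, not taken from the PR itself.

```python
import torch

# Per the commit message:
#   XLA:GPU -> torch.cuda.amp.autocast()   (AMP with float16)
#   XLA:TPU -> torch.amp.autocast('xla')   (AMP with bfloat16)
# Without an XLA device, 'cpu' stands in below to show the same API shape.

a = torch.randn(8, 8)
b = torch.randn(8, 8)

# On a TPU this would be: with torch.amp.autocast('xla'):
with torch.amp.autocast('cpu', dtype=torch.bfloat16):
    out = a @ b  # matmul is autocast-eligible, so it runs in bfloat16

print(out.dtype)
```

On CPU this prints `torch.bfloat16`, mirroring how bfloat16 autocast would behave on an XLA:TPU device under the new API.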