pytorch/c10
Meghan 6ff4548b6e [AMP] Support XLA:TPU (#96370)
Together with https://github.com/pytorch/xla/pull/5148 and https://github.com/pytorch/xla/pull/4740, these changes mean:

XLA:GPU users should use `torch.cuda.amp.autocast()` for AMP with float16.
XLA:TPU users should use `torch.amp.autocast('xla')` for AMP with bfloat16.
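The two autocast entry points above can be sketched as a single helper. This is a minimal illustration, not code from the PR: `amp_forward`, `model`, and `inputs` are hypothetical names, and the `'xla'` path additionally requires the `torch_xla` package and a TPU device.

```python
import torch

def amp_forward(model, inputs, device_type):
    # Illustrative helper: float16 on the CUDA path (XLA:GPU),
    # bfloat16 otherwise (XLA:TPU, and also CPU for local testing).
    dtype = torch.float16 if device_type == "cuda" else torch.bfloat16
    # torch.autocast is the generic form of torch.amp.autocast(device_type)
    with torch.autocast(device_type=device_type, dtype=dtype):
        return model(inputs)
```

On an XLA:TPU device this would be invoked with `device_type="xla"`, which is the path enabled by this PR.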

Pull Request resolved: https://github.com/pytorch/pytorch/pull/96370
Approved by: https://github.com/bdhirsh, https://github.com/malfet
2023-06-23 19:46:42 +00:00
benchmark increase clang-tidy coverage to more c10 source files (#102902) 2023-06-04 06:33:01 +00:00
core [AMP] Support XLA:TPU (#96370) 2023-06-23 19:46:42 +00:00
cuda increase clang-tidy coverage to more c10 source files (#102902) 2023-06-04 06:33:01 +00:00
hip
macros run buildifier on unified build files (#98141) 2023-04-04 00:37:19 +00:00
mobile adjust header inclusions in C10 as suggested by IWYU (#102467) 2023-05-31 19:19:10 +00:00
test Eliminate c10/util/array from PyTorch (#103893) 2023-06-22 01:33:31 +00:00
util Eliminate c10/util/array from PyTorch (#103893) 2023-06-22 01:33:31 +00:00
BUCK.oss remove no-op C10_DISABLE_NUMA preprocessor flag (#98243) 2023-04-06 20:38:10 +00:00
BUILD.bazel remove //c10:headers (#98420) 2023-04-05 19:33:10 +00:00
build.bzl run buildifier on unified build files (#98141) 2023-04-04 00:37:19 +00:00
CMakeLists.txt [T153220354] Fix header inclusions in c10 (#1541) (#101846) 2023-05-20 19:35:14 +00:00
ovrsource_defs.bzl remove no-op C10_DISABLE_NUMA preprocessor flag (#98243) 2023-04-06 20:38:10 +00:00