pytorch/c10
Natalia Gimelshein 08ff11e9d0 initialize device when pinning memory on this device, short circuit is_pinned if device is not initialized (#145752)
Do not land
RFC
potential fix for #144687

Now `.is_pinned(device="cuda")` does not initialize the device and thus doesn't poison the fork (though it warns that the `device` arg is deprecated). To avoid needing the `device=` arg, we'd need to fix get_accelerator so that it does not initialize the device.
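The short-circuit idea can be sketched in plain Python without a torch dependency (all names below are hypothetical stand-ins, not the actual c10 implementation):

```python
class FakeAccelerator:
    """Hypothetical stand-in for a CUDA-like device runtime."""

    def __init__(self):
        self.initialized = False
        self.pinned_ptrs = set()

    def ptr_is_pinned(self, ptr):
        # Querying the real runtime would implicitly initialize it,
        # which is exactly what poisons a later os.fork().
        self.initialized = True
        return ptr in self.pinned_ptrs


def is_pinned(ptr, accel):
    # Short circuit: if the device was never initialized, nothing
    # could have been pinned on it, so answer False without touching
    # (and thereby initializing) the runtime.
    if not accel.initialized:
        return False
    return accel.ptr_is_pinned(ptr)


acc = FakeAccelerator()
print(is_pinned(0x1234, acc))  # False
print(acc.initialized)         # False: the query did not initialize the device
```

The key property is that the query is answerable from host-side state alone when the device context does not yet exist, so `fork()` remains safe afterwards.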

Pull Request resolved: https://github.com/pytorch/pytorch/pull/145752
Approved by: https://github.com/albanD

Co-authored-by: albanD <albandes@fb.com>
2025-01-30 21:37:29 +00:00
benchmark
core — Use std::string_view (#145906), 2025-01-30 03:14:27 +00:00
cuda — Let PYTORCH_NO_CUDA_MEMORY_CACHING has effect only when value is 1 (#145905), 2025-01-30 05:11:10 +00:00
hip
macros — [ROCm][Windows] Fix export macros (#144098), 2025-01-04 17:12:46 +00:00
metal — [MPS] Add op_math_t (#145808), 2025-01-28 18:03:52 +00:00
mobile
test — Fix cppcoreguidelines-init-variables ignorance (#141795), 2025-01-28 17:11:37 +00:00
util — initialize device when pinning memory on this device, short circuit is_pinned if device is not initialized (#145752), 2025-01-30 21:37:29 +00:00
xpu — Filter out iGPU if dGPU is found on XPU (#144378), 2025-01-29 15:53:16 +00:00
BUCK.oss
BUILD.bazel
build.bzl
CMakeLists.txt
ovrsource_defs.bzl