c10/cuda is a core library with CUDA functionality. It is distinguished from c10 in that it links against the CUDA library, but like c10 it doesn't contain any kernels, and consists solely of core functionality that is generally useful when writing CUDA code; for example, C++ wrappers for the CUDA C API.

Important notes for developers: if you want to add files or functionality to this folder, TAKE NOTE. The code in this folder is very special because, on our AMD GPU build, we transpile it into c10/hip to provide a ROCm environment. Thus, if you write:

```cpp
// c10/cuda/CUDAFoo.h
namespace c10 { namespace cuda {

void my_func();

}}
```

this will get transpiled into:

```cpp
// c10/hip/HIPFoo.h
namespace c10 { namespace hip {

void my_func();

}}
```

Thus, if you add new functionality to c10/cuda, you must also update `C10_MAPPINGS` in torch/utils/hipify/cuda_to_hip_mappings.py to transpile occurrences of `cuda::my_func` to `hip::my_func`. (At the moment, we do NOT have a catch-all `cuda::`-to-`hip::` namespace conversion, as not all `cuda` namespaces are converted to `hip::`, even though c10's are.)
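The mapping-driven transpilation can be pictured as a plain string substitution over source text. The sketch below is illustrative only: the real hipify script and the exact structure of `C10_MAPPINGS` in torch/utils/hipify/cuda_to_hip_mappings.py differ, and the mapping entries here are hypothetical examples of the kind of rename a new `cuda::my_func` would require.

```python
# Illustrative sketch of mapping-driven hipification.
# NOT the real hipify implementation; the entries below are hypothetical
# examples of the cuda -> hip renames that C10_MAPPINGS must cover.

C10_MAPPINGS_SKETCH = {
    "c10/cuda/CUDAFoo.h": "c10/hip/HIPFoo.h",  # hypothetical header rename
    "cuda::my_func": "hip::my_func",           # namespace-qualified symbol
    "namespace cuda": "namespace hip",
}

def hipify(source: str) -> str:
    """Apply every mapping, longest keys first so prefixes don't clobber."""
    for old in sorted(C10_MAPPINGS_SKETCH, key=len, reverse=True):
        source = source.replace(old, C10_MAPPINGS_SKETCH[old])
    return source

cuda_src = '#include "c10/cuda/CUDAFoo.h"\nnamespace cuda { void my_func(); }\n'
print(hipify(cuda_src))
```

The point of the sketch is why a per-symbol mapping is needed at all: because there is no catch-all `cuda::`-to-`hip::` conversion, each new symbol must appear in the mapping table explicitly or it will survive transpilation unchanged and break the ROCm build.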

Transpilation inside this folder is controlled by `CAFFE2_SPECIFIC_MAPPINGS` (oddly enough). `C10_MAPPINGS` apply to ALL source files.

If you add a new directory to this folder, you MUST update both c10/cuda/CMakeLists.txt and c10/hip/CMakeLists.txt.