pytorch/cmake
Latest commit: 764eae9c4e by PyTorch MergeBot, 2024-03-19 17:14:28 +00:00

Revert "Add Flash Attention support on ROCM (#121561)"

This reverts commit a37e22de70.

Reverted https://github.com/pytorch/pytorch/pull/121561 on behalf of https://github.com/huydhn: sorry for reverting your change, but it needs more work to be able to land in fbcode because https://github.com/ROCm/aotriton is not available there at the moment. We are working to reland this change before the 2.3 release ([comment](https://github.com/pytorch/pytorch/pull/121561#issuecomment-2007717091)).
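The revert touches cmake/Dependencies.cmake, the file that wires optional third-party backends into the build. As a rough illustration of the kind of guard involved, the sketch below shows how an optional ROCm Flash Attention backend could be gated on its external dependency being available at configure time; the option name, the aotriton package lookup, and the fallback behaviour are assumptions made for illustration, not PyTorch's actual configuration.

    # Hypothetical sketch only; this is not PyTorch's actual Dependencies.cmake logic.
    # The option name USE_FLASH_ATTENTION and the aotriton package lookup are
    # assumptions made for illustration.
    option(USE_FLASH_ATTENTION "Enable the ROCm Flash Attention backend" ON)

    if(USE_ROCM AND USE_FLASH_ATTENTION)
      # The backend needs an external library (aotriton). If the build environment
      # cannot provide it, degrade gracefully instead of failing the configure step.
      find_package(aotriton QUIET)
      if(NOT aotriton_FOUND)
        message(WARNING "aotriton not found; building without ROCm Flash Attention")
        set(USE_FLASH_ATTENTION OFF CACHE BOOL "Enable the ROCm Flash Attention backend" FORCE)
      endif()
    endif()

Keeping the find_package call behind USE_ROCM leaves non-ROCm builds untouched, which matches the spirit of the revert reason above: the feature should not hard-fail configuration in environments where the dependency is unavailable.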
Name | Last commit | Last commit date
External/ | Revert "Add Flash Attention support on ROCM (#121561)" | 2024-03-19 17:14:28 +00:00
Modules/ | enable mkl_gemm_f16f16f32 in cpublas::gemm (#118367) | 2024-01-31 18:37:42 +00:00
Modules_CUDA_fix/ | fix CMake FindCUDA module for cross-compiling (#121590) | 2024-03-11 20:09:52 +00:00
public/ | [cuDNN] Cleanup cuDNN < 8.1 ifdefs (#120862) | 2024-03-07 01:46:25 +00:00
Allowlist.cmake
BuildVariables.cmake
Caffe2Config.cmake.in | [2/4] Intel GPU Runtime Upstreaming for Device (#116833) | 2024-01-18 05:02:42 +00:00
CheckAbi.cmake
cmake_uninstall.cmake.in
Codegen.cmake
DebugHelper.cmake
Dependencies.cmake | Revert "Add Flash Attention support on ROCM (#121561)" | 2024-03-19 17:14:28 +00:00
FlatBuffers.cmake
GoogleTestPatch.cmake
IncludeSource.cpp.in
iOS.cmake
Metal.cmake
MiscCheck.cmake
ProtoBuf.cmake
ProtoBufPatch.cmake
Summary.cmake
TorchConfig.cmake.in
TorchConfigVersion.cmake.in
VulkanCodegen.cmake
VulkanDependencies.cmake