onnxruntime/cmake/onnxruntime_compile_triton_kernel.cmake
Tianlei Wu 9f0fae29e8
[CUDA] Add SparseAttention operator for Phi-3-small (#20216)
### Description
Add CUDA implementation for block sparse attention for Phi-3-small.

Block sparse attention was proposed in [Sparse
Transformers](https://arxiv.org/pdf/1904.10509) by OpenAI, and also
adopted in [BigBird](https://arxiv.org/pdf/2007.14062) with different
sparse layout.

In Phi-3-small, the sparse layout is static, and works with
unidirectional (causal) attention.

Compared to dense attention, block sparse attention speeds up both
training and inference. It can also save memory and thus support longer
context lengths.
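The layout described above (a causal local window per block row plus strided vertical block columns) can be sketched roughly as follows. This is an illustrative model only, not the operator's actual CUDA/Triton code, and the exact stride convention here is an assumption:

```python
# Hypothetical sketch of a causal block-sparse layout built from a local
# window (local_blocks) plus vertical stride columns (vert_stride).
# This is NOT the ONNX Runtime implementation; it only illustrates the pattern.
import numpy as np

def block_sparse_layout(num_blocks: int, local_blocks: int, vert_stride: int) -> np.ndarray:
    """Return a [num_blocks, num_blocks] 0/1 mask over attention blocks.

    Block (i, j) is kept when it is causal (j <= i) and either falls in the
    local window of `local_blocks` blocks, or lies on a strided vertical column
    (the column convention is an assumption for illustration).
    """
    mask = np.zeros((num_blocks, num_blocks), dtype=np.int32)
    for i in range(num_blocks):
        for j in range(i + 1):  # causal: only j <= i is attended
            local = (i - j) < local_blocks
            strided = (j + 1) % vert_stride == 0
            mask[i, j] = 1 if (local or strided) else 0
    return mask

layout = block_sparse_layout(num_blocks=8, local_blocks=2, vert_stride=4)
```

With `num_layout` such masks (one per group of heads), different heads can attend through different sparse patterns while each pattern stays static.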

- [x] Add operator spec and shape inference
- [x] Symbolic shape inference
- [x] Refactor GroupQueryAttention to expose common kernels for kv cache
concatenation, q/k/v transpose etc.
- [x] Add cuda kernel to convert block mask to CSR format
- [x] Add cuda kernel to generate position ids
- [x] Add compile script and template files to convert triton kernel to
cubin and dispatcher.
- [x] Add triton kernel v1 for prompt
- [x] Add triton kernel v2 for token generation and support padding
- [x] Update IO Binding Helper to allow buffer sharing.
- [x] Test relevance
- [x] Test performance
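The block-mask-to-CSR step in the checklist can be modeled in a few lines of NumPy. The real conversion runs as a CUDA kernel; this host-side sketch only shows the data layout (row offsets plus column indices) the sparse kernel would consume:

```python
# Hypothetical host-side model of converting a 0/1 block mask to CSR
# (row_offsets, col_indices). The operator does this on GPU; this sketch
# only illustrates the target format.
import numpy as np

def block_mask_to_csr(mask: np.ndarray):
    """Convert a [rows, cols] 0/1 block mask to CSR arrays."""
    rows, _ = mask.shape
    row_offsets = np.zeros(rows + 1, dtype=np.int32)
    col_indices = []
    for i in range(rows):
        cols = np.nonzero(mask[i])[0]          # kept block columns in row i
        col_indices.extend(cols.tolist())
        row_offsets[i + 1] = row_offsets[i] + len(cols)
    return row_offsets, np.asarray(col_indices, dtype=np.int32)

mask = np.array([[1, 0, 0],
                 [1, 1, 0],
                 [0, 1, 1]], dtype=np.int32)
row_offsets, col_indices = block_mask_to_csr(mask)
# row_offsets -> [0, 1, 3, 5]; col_indices -> [0, 0, 1, 1, 2]
```

CSR lets each kernel program iterate only over the nonzero blocks of its row, which is where the compute savings come from.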

### Performance
Test in A100-SXM4-80GB with `batch_size=4, num_heads=32,
max_seq_len=8192, head_size=128, sparse_block_size=64, local_blocks=16,
vert_stride=8, num_layout=8`

We compare sparse attention to the corresponding GQA with a local
attention window of size 1024, and to GQA with dense causal attention.

Average latency in milliseconds (for fused attention kernel used in
prompt prefilling):

seq_len | GQA-Dense | GQA-Local | SparseAttention
-- | -- | -- | --
64 | 0.0465 | 0.0722 | 0.0641
128 | 0.0618 | 0.0787 | 0.0672
256 | 0.1086 | 0.1076 | 0.0943
512 | 0.2535 | 0.2487 | 0.1676
1024 | 0.7042 | 0.7050 | 0.3800
2048 | 2.4125 | 1.9316 | 0.8966
4096 | 8.9346 | 4.5699 | 2.1129
8192 | 40.5401 | 10.3508 | 5.1748

Average latency in milliseconds (for fused attention kernel used in
token generation):

past_seq_len | GQA-Dense | GQA-Local | SparseAttention
-- | -- | -- | --
64 | 0.0186 | 0.0186 | 0.0870
128 | 0.0408 | 0.0466 | 0.1165
256 | 0.0530 | 0.0592 | 0.0988
512 | 0.0445 | 0.0447 | 0.1150
1024 | 0.0634 | 0.0640 | 0.1454
2048 | 0.1027 | 0.0637 | 0.1589
4096 | 0.1789 | 0.0631 | 0.1806
8192 | 0.3288 | 0.0655 | 0.2146

We can see that the kernel for token generation still has room for
improvement.

#### Limitations
Only right-side padding and unidirectional attention are supported.

The following are not supported in the first version:
(1) Packed mode like PackedMultiHeadAttention, where padding has been
removed from the input.
(2) Paged attention.
(3) Bidirectional attention.
(4) GPU compute capability other than 8.0, 8.6 and 8.9.
(5) Left-side padding.

Some of these limitations may be removed in the future (possibly in a
new operator).
2024-04-30 09:06:29 -07:00


# Copyright (c) Microsoft Corporation. All rights reserved.
# Licensed under the MIT License.
find_package(Python3 COMPONENTS Interpreter REQUIRED)

# set all triton kernel ops that need to be compiled
if(onnxruntime_USE_ROCM)
  set(triton_kernel_scripts
    "onnxruntime/core/providers/rocm/math/softmax_triton.py"
    "onnxruntime/contrib_ops/rocm/diffusion/group_norm_triton.py"
  )
endif()

function(compile_triton_kernel out_triton_kernel_obj_file out_triton_kernel_header_dir)
  # compile triton kernels, generating .a and .h files
  set(triton_kernel_compiler "${REPO_ROOT}/tools/ci_build/compile_triton.py")
  set(out_dir "${CMAKE_CURRENT_BINARY_DIR}/triton_kernels")
  set(out_obj_file "${out_dir}/triton_kernel_infos.a")
  set(header_file "${out_dir}/triton_kernel_infos.h")

  list(TRANSFORM triton_kernel_scripts PREPEND "${REPO_ROOT}/")
  add_custom_command(
    OUTPUT ${out_obj_file} ${header_file}
    COMMAND Python3::Interpreter ${triton_kernel_compiler}
            --header ${header_file}
            --script_files ${triton_kernel_scripts}
            --obj_file ${out_obj_file}
    DEPENDS ${triton_kernel_scripts} ${triton_kernel_compiler}
    COMMENT "Triton compile generates: ${out_obj_file}"
  )
  add_custom_target(onnxruntime_triton_kernel DEPENDS ${out_obj_file} ${header_file})
  set(${out_triton_kernel_obj_file} ${out_obj_file} PARENT_SCOPE)
  set(${out_triton_kernel_header_dir} ${out_dir} PARENT_SCOPE)
endfunction()
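A caller (not shown in this file) would consume the function roughly as below. The target name is illustrative, not taken from the build:

```cmake
# Hypothetical usage sketch: compile the Triton kernels, then make a provider
# target depend on the generated archive and headers.
compile_triton_kernel(triton_obj_file triton_header_dir)
add_dependencies(some_provider_target onnxruntime_triton_kernel)  # hypothetical target
target_include_directories(some_provider_target PRIVATE ${triton_header_dir})
target_link_libraries(some_provider_target PRIVATE ${triton_obj_file})
```

The `PARENT_SCOPE` sets at the end of the function are what expose `triton_obj_file` and `triton_header_dir` to the caller.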