ScoreMod API (#121845)

# Summary

This PR adds a new higher-order op: `templated_attention`.  This op is designed to extend the functionality of `torch.nn.functional.scaled_dot_product_attention`.  PyTorch ships efficient pre-written fused attention kernels; however, users often want to modify how scores are computed (a sub-step inside attention), which traditionally requires writing a custom attention kernel. One such score modification that is not currently supported by the top-level SDPA op is [Attention with Linear Biases (ALiBi)](https://arxiv.org/abs/2108.12409).

This higher-order op instead accepts a callable (`score_mod`) that, through torch.compile, is used to create an efficient attention kernel instantiation.

### Details

This HOP uses the existing FX and HOP infrastructure to capture the user's `score_mod` function and convert it to an FX graph module. Inductor then consumes this HOP, which carries an `ir.Subgraph` input, and inlines the lowered subgraph into a Triton kernel that performs fused attention with the modification to the scores matrix inlined.
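Concretely, when the HOP is traced, the user's `score_mod` callable is retraced with `make_fx` on scalar example values (one score element plus four integer indices), producing the graph module that Inductor later inlines. A minimal sketch of that step, mirroring `trace_templated_attention` from this PR (`relative_bias` here is just an illustrative score_mod, not part of the PR's API):

```Python
import torch
from torch.fx.experimental.proxy_tensor import make_fx

def relative_bias(score, b, h, m, n):
    return score + (m - n)

# Scalar example values: one score element plus four integer indices,
# mirroring the example_vals built inside trace_templated_attention.
example_vals = [torch.zeros((), dtype=torch.float32)] + [
    torch.zeros((), dtype=torch.int) for _ in range(4)
]
score_graph = make_fx(relative_bias)(*example_vals)
print(score_graph.graph)  # a small FX graph containing sub and add nodes
```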

### API

The API for a score_mod function should be as follows:

```Python
def score_mod(score: torch.Tensor, batch: torch.Tensor, head: torch.Tensor, token_q: torch.Tensor, token_kv: torch.Tensor) -> torch.Tensor
```

This function receives five parameters:

- `score`: A scalar tensor representing the attention score, with the same data type and device as the query, key, and value tensors.
- `batch`, `head`, `token_q`, `token_kv`: Scalar tensors indicating the batch index, head index, query index, and key/value index, respectively, with `torch.int` data type and located on the same device as the score tensor.
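For example, an ALiBi-style relative bias (one of the score modifications exercised by the tests added in this PR) can be expressed entirely in terms of these five scalar arguments; a minimal sketch:

```Python
# ALiBi-style bias: offset each score by the signed query/key distance,
# scaled by the head index (mirrors the test_alibi_bias score_mod in this PR).
def alibi_bias(score, batch, head, token_q, token_kv):
    return score + (token_q - token_kv) * head
```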

Consider query, key, and value inputs of shape (2, 4, 16, 8), leading to an intermediate attention score matrix of shape (2, 4, 16, 16).

The `score_mod` function is vectorized over every element of this matrix. For instance, the call that modifies the score for the 0th batch, 2nd head, 8th query element, and 9th key element would be:

```Python
score_mod(score[0,2,8,9], torch.tensor(0), torch.tensor(2), torch.tensor(8), torch.tensor(9))
```
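For reference, the eager implementation added by this PR (`math_attention`) realizes exactly this semantics by materializing the full score matrix and vectorizing `score_mod` over all four index dimensions with nested `torch.vmap` calls. A simplified sketch (the function name is illustrative; like the kernel in this PR, no 1/sqrt(d) scaling is applied):

```Python
import torch

def math_attention_reference(query, key, value, score_mod):
    # Materialize the full (B, H, M, N) score matrix, apply score_mod
    # pointwise via nested vmap over batch, head, query-index, and
    # key/value-index, then finish with softmax @ value.
    scores = query @ key.transpose(-2, -1)
    b = torch.arange(scores.size(0), device=scores.device)
    h = torch.arange(scores.size(1), device=scores.device)
    m = torch.arange(scores.size(2), device=scores.device)
    n = torch.arange(scores.size(3), device=scores.device)
    mod = torch.vmap(score_mod, in_dims=(0, None, None, None, 0))
    mod = torch.vmap(mod, in_dims=(0, None, None, 0, None))
    mod = torch.vmap(mod, in_dims=(0, None, 0, None, None))
    mod = torch.vmap(mod, in_dims=(0, 0, None, None, None))
    return mod(scores, b, h, m, n).softmax(dim=-1) @ value
```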

### Examples
```Python
import torch
from torch.nn.attention.templated_attention import templated_attention

torch.manual_seed(0)

# Let's create some input tensors
# The input tensor has shape (batch_size, num_heads, seq_len, head_dim)
query = torch.randn(8, 8, 2048, 64, device="cuda", dtype=torch.float32)
key = torch.randn(8, 8, 2048, 64, device="cuda", dtype=torch.float32)
value = torch.randn(8, 8, 2048, 64, device="cuda", dtype=torch.float32)

# Let's create a fun new score modification! I will call it
# "checkerboard". It reduces the score for neighboring tokens (1 step apart)
# in the sequence and increases the score for tokens 2 steps apart. For
# everything else, the score remains the same.

def checkerboard(score, batch, head, token_q, token_kv):
    score = torch.where(torch.abs(token_kv - token_q) == 1, score * 0.5, score)
    score = torch.where(torch.abs(token_kv - token_q) == 2, score * 2.0, score)
    return score

# Let's call templated_attention with this new score modification
output = templated_attention(query, key, value, score_mod=checkerboard)

compiled_templated_attention = torch.compile(templated_attention)
out_compiled = compiled_templated_attention(query, key, value, score_mod=checkerboard)

torch.testing.assert_close(output, out_compiled, atol=2e-2, rtol=2e-2)
```
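Score modifications can also be composed. The tests and the benchmark in this diff use a private `_compose` helper from `torch.nn.attention._templated_attention` (the module this PR actually adds) that chains score_mods right to left; a hedged usage sketch, continuing with the query/key/value tensors from the example above:

```Python
from torch.nn.attention._templated_attention import _compose, _templated_attention

def relative_bias(score, b, h, m, n):
    return score + (m - n)

def causal(score, b, h, m, n):
    return torch.where(m >= n, score, float("-inf"))

# _compose(f, g) applies g first, then f: the causal mask is applied
# before the relative bias is added.
rel_causal = _compose(relative_bias, causal)
out = _templated_attention(query, key, value, score_mod=rel_causal)
```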

### Future Work
- This PR is forward only for now. However, a Triton kernel for the backward pass, for score modifications that do not rely on external buffers, has been explored here: https://github.com/drisspg/transformer_nuggets/blob/main/transformer_nuggets/flash/flash_attention.py
- Kernel improvements: there have been some larger updates to the fused attention implementation that Triton uses in its tutorials. The implementation of this kernel is based on a prior version and should be updated.
- We may want to unify this API under the top-level SDPA API; we leave that as a follow-up once this is more stable.
- Should we error on CPU?
- There are some issues with dynamic shapes.
- Capturing free variables and lifting them to inputs of the subgraph does not work correctly today.

### Performance
Comparisons were generated by the benchmark script added in this PR (included in the diff below):

| Type    |   Speedup |   batch_size |   num_heads |   q_seq_len |   k_seq_len |   head_dim | score_mod     | dtype          |
|---------|-----------|--------------|-------------|-------------|-------------|------------|---------------|----------------|
| Average |     5.412 |              |             |             |             |            |               |                |
| Max     |     8.882 |           16 |          16 |        4096 |        4096 |         64 | relative_bias | torch.bfloat16 |
| Min     |     3.645 |            8 |          16 |         512 |         512 |         64 | causal_mask   | torch.bfloat16 |
| Min     |     0.345 |            1 |          16 |        1024 |        1024 |         64 | pathological  | torch.bfloat16 |

For reference:

| Configuration                                 | Forward Time (µs) | Backend          | Speedup |
|-----------------------------------------------|--------------------------|------------------|---------|
| Fastest Config in Sweep (`8 16 4096 4096 64 relative_bias torch.bfloat16`) | 3608                   | Templated Attention                | 1.0  |
| Compiled SDPA (No Mask)                       | 9928                   | Math             | 2.75x   |
| Compiled SDPA (With Mask)                     | 11898                    | Math             | 3.29x   |
| Compiled SDPA (With Mask) | 8704                      | Memory Efficient Attention | 2.42x   |
| Compiled SDPA (No Mask) | 2548                     | FlashAttention2 | 0.706x   |

The speedup column compares compiled templated attention against different calls to torch.nn.functional.scaled_dot_product_attention; values above 1.0 mean templated attention is faster.

<details>

<summary> FULL PERFORMANCE SWEEP NUMBERS </summary>

|   batch_size |   num_heads |   q_seq_len |   k_seq_len |   head_dim | score_mod     | dtype          |   eager_time |   compiled_time |   speedup |
|--------------|-------------|-------------|-------------|------------|---------------|----------------|--------------|-----------------|-----------|
|            1 |          16 |         512 |         512 |         64 | causal_mask   | torch.bfloat16 |      331.444 |          67.221 |     4.931 |
|            1 |          16 |         512 |         512 |         64 | relative_bias | torch.bfloat16 |      335.300 |          64.187 |     5.224 |
|            1 |          16 |         512 |         512 |         64 | head_bias     | torch.bfloat16 |      352.039 |          63.806 |     5.517 |
|            1 |          16 |         512 |         512 |         64 | pathological  | torch.bfloat16 |      371.699 |         711.349 |     0.523 |
|            1 |          16 |        1024 |        1024 |         64 | causal_mask   | torch.bfloat16 |      333.488 |          86.455 |     3.857 |
|            1 |          16 |        1024 |        1024 |         64 | relative_bias | torch.bfloat16 |      322.363 |          82.469 |     3.909 |
|            1 |          16 |        1024 |        1024 |         64 | head_bias     | torch.bfloat16 |      349.967 |          82.233 |     4.256 |
|            1 |          16 |        1024 |        1024 |         64 | pathological  | torch.bfloat16 |      486.359 |        1412.453 |     0.344 |
|            1 |          16 |        4096 |        4096 |         64 | causal_mask   | torch.bfloat16 |     2794.597 |         551.188 |     5.070 |
|            1 |          16 |        4096 |        4096 |         64 | relative_bias | torch.bfloat16 |     3965.150 |         513.101 |     7.728 |
|            1 |          16 |        4096 |        4096 |         64 | head_bias     | torch.bfloat16 |     2408.013 |         504.759 |     4.771 |
|            1 |          16 |        4096 |        4096 |         64 | pathological  | torch.bfloat16 |     6850.531 |       16733.675 |     0.409 |
|            8 |          16 |         512 |         512 |         64 | causal_mask   | torch.bfloat16 |      441.939 |         123.576 |     3.576 |
|            8 |          16 |         512 |         512 |         64 | relative_bias | torch.bfloat16 |      560.379 |         116.710 |     4.801 |
|            8 |          16 |         512 |         512 |         64 | head_bias     | torch.bfloat16 |      421.172 |         115.825 |     3.636 |
|            8 |          16 |         512 |         512 |         64 | pathological  | torch.bfloat16 |      994.492 |        2132.806 |     0.466 |
|            8 |          16 |        1024 |        1024 |         64 | causal_mask   | torch.bfloat16 |     1436.430 |         309.495 |     4.641 |
|            8 |          16 |        1024 |        1024 |         64 | relative_bias | torch.bfloat16 |     1892.216 |         290.186 |     6.521 |
|            8 |          16 |        1024 |        1024 |         64 | head_bias     | torch.bfloat16 |     1360.665 |         282.956 |     4.809 |
|            8 |          16 |        1024 |        1024 |         64 | pathological  | torch.bfloat16 |     3525.532 |        8359.702 |     0.422 |
|            8 |          16 |        4096 |        4096 |         64 | causal_mask   | torch.bfloat16 |    22026.839 |        3864.604 |     5.700 |
|            8 |          16 |        4096 |        4096 |         64 | relative_bias | torch.bfloat16 |    31262.746 |        3609.551 |     8.661 |
|            8 |          16 |        4096 |        4096 |         64 | head_bias     | torch.bfloat16 |    20219.079 |        3480.402 |     5.809 |
|            8 |          16 |        4096 |        4096 |         64 | pathological  | torch.bfloat16 |    54654.647 |      116652.357 |     0.469 |
|           16 |          16 |         512 |         512 |         64 | causal_mask   | torch.bfloat16 |      820.606 |         188.683 |     4.349 |
|           16 |          16 |         512 |         512 |         64 | relative_bias | torch.bfloat16 |     1058.362 |         179.295 |     5.903 |
|           16 |          16 |         512 |         512 |         64 | head_bias     | torch.bfloat16 |      784.372 |         175.714 |     4.464 |
|           16 |          16 |         512 |         512 |         64 | pathological  | torch.bfloat16 |     1890.792 |        4212.877 |     0.449 |
|           16 |          16 |        1024 |        1024 |         64 | causal_mask   | torch.bfloat16 |     2781.830 |         557.017 |     4.994 |
|           16 |          16 |        1024 |        1024 |         64 | relative_bias | torch.bfloat16 |     3694.050 |         525.249 |     7.033 |
|           16 |          16 |        1024 |        1024 |         64 | head_bias     | torch.bfloat16 |     2634.164 |         507.613 |     5.189 |
|           16 |          16 |        1024 |        1024 |         64 | pathological  | torch.bfloat16 |     6959.917 |       15331.116 |     0.454 |
|           16 |          16 |        4096 |        4096 |         64 | causal_mask   | torch.bfloat16 |    43889.096 |        7582.018 |     5.789 |
|           16 |          16 |        4096 |        4096 |         64 | relative_bias | torch.bfloat16 |    62784.293 |        7075.846 |     8.873 |
|           16 |          16 |        4096 |        4096 |         64 | head_bias     | torch.bfloat16 |    40308.606 |        6829.587 |     5.902 |
|           16 |          16 |        4096 |        4096 |         64 | pathological  | torch.bfloat16 |   108892.137 |      233090.953 |     0.467 |
</details>

Pull Request resolved: https://github.com/pytorch/pytorch/pull/121845
Approved by: https://github.com/Chillee, https://github.com/zou3519
Author: drisspg, 2024-04-06 01:10:40 +00:00 (committed by PyTorch MergeBot)
commit f4e2a226aa (parent 8e98fda7a9)
13 changed files with 1217 additions and 12 deletions


@ -0,0 +1,259 @@
import itertools
from collections import defaultdict
from dataclasses import asdict, dataclass
from functools import partial
from typing import Callable, List
import numpy as np
import torch
import torch.utils.benchmark as benchmark
from tabulate import tabulate
from torch.nn.attention._templated_attention import _compose, _templated_attention
from tqdm import tqdm
torch._dynamo.config.automatic_dynamic_shapes = False
# Needed since changing args to function causes recompiles
torch._dynamo.config.cache_size_limit = 1000
def benchmark_torch_function_in_microseconds(func: Callable, *args, **kwargs) -> float:
# warmup
for _ in range(5):
func(*args, **kwargs)
t0 = benchmark.Timer(
stmt="func(*args, **kwargs)",
globals={"args": args, "kwargs": kwargs, "func": func},
)
return t0.adaptive_autorange(min_run_time=0.1).median * 1e6
@dataclass(frozen=True)
class ExperimentConfig:
batch_size: int
num_heads: int
q_seq_len: int
k_seq_len: int
head_dim: int
score_mod: Callable
dtype: torch.dtype
def asdict(self):
return asdict(self)
@dataclass(frozen=True)
class ExperimentResults:
eager_time: float
compiled_time: float
def get_entries(self) -> List:
return [
f"{self.eager_time:2f}",
f"{self.compiled_time:2f}",
]
@dataclass(frozen=True)
class Experiment:
config: ExperimentConfig
results: ExperimentResults
def get_entries(self) -> List:
return self.config.get_entries() + self.results.get_entries()
def asdict(self):
dict1 = asdict(self.config)
dict2 = asdict(self.results)
return {**dict1, **dict2}
def generate_inputs(
batch_size,
num_heads,
q_sequence_length,
kv_sequence_length,
head_dim,
dtype,
device,
):
q_shape = (batch_size, q_sequence_length, num_heads * head_dim)
kv_shape = (batch_size, kv_sequence_length, num_heads * head_dim)
make_q = partial(torch.rand, q_shape, device=device, dtype=dtype)
make_kv = partial(torch.rand, kv_shape, device=device, dtype=dtype)
query = (
make_q()
.view(batch_size, q_sequence_length, num_heads, head_dim)
.transpose(1, 2)
)
key = (
make_kv()
.view(batch_size, kv_sequence_length, num_heads, head_dim)
.transpose(1, 2)
)
value = (
make_kv()
.view(batch_size, kv_sequence_length, num_heads, head_dim)
.transpose(1, 2)
)
return query, key, value
def run_single_experiment(config: ExperimentConfig) -> ExperimentResults:
device = torch.device("cuda")
query, key, value = generate_inputs(
config.batch_size,
config.num_heads,
config.q_seq_len,
config.k_seq_len,
config.head_dim,
config.dtype,
device,
)
eager_sdpa = _templated_attention
compiled_sdpa = torch.compile(eager_sdpa)
score_mod = config.score_mod
forward_eager_time = benchmark_torch_function_in_microseconds(
eager_sdpa, query, key, value, score_mod
)
forward_compiled_time = benchmark_torch_function_in_microseconds(
compiled_sdpa, query, key, value, score_mod
)
return ExperimentResults(
eager_time=forward_eager_time,
compiled_time=forward_compiled_time,
)
def calculate_speedup(results: ExperimentResults) -> float:
return results.eager_time / results.compiled_time
def get_func_name(func):
return func.__name__.split("<locals>.")[-1].split(" at ")[0]
def get_average_speedups(results: List[Experiment]):
# Calculate speedups
speedups = [calculate_speedup(r.results) for r in results]
# Find indices of max and min speedups
max_speedup_index = np.argmax(speedups)
min_speedup_index = np.argmin(speedups)
# Get the config dictionaries
max_config_dict = results[max_speedup_index].config.asdict()
min_config_dict = results[min_speedup_index].config.asdict()
# Extract function names from score_mod strings
max_config_dict["score_mod"] = (
max_config_dict["score_mod"].__name__.split("<locals>.")[-1].split(" at ")[0]
)
min_config_dict["score_mod"] = (
min_config_dict["score_mod"].__name__.split("<locals>.")[-1].split(" at ")[0]
)
# Create table data
table_data = [
{
"Type": "Average",
"Speedup": np.mean(speedups),
**dict.fromkeys(max_config_dict),
},
{"Type": "Max", "Speedup": speedups[max_speedup_index], **max_config_dict},
{"Type": "Min", "Speedup": speedups[min_speedup_index], **min_config_dict},
]
return table_data
def print_results(results: List[Experiment]):
table_data = defaultdict(list)
for experiment in results:
for key, value in experiment.asdict().items():
if key == "eager_time" or key == "compiled_time":
value = float(value)
table_data[key].append(value)
# Calculate speedups
speedups = [calculate_speedup(r.results) for r in results]
table_data["speedup"] = speedups
table_data["score_mod"] = [get_func_name(func) for func in table_data["score_mod"]]
print(tabulate(table_data, headers="keys", tablefmt="github", floatfmt=".3f"))
average_data = get_average_speedups(results)
print(tabulate(average_data, headers="keys", tablefmt="github", floatfmt=".3f"))
def generate_score_mods() -> List[Callable]:
def causal_mask(score, b, h, token_q, token_kv):
return torch.where(token_q >= token_kv, score, float("-inf"))
def relative_bias(score, b, h, m, n):
return score + (m - n)
def head_bias(score, b, h, m, n):
return score + 2 * h
def pathological(score, b, h, m, n):
def sin(score, b, h, m, n):
return torch.sin(score)
composed_mod = _compose(*(sin for _ in range(10)))
return composed_mod(score, b, h, m, n)
return [causal_mask, relative_bias, head_bias, pathological]
def generate_experiment_configs() -> List[ExperimentConfig]:
batch_sizes = [1, 8, 16]
num_heads = [16]
q_kv_seq_lens = [(512, 512), (1024, 1024), (4096, 4096)]
head_dims = [64]
dtypes = [
torch.bfloat16,
]
score_mods = generate_score_mods()
all_configs = []
for (
bsz,
n_heads,
(q_seq_len, kv_seq_len),
head_dim,
score_mod,
dtype,
) in itertools.product(
batch_sizes, num_heads, q_kv_seq_lens, head_dims, score_mods, dtypes
):
all_configs.append(
ExperimentConfig(
batch_size=bsz,
num_heads=n_heads,
q_seq_len=q_seq_len,
k_seq_len=kv_seq_len,
head_dim=head_dim,
score_mod=score_mod,
dtype=dtype,
)
)
return all_configs
def main():
seed = 123
np.random.seed(seed)
torch.manual_seed(seed)
results = []
for config in tqdm(generate_experiment_configs()):
results.append(Experiment(config, run_single_experiment(config)))
print_results(results)
if __name__ == "__main__":
main()


@ -0,0 +1,172 @@
# Owner(s): ["module: inductor"]
import functools
from collections import namedtuple
from typing import Callable
from unittest import expectedFailure, skipUnless
import torch
from torch._inductor.test_case import TestCase as InductorTestCase
from torch.nn.attention._templated_attention import _compose, _templated_attention
from torch.testing._internal import common_utils
from torch.testing._internal.common_cuda import PLATFORM_SUPPORTS_BF16
from torch.utils._triton import has_triton
# Skip tests if Triton is not available
supported_platform = skipUnless(
torch.cuda.is_available() and has_triton(), "Requires CUDA and Triton"
)
Tolerances = namedtuple("Tolerances", ["atol", "rtol"])
def create_attention(score_mod):
return functools.partial(_templated_attention, score_mod=score_mod)
test_dtypes = (
[torch.float16, torch.bfloat16, torch.float32]
if PLATFORM_SUPPORTS_BF16
else [torch.float16, torch.float32]
)
def _identity_mod(score, b, h, m, n):
return score
class TestTemplatedSDPA(InductorTestCase):
def run_test(self, score_mod: Callable, dtype: torch.dtype = torch.float16):
sdpa_partial = create_attention(score_mod)
compiled_sdpa = torch.compile(sdpa_partial)
q = torch.randn((4, 8, 2048, 64), dtype=dtype, device="cuda")
k = torch.randn((4, 8, 2048, 64), dtype=dtype, device="cuda")
v = torch.randn((4, 8, 2048, 64), dtype=dtype, device="cuda")
ref_out = sdpa_partial(
q.to(torch.float64), k.to(torch.float64), v.to(torch.float64)
)
compiled_out = compiled_sdpa(q, k, v)
tolerance = Tolerances(atol=2e-2, rtol=2e-2)
torch.testing.assert_close(
ref_out.to(dtype=torch.float32),
compiled_out.to(dtype=torch.float32),
atol=tolerance.atol,
rtol=tolerance.rtol,
)
@supported_platform
@common_utils.parametrize("dtype", test_dtypes)
def test_identity(self, dtype: torch.dtype):
def score_mod(score, b, h, m, n):
return score
self.run_test(score_mod, dtype)
@supported_platform
@common_utils.parametrize("dtype", test_dtypes)
def test_causal_mask(self, dtype: torch.dtype):
def score_mod(score, b, h, token_q, token_kv):
return torch.where(token_q >= token_kv, score, float("-inf"))
self.run_test(score_mod, dtype)
@supported_platform
@common_utils.parametrize("dtype", test_dtypes)
def test_rel_bias(self, dtype: torch.dtype):
def score_mod(score, b, h, m, n):
return score + (m - n)
self.run_test(score_mod, dtype)
@supported_platform
@common_utils.parametrize("dtype", test_dtypes)
def test_alibi_bias(self, dtype: torch.dtype):
def score_mod(score, b, h, m, n):
return score + (m - n) * h
self.run_test(score_mod, dtype)
@supported_platform
@common_utils.parametrize("dtype", test_dtypes)
def test_rel_causal(self, dtype: torch.dtype):
def score_mod(score, b, h, m, n):
return torch.where(m <= n, score + (m - n), float("-inf"))
self.run_test(score_mod, dtype)
@supported_platform
@common_utils.parametrize("dtype", test_dtypes)
def test_alibi_causal(self, dtype: torch.dtype):
def score_mod(score, b, h, m, n):
return torch.where(m <= n, score + (m - n) * h, float("-inf"))
self.run_test(score_mod, dtype)
@supported_platform
@common_utils.parametrize("dtype", test_dtypes)
def test_function_composition(self, dtype: torch.dtype):
def score_mod_1(score, b, h, m, n):
return score + (m - n)
def score_mod_2(score, b, h, m, n):
return torch.where(m <= n, score, float("-inf"))
composed_score_mod = _compose(score_mod_1, score_mod_2)
self.run_test(composed_score_mod, dtype)
# TODO We are currently not capturing free variables in the closure correctly
@expectedFailure
@supported_platform
@common_utils.parametrize("dtype", test_dtypes)
def test_captured_buffers(self, dtype: torch.dtype):
head_offset = torch.rand(8, device="cuda", dtype=dtype)
def score_mod(score, b, h, m, n):
return score + head_offset[h]
self.run_test(score_mod, dtype)
@supported_platform
def test_backwards_fails(self):
make_tensor = functools.partial(
torch.randn,
(4, 8, 2048, 64),
dtype=torch.float32,
device="cuda",
requires_grad=True,
)
q, k, v = make_tensor(), make_tensor(), make_tensor()
out = _templated_attention(q, k, v, _identity_mod)
with self.assertRaisesRegex(
RuntimeError, "Autograd not implemented for templated_attention"
):
out.backward(torch.ones_like(out))
@supported_platform
def test_mixed_dtypes_fails(self):
query = torch.randn((1, 1, 2048, 64), dtype=torch.float32, device="cuda")
key = torch.randn((1, 1, 2048, 64), dtype=torch.float16, device="cuda")
value = torch.randn((1, 1, 2048, 64), dtype=torch.float16, device="cuda")
with self.assertRaisesRegex(
ValueError, "Expected query, key, and value to have the same dtype"
):
_templated_attention(query, key, value, _identity_mod)
@supported_platform
def test_different_sequence_length_fails(self):
query = torch.randn((1, 1, 2048, 64), dtype=torch.float32, device="cuda")
key = torch.randn((1, 1, 1024, 64), dtype=torch.float32, device="cuda")
value = torch.randn((1, 1, 1024, 64), dtype=torch.float32, device="cuda")
with self.assertRaisesRegex(ValueError, "NYI: The target sequence length"):
_templated_attention(query, key, value, _identity_mod)
common_utils.instantiate_parametrized_tests(TestTemplatedSDPA)
if __name__ == "__main__":
from torch._inductor.test_case import run_tests
run_tests()


@ -12,6 +12,7 @@ import torch.fx
import torch.nn
import torch.onnx.operators
from torch._dynamo.utils import deepcopy_to_fake_tensor, get_fake_value, get_real_value
from torch._dynamo.variables import ConstantVariable
from torch._dynamo.variables.base import VariableTracker
from torch._dynamo.variables.builtin import BuiltinVariable
from torch._dynamo.variables.functions import UserFunctionVariable
@ -136,7 +137,7 @@ def validate_args_and_maybe_create_graph_inputs(
set_subgraph_inputs,
description,
):
from . import AutogradFunctionContextVariable, ConstantVariable, EnumVariable
from . import AutogradFunctionContextVariable, EnumVariable
from .builder import wrap_fx_proxy_cls
assert tracer.parent is not None
@ -512,6 +513,8 @@ class TorchHigherOrderOperatorVariable(VariableTracker):
return OutDtypeHigherOrderVariable(value, source, **kwargs)
elif value.__name__ == "wrap":
return WrapHigherOrderVariable(value, source, **kwargs)
elif value.__name__ == "templated_attention":
return TemplatedAttentionHigherOrderVariable(value, source, **kwargs)
elif value.__name__ in (
"wrap_activation_checkpoint",
"tag_activation_checkpoint",
@ -540,7 +543,6 @@ class CondHigherOrderVariable(TorchHigherOrderOperatorVariable):
self, tx, args: "List[VariableTracker]", kwargs: "Dict[str, VariableTracker]"
) -> "VariableTracker":
from . import (
ConstantVariable,
ListVariable,
NestedUserFunctionVariable,
TensorVariable,
@ -1300,6 +1302,106 @@ class TraceWrappedHigherOrderOperatorVariable(TorchHigherOrderOperatorVariable):
return fn.call_function(tx, args, kwargs)
class TemplatedAttentionHigherOrderVariable(TorchHigherOrderOperatorVariable):
@staticmethod
def normalize_to_args(args, kwargs):
# input signature is (query, key, value, score_mod, *other_buffers)
# Flatten args and kwargs into lists
flat_args = pytree.tree_flatten(args)[0]
flat_kwargs = pytree.tree_flatten(kwargs)[0]
# Combine the flattened lists
all_args = flat_args + flat_kwargs
return all_args
def create_wrapped_node(
self, tx, query: "VariableTracker", score_function: "VariableTracker"
):
from torch._dynamo.symbolic_convert import InstructionTranslator
from .builder import SourcelessBuilder
tx: InstructionTranslator = tx
scores_require_grad: bool = query.requires_grad
score = query.call_method(
tx,
"new_empty",
(SourcelessBuilder.create(tx, []),),
{"requires_grad": SourcelessBuilder.create(tx, scores_require_grad)},
)
def create_scalar():
return query.call_method(
tx, "new_empty", (SourcelessBuilder.create(tx, []),), {}
)
bhmn = [create_scalar() for _ in range(4)]
new_args = [score, *bhmn]
(
(body_output, body_treespec),
body_graph,
body_lifted_freevars,
) = speculate_subgraph(
tx,
score_function,
new_args,
{}, # expect only args no kwargs for now
description="templated_attention",
source_target=self.value,
set_subgraph_inputs="flatten_manual",
)
body_name = add_subgraph(
tx,
"templated_attention",
torch.fx.GraphModule(tx.output.nn_modules, body_graph),
)
body_node = make_attr(tx, body_name)
# It is possible that the score-mod function captures some free variables that are not
# passed in as arguments. In this case, we need to lift them, which is handled by speculate_subgraph.
# We then need to create proxies for this + the inputs.
lifted_args = tuple(arg for arg in body_lifted_freevars.keys())
proxy_args = (body_node,) + lifted_args
example_value = pytree.tree_map_only(
torch.fx.Proxy,
lambda a: a.node.meta["example_value"],
body_output.as_proxy(),
)
return proxy_args, {}, example_value
def call_function(
self, tx, args: "List[VariableTracker]", kwargs: "Dict[str, VariableTracker]"
) -> "VariableTracker":
from .builder import wrap_fx_proxy
query, key, value, score_mod, *other_buffers = self.normalize_to_args(
args, kwargs
)
p_args, p_kwargs, example_value = self.create_wrapped_node(tx, query, score_mod)
proxied_args = [query, key, value, *other_buffers]
# Store the invocation as a call
# Norm_kwargs contains the score_function and we don't want to proxy this because
# Proxying user defined functions is not supported.
inp_args, _ = proxy_args_kwargs(proxied_args, {})
return wrap_fx_proxy(
tx=tx,
proxy=tx.output.create_proxy(
"call_function",
self.value,
args=inp_args + p_args,
kwargs=p_kwargs,
),
example_value=example_value,
)
class AutogradFunctionApplyVariable(VariableTracker):
def __init__(self, fwd_graph, bwd_graph, parent_source, **kwargs):
super().__init__(**kwargs)


@ -1,2 +1,3 @@
from .cond import cond
from .while_loop import while_loop
from .templated_attention import templated_attention


@ -0,0 +1,209 @@
from typing import Callable, Tuple
import torch
import torch.utils._pytree as pytree
from torch._C import DispatchKey
from torch._higher_order_ops.utils import (
_has_potential_branch_input_mutation,
autograd_not_implemented,
UnsupportedAliasMutationException,
)
from torch._ops import HigherOrderOperator
from torch._subclasses import FakeTensorMode
from torch.fx.experimental.proxy_tensor import (
make_fx,
ProxyTorchDispatchMode,
track_tensor_tree,
)
class TemplatedAttentionHOP(HigherOrderOperator):
def __init__(self):
super().__init__("templated_attention")
def __call__(
self,
query: torch.Tensor,
key: torch.Tensor,
value: torch.Tensor,
score_mod: Callable,
*other_buffers: torch.Tensor,
):
if not all(isinstance(buf, torch.Tensor) for buf in other_buffers):
raise RuntimeError("Other buffers must be tensors.")
return super().__call__(query, key, value, score_mod, *other_buffers)
templated_attention = TemplatedAttentionHOP()
templated_attention.__module__ = "torch.ops.higher_order"
def math_attention(
query: torch.Tensor,
key: torch.Tensor,
value: torch.Tensor,
score_mod: Callable,
*other_buffers: torch.Tensor,
):
"""Eager implementation
This implementation uses vmap to vectorize the score_mod function over the batch, head, m, and n dimensions.
We then apply the vectorized score_mod function to the scores matrix. Each wrap of vmap applies one of the
batch, head, m, or n dimensions. We need to apply vmap 4 times to vectorize over all 4 dimensions.
Args:
query: The query tensor
key: The key tensor
value: The value tensor
score_mod: The score_mod function
other_buffers: Other buffers that are passed to the score_mod function
"""
assert len(other_buffers) == 0, "Other buffers are not yet supported."
scores = query @ key.transpose(-2, -1)
b = torch.arange(0, scores.size(0), device=scores.device)
h = torch.arange(0, scores.size(1), device=scores.device)
m = torch.arange(0, scores.size(2), device=scores.device)
n = torch.arange(0, scores.size(3), device=scores.device)
in_dim_buffers = (None,) * len(other_buffers)
score_mod = torch.vmap(score_mod, in_dims=(0, None, None, None, 0) + in_dim_buffers)
score_mod = torch.vmap(score_mod, in_dims=(0, None, None, 0, None) + in_dim_buffers)
score_mod = torch.vmap(score_mod, in_dims=(0, None, 0, None, None) + in_dim_buffers)
score_mod = torch.vmap(score_mod, in_dims=(0, 0, None, None, None) + in_dim_buffers)
scores = score_mod(scores, b, h, m, n, *other_buffers)
scores = scores.softmax(dim=-1)
return scores @ value
@templated_attention.py_impl(DispatchKey.CompositeExplicitAutograd)
def sdpa_dense(
query: torch.Tensor,
key: torch.Tensor,
value: torch.Tensor,
score_mod: Callable,
*other_buffers: torch.Tensor,
):
return math_attention(query, key, value, score_mod, *other_buffers).contiguous()
# TODO We need to implement an autograd function for this, there is some complexity to do this generically
templated_attention.py_impl(DispatchKey.Autograd)(
autograd_not_implemented(templated_attention, deferred_error=True)
)
def trace_templated_attention(
proxy_mode: ProxyTorchDispatchMode,
query: torch.Tensor,
key: torch.Tensor,
value: torch.Tensor,
score_mod: Callable,
*other_buffers: torch.Tensor,
):
"""Traces the templated_attention operator with the given score_mod function and other_buffers.
Trace SDPA will call make_fx with "fake" example vals and then trace the score_mod function
This will produce a GraphModule that will be stored on the root tracer as "sdpa_score". We
access this graph module in inductor to inline the score_mod function to the triton template.
"""
example_out = templated_attention(query, key, value, score_mod, *other_buffers)
example_vals = [
torch.zeros((), dtype=query.dtype, requires_grad=query.requires_grad)
] + [torch.zeros((), dtype=torch.int) for _ in range(4)]
score_graph = make_fx(score_mod)(*example_vals, *other_buffers)
proxy_mode.tracer.root.register_module("sdpa_score", score_graph)
node_args = (query, key, value, score_graph, *other_buffers)
proxy_args = pytree.tree_map(proxy_mode.tracer.unwrap_proxy, node_args)
out_proxy = proxy_mode.tracer.create_proxy(
"call_function", templated_attention, proxy_args, {}, name="templated_attention"
)
return track_tensor_tree(
example_out, out_proxy, constant=None, tracer=proxy_mode.tracer
)
@templated_attention.py_impl(ProxyTorchDispatchMode)
def templated_attention_proxy_torch_dispatch_mode(
mode: ProxyTorchDispatchMode,
query: torch.Tensor,
key: torch.Tensor,
value: torch.Tensor,
score_mod: Callable,
*other_buffers: torch.Tensor,
):
assert mode is not None, "Mode should always be enabled for python fallback key"
if mode.enable_tracing:
return trace_templated_attention(
mode, query, key, value, score_mod, *other_buffers
)
else:
return templated_attention(query, key, value, score_mod, *other_buffers)
@templated_attention.py_functionalize_impl
def templated_attention_functionalize(
ctx: torch._subclasses.functional_tensor.BaseFunctionalizeAPI,
query: torch.Tensor,
key: torch.Tensor,
value: torch.Tensor,
score_mod: Callable,
*other_buffers: torch.Tensor,
):
"""Defines the functionalization rules for the templated_attention operator.
Right now we are unwrapping each tensor and then redispatching to the next; however, we want to
guard against any mutations in the score_mod function, to the other_buffers since those
are free variables.
"""
query_unwrapped = ctx.unwrap_tensors(query)
key_unwrapped = ctx.unwrap_tensors(key)
value_unwrapped = ctx.unwrap_tensors(value)
other_buffers_unwrapped = ctx.unwrap_tensors(other_buffers)
# Appease the mypy overlords
assert isinstance(query_unwrapped, torch.Tensor)
assert isinstance(key_unwrapped, torch.Tensor)
assert isinstance(value_unwrapped, torch.Tensor)
assert isinstance(other_buffers_unwrapped, tuple)
assert all(isinstance(item, torch.Tensor) for item in other_buffers_unwrapped)
example_vals = [torch.zeros((), dtype=query.dtype)] + [
torch.zeros((), dtype=torch.int) for _ in range(4)
]
with ctx.redispatch_to_next() as m:
functional_score_mod = ctx.functionalize(score_mod)
pre_dispatch = hasattr(ctx, "mode") and ctx.mode.pre_dispatch
mutates = _has_potential_branch_input_mutation(
functional_score_mod, example_vals, pre_dispatch
)
# We only care about mutations of existing buffers since we can't replay these.
# However, we can just error if anything is detected
if mutates:
raise UnsupportedAliasMutationException("Mutations detected in score_mod")
out = templated_attention(
query_unwrapped,
key_unwrapped,
value_unwrapped,
functional_score_mod,
*other_buffers_unwrapped,
)
return ctx.wrap_tensors(out)
@templated_attention.py_impl(FakeTensorMode)
def templated_attention_fake_tensor_mode(
mode: FakeTensorMode,
query: torch.Tensor,
key: torch.Tensor,
value: torch.Tensor,
score_mod: Callable,
*other_buffers: Tuple[torch.Tensor, ...],
) -> torch.Tensor:
with mode:
return torch.empty_like(query, memory_format=torch.contiguous_format)


@ -1691,10 +1691,20 @@ class KernelTemplate:
Children classes: TritonTemplate, CUDATemplate
"""
@staticmethod
def indent_except_first(source: str, num_indents: int, indents_spacing=4):
lines = source.splitlines(True)
if len(lines) > 1:
lines[1:] = [
(" " * indents_spacing * num_indents) + line for line in lines[1:]
]
return "".join(lines)
@staticmethod
def _template_from_string(source):
env = jinja2_env()
if env is not None:
env.filters["indent_except_first"] = KernelTemplate.indent_except_first
return env.from_string(source)
return None


@ -0,0 +1,161 @@
""" Triton Implementation of the Templated SDPA Kernel"""
import logging
import torch
from ..select_algorithm import TritonTemplate
log = logging.getLogger(__name__)
aten = torch.ops.aten
def sdpa_grid(batch_size, num_heads, num_queries, d_model, meta):
"""How is this kernel parallelized?
We create a grid of (batch_size * num_heads, ceil_div(n_queries, query_block_size), 1)
Each block is responsible for iterating over blocks of keys and values calculating
the final attention output.
"""
import triton
return (triton.cdiv(num_queries, meta["BLOCK_M"]), batch_size * num_heads, 1)
sdpa_template = TritonTemplate(
name="sdpa",
grid=sdpa_grid,
source=r"""
{{def_kernel("Q", "K", "V")}}
# Sub notation for this kernel:
# Q: Query, K: Key, V: Value
# M: Number of queries, N: Number of keys/values, D: Model dimension
# z: Batch size, h: Number of heads, m: Number of queries per head, k: Number of keys per head
# Define Q Strides
stride_qz = {{stride("Q", 0)}}
stride_qh = {{stride("Q", 1)}}
stride_qm = {{stride("Q", 2)}}
stride_qk = {{stride("Q", 3)}}
# Define K Strides
stride_kz = {{stride("K", 0)}}
stride_kh = {{stride("K", 1)}}
stride_kn = {{stride("K", 2)}}
stride_kk = {{stride("K", 3)}}
# Define V Strides
stride_vz = {{stride("V", 0)}}
stride_vh = {{stride("V", 1)}}
stride_vk = {{stride("V", 2)}}
stride_vn = {{stride("V", 3)}}
Z = {{size("Q", 0)}}
H = {{size("Q", 1)}}
N_CTX = {{size("Q", 2)}}
# TODO I think we should do some performance work
# to find the optimal calls for perf/accuracy to tl.dot
qk_scale = 1.0
MATMUL_PRECISION = tl.float16
start_m = tl.program_id(0)
off_hz = tl.program_id(1)
qkv_offset = off_hz * stride_qh
Q_block_ptr = tl.make_block_ptr(
base=Q + qkv_offset,
shape=(N_CTX, BLOCK_DMODEL),
strides=(stride_qm, stride_qk),
offsets=(start_m * BLOCK_M, 0),
block_shape=(BLOCK_M, BLOCK_DMODEL),
order=(1, 0)
)
K_block_ptr = tl.make_block_ptr(
base=K + qkv_offset,
shape=(BLOCK_DMODEL, N_CTX),
strides=(stride_kk, stride_kn),
offsets=(0, 0),
block_shape=(BLOCK_DMODEL, BLOCK_N),
order=(0, 1)
)
V_block_ptr = tl.make_block_ptr(
base=V + qkv_offset,
shape=(N_CTX, BLOCK_DMODEL),
strides=(stride_vk, stride_vn),
offsets=(0, 0),
block_shape=(BLOCK_N, BLOCK_DMODEL),
order=(1, 0)
)
# initialize offsets
offs_m = start_m * BLOCK_M + tl.arange(0, BLOCK_M)
offs_n = tl.arange(0, BLOCK_N)
# initialize pointer to m and l
m_i = tl.zeros([BLOCK_M], dtype=tl.float32) - float("inf")
l_i = tl.zeros([BLOCK_M], dtype=tl.float32)
acc = tl.zeros([BLOCK_M, BLOCK_DMODEL], dtype=tl.float32)
# scale sm_scale by log_2(e) and use
# 2^x instead of exp in the loop because CSE and LICM
# don't work as expected with `exp` in the loop
# TODO fix me
# qk_scale = sm_scale * 1.44269504
q = tl.load(Q_block_ptr)
q = (q * qk_scale).to(MATMUL_PRECISION)
# loop over k, v and update accumulator
lo = 0
hi = N_CTX
for start_n in range(lo, hi, BLOCK_N):
start_n = tl.multiple_of(start_n, BLOCK_N)
# -- load k, v --
k = tl.load(K_block_ptr)
v = tl.load(V_block_ptr)
# -- compute qk ---
qk = tl.zeros([BLOCK_M, BLOCK_N], dtype=tl.float32)
qk += tl.dot(q, k.to(MATMUL_PRECISION))
# ~~~~~~~~~~~~~~~~~~~ Apply score modification ~~~~~~~~~~~~~~~~~~~
{{ modification(
score="qk",
b="off_hz // H",
h="off_hz % H",
m="offs_m[:, None]",
n="start_n + offs_n[None, :]",
out="qk"
) | indent_except_first(2) }}
# ~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~
# -- compute scaling constant ---
row_max = tl.max(qk, 1)
m_i_new = tl.maximum(m_i, row_max)
masked_out_rows = (m_i_new == float("-inf"))
# TODO FIX ME and use 2^x instead of exp
# alpha = tl.math.exp2(m_i - m_i_new)
# p = tl.math.exp2(qk - m_i_new[:, None])
alpha = tl.math.exp(m_i - m_i_new)
alpha = tl.where(masked_out_rows, 0, alpha)
p = tl.math.exp(qk - m_i_new[:, None])
p = tl.where(masked_out_rows[:, None], 0, p)
# -- scale and update acc --
acc_scale = l_i * 0 + alpha # workaround some compiler bug
acc *= acc_scale[:, None]
acc += tl.dot(p.to(MATMUL_PRECISION), v.to(MATMUL_PRECISION))
# -- update m_i and l_i --
l_i = l_i * alpha + tl.sum(p, 1)
m_i = m_i_new
# update pointers
K_block_ptr = tl.advance(K_block_ptr, (0, BLOCK_N))
V_block_ptr = tl.advance(V_block_ptr, (BLOCK_N, 0))
# write back l and m
acc = acc / l_i[:, None]
# TODO For backward support we need to add the Logsumexp
# l_ptrs = L + off_hz * N_CTX + offs_m
# tl.store(l_ptrs, m_i + tl.math.log2(l_i))
idx_z = tl.program_id(1) // H
idx_h = tl.program_id(1) % H
idx_m = offs_m[:, None]
idx_d = tl.arange(0, BLOCK_DMODEL)[None, :]
# TODO generalize and add proper mask support
mask = (idx_m != -1) & (idx_d != -1)
{{store_output(("idx_z", "idx_h", "idx_m", "idx_d"), "acc")}}
""",
)


@ -2407,7 +2407,6 @@ make_fallback(aten._to_sparse)
# Needs dimname support
make_fallback(aten.zeros.names)
# 6) Pattern-matched
make_fallback(
aten._scaled_dot_product_efficient_attention.default,
@ -6000,6 +5999,117 @@ def while_loop(cond_fn, body_fn, carried_inputs, additional_inputs):
return list(map(TensorBox.create, result))
@register_lowering(torch.ops.higher_order.templated_attention)
def templated_attention(*args, **kwargs):
from torch._prims_common import make_contiguous_strides_for
from .ir import (
ComputedBuffer,
FixedLayout,
FlexibleLayout,
InputBuffer,
StorageBox,
TensorBox,
)
query, key, value, subgraph = args
def create_placeholder(name: str, dtype: torch.dtype) -> InputBuffer:
return TensorBox.create(
InputBuffer(
name,
FixedLayout(
query.get_device(),
dtype,
[
1,
],
[
1,
],
),
)
)
scalar_inps = ["score", "b", "h", "m", "n"]
env = {}
cnt = 0
placeholder_inps = [
create_placeholder(name, dtype)
for name, dtype in [
("score", query.get_dtype()),
("b", torch.int64),
("h", torch.int64),
("m", torch.int64),
("n", torch.int64),
]
]
for node in subgraph.graph_module.graph.nodes:
# There are two classes of placeholder inputs that we need
# to handle differently. For the first n_scalar_inps inputs
# we expect that these placeholders were generated by the make_fx call
# in the templated Attention HOP. So we need to create a new placeholder
# TensorBox for each of these inputs. For the rest of the inputs we
# expect that these are lifted inputs that fill up the '*other_buffers'
# tuple and already have corresponding TensorBoxes passed in as args.
if node.op == "placeholder":
is_lifted_input = cnt >= len(scalar_inps)
env[node] = args[cnt - 1] if is_lifted_input else placeholder_inps[cnt]
cnt += 1
elif node.op == "call_function":
# For call_function we use the default lowerings and pass in the
# already created TensorBoxes as args
from torch.utils._pytree import tree_map
env[node] = lowerings[node.target](
*tree_map(lambda x: env[x] if x in env else x, node.args)
)
elif node.op == "output":
# For the output node we need to create a ComputedBuffer
# which represents the actual score modification
output_buffer = env[node.args[0]]
assert isinstance(output_buffer.data, StorageBox), (
"The output node for the templated attention subgraph must be a StorageBox, but got: ",
type(output_buffer),
)
# Create the ComputedBuffer directly; it will be inlined into the modification block
subgraph_buffer = ComputedBuffer(
name=None,
layout=FlexibleLayout(
device=output_buffer.data.get_device(),
dtype=output_buffer.data.get_dtype(),
size=output_buffer.data.get_size(),
),
data=output_buffer.data.data, # type: ignore[arg-type]
)
from .kernel.templated_attention import sdpa_template
layout = FixedLayout(
output_buffer.get_device(),
query.get_dtype(),
query.get_size(),
make_contiguous_strides_for(query.get_size()),
)
choices: List[Any] = []
from .select_algorithm import autotune_select_algorithm
sdpa_template.maybe_append_choice(
choices=choices,
input_nodes=(query, key, value),
layout=layout,
subgraphs=subgraph_buffer,
num_stages=2,
num_warps=4,
BLOCK_M=64,
BLOCK_N=128,
BLOCK_DMODEL=query.get_size()[-1],
)
return autotune_select_algorithm(
"sdpa", choices, [query, key, value], layout
)
raise ValueError("TemplatedAttention was passed a subgraph with no output node!")
try:
import torch.distributed._functional_collectives


@ -10,7 +10,7 @@ import time
from concurrent.futures import ThreadPoolExecutor
from io import StringIO
from typing import Any, Callable, Dict, List, Optional, Union
from typing import Any, Callable, Dict, List, Optional, Tuple, Union
from unittest.mock import patch
import sympy
@ -98,6 +98,7 @@ class TritonTemplateKernel(TritonKernel):
prefix_args=0,
suffix_args=0,
epilogue_fn=identity,
subgraphs=None,
*,
index_dtype,
):
@ -124,6 +125,8 @@ class TritonTemplateKernel(TritonKernel):
self.epilogue_fn = epilogue_fn
self.render_hooks = dict()
self.triton_meta: Optional[Dict[str, object]] = None
# For Templated Attention
self.subgraphs = subgraphs
def need_numel_args(self):
return False
@ -264,7 +267,53 @@ class TritonTemplateKernel(TritonKernel):
val = self.named_input_nodes[name].get_stride()[index]
return texpr(self.rename_indexing(val))
def store_output(self, indices, val, mask):
def modification(self, **fixed_inputs) -> str:
"""This function generates the code body to populate
a 'modification' placeholder within a template
TODO come up with standardized way to modify templates, with
potential multiple modifications
"""
class PlaceholderSubstitution(V.WrapperHandler): # type: ignore[name-defined]
self.name = "PlaceholderSubstitution"
def load(self, name: str, index: sympy.Expr):
if name not in fixed_inputs:
raise AssertionError(
f"All loads should be coming from fixed inputs - {name}"
)
return f"({fixed_inputs[name]})"
# TODO Doesn't work yet
def indirect_indexing(self, index_var, size, check):
return self._inner.indirect_indexing(index_var, size, False)
# return sympy_symbol(str(index_var))
# if self.modification_cache is None:
with V.set_ops_handler(PlaceholderSubstitution(V.ops)):
assert isinstance(
self.subgraphs, ir.ComputedBuffer
), "Expected the subgraph to be a ComputedBuffer"
if isinstance(self.subgraphs.data, ir.InputBuffer):
out = self.subgraphs.data.make_loader()((1,))
else:
out = self.subgraphs.data.inner_fn((1,))
self.codegen_body()
self.body.writeline(f"{fixed_inputs['out']} = {out.value}")
body_val = self.body.getvalue()
self.body.clear()
self.cse.invalidate(set())
return body_val
def store_output(
self,
indices: Union[List[Any], Tuple[Any]],
val: str,
mask: Optional[str] = None,
):
"""
Hook called from template code to store the final output
(if the buffer hasn't been optimized away), then append any
@ -272,7 +321,7 @@ class TritonTemplateKernel(TritonKernel):
"""
assert isinstance(indices, (list, tuple))
assert isinstance(val, str)
assert isinstance(mask, str)
assert isinstance(mask, (str, type(None)))
assert self.template_mask is None
indices = list(map(TritonPrinter.paren, indices))
index_symbols = [sympy.Symbol(x) for x in indices]
@ -357,6 +406,7 @@ class TritonTemplateKernel(TritonKernel):
self.stride,
self.store_output,
self.make_load,
self.modification,
]
}
@ -466,6 +516,7 @@ class TritonTemplate(KernelTemplate):
prefix_args=0,
suffix_args=0,
epilogue_fn=identity,
subgraphs=None,
**kwargs,
):
assert self.template, "requires jinja2"
@ -496,6 +547,7 @@ class TritonTemplate(KernelTemplate):
suffix_args=suffix_args,
epilogue_fn=epilogue_fn,
index_dtype="tl.int32",
subgraphs=subgraphs,
)
with patch.object(
V.graph, "get_dtype", self._fake_get_dtype(fake_out)


@ -513,7 +513,7 @@ class BaseFunctionalizeAPI(ABC):
@abstractmethod
def unwrap_tensors(
self, args: Union[torch.Tensor, Tuple[torch.Tensor, ...]]
) -> Tuple[Any]:
) -> Union[torch.Tensor, Tuple[torch.Tensor, ...]]:
pass
@abstractmethod
@ -557,7 +557,7 @@ class PythonFunctionalizeAPI(BaseFunctionalizeAPI):
def unwrap_tensors(
self, args: Union[torch.Tensor, Tuple[torch.Tensor, ...]]
) -> Tuple[Any]:
) -> Union[torch.Tensor, Tuple[torch.Tensor, ...]]:
return torch.utils._pytree.tree_map_only(
FunctionalTensor, FunctionalTensor.from_functional, args
)
@ -599,7 +599,7 @@ class CppFunctionalizeAPI(BaseFunctionalizeAPI):
def unwrap_tensors(
self, args: Union[torch.Tensor, Tuple[torch.Tensor, ...]]
) -> Tuple[Any]:
) -> Union[torch.Tensor, Tuple[torch.Tensor, ...]]:
from torch._functorch.eager_transforms import (
_unwrap_all_tensors_from_functional,
)
@ -638,7 +638,7 @@ class FunctorchFunctionalizeAPI(BaseFunctionalizeAPI):
def unwrap_tensors(
self, args: Union[torch.Tensor, Tuple[torch.Tensor, ...]]
) -> Tuple[Any]:
) -> Union[torch.Tensor, Tuple[torch.Tensor, ...]]:
from torch._functorch.eager_transforms import (
_unwrap_all_tensors_from_functional,
)


@ -0,0 +1,89 @@
"""This module implements the user facing API for templated attention in PyTorch."""
import functools
from typing import Callable
import torch
from torch._higher_order_ops.templated_attention import (
templated_attention as templated_attention_hop,
)
from torch.nn.attention._utils import _validate_sdpa_input
def _compose(*fs):
"""Compose a sequence of score_mod functions."""
def compose2(f, g):
def inner(score, b, h, m, n):
return f(g(score, b, h, m, n), b, h, m, n)
return inner
return functools.reduce(compose2, fs)
_score_mod_signature = Callable[
[torch.Tensor, torch.Tensor, torch.Tensor, torch.Tensor, torch.Tensor], torch.Tensor
]
def _templated_attention(
query: torch.Tensor,
key: torch.Tensor,
value: torch.Tensor,
score_mod: _score_mod_signature,
) -> torch.Tensor:
r"""This function implements scaled dot product attention with an arbitrary attention score modification function.
This function computes the scaled dot product attention between query, key, and value tensors with a user-defined
attention score modification function. The attention score modification function will be applied after the attention
scores have been calculated between the query and key tensors. The attention scores are calculated as follows:
The ``score_mod`` function should have the following signature:
.. code-block:: python
def score_mod(
score: torch.Tensor,
batch: torch.Tensor,
head: torch.Tensor,
token_q: torch.Tensor,
token_kv: torch.Tensor
) -> torch.Tensor:
Where:
- ``score``: A scalar tensor representing the attention score,
with the same data type and device as the query, key, and value tensors.
- ``batch``, ``head``, ``token_q``, ``token_kv``: Scalar tensors indicating
the batch index, head index, query index, and key/value index, respectively.
These should have the ``torch.int`` data type and be located on the same device as the score tensor.
Args:
query (Tensor): Query tensor; shape :math:`(B, H, L, E)`.
key (Tensor): Key tensor; shape :math:`(B, H, S, E)`.
value (Tensor): Value tensor; shape :math:`(B, H, S, Ev)`.
score_mod (Callable): Function to modify attention scores
Returns:
output (Tensor): Attention output; shape :math:`(B, H, L, Ev)`.
Shape legend:
- :math:`N: \text{Batch size} ... : \text{Any number of other batch dimensions (optional)}`
- :math:`S: \text{Source sequence length}`
- :math:`L: \text{Target sequence length}`
- :math:`E: \text{Embedding dimension of the query and key}`
- :math:`Ev: \text{Embedding dimension of the value}`
.. warning::
`torch.nn.attention.templated_attention` is a prototype feature in PyTorch. It doesn't support training currently.
Please look forward to a more stable implementation in a future version of PyTorch.
Read more about feature classification at: https://pytorch.org/blog/pytorch-feature-classification-changes/#prototype
"""
# Some basic input validation
_validate_sdpa_input(query, key, value)
# This restriction will be removed in a newer version of the kernel
if query.size(-2) != key.size(-2):
raise ValueError(
"NYI: The target sequence length (L) of the query tensor must match the source sequence length (S) of the key tensor."
)
return templated_attention_hop(query, key, value, score_mod)


@ -58,6 +58,8 @@ PLATFORM_SUPPORTS_FUSED_ATTENTION: bool = LazyVal(lambda: PLATFORM_SUPPORTS_FLAS
PLATFORM_SUPPORTS_FUSED_SDPA: bool = TEST_CUDA and not TEST_WITH_ROCM
PLATFORM_SUPPORTS_BF16: bool = LazyVal(lambda: TEST_CUDA and SM80OrLater)
if TEST_NUMBA:
try:
import numba.cuda


@ -3,12 +3,15 @@
import torch
import functools
from torch.testing import make_tensor
import unittest
from functorch.experimental.control_flow import map
from torch.testing._internal.opinfo.core import (
OpInfo,
SampleInput,
)
from torch.testing._internal.common_dtype import all_types_and
from torch.testing._internal.common_dtype import all_types_and, custom_types
from torch.testing._internal.opinfo.core import DecorateInfo
from torch.nn.attention._templated_attention import _templated_attention
def sample_inputs_map(opinfo, device, dtype, requires_grad, **kwargs):
make_arg = functools.partial(
@ -105,6 +108,23 @@ def sample_inputs_auto_functionalize(opinfo, device, dtype, requires_grad, **kwa
def simple_auto_functionalize(x, z):
return torch.ops.testlib.mutating_custom_op(x, z)
def sample_inputs_templated_attention(opinfo, device, dtype, requires_grad, **kwargs):
make_arg = functools.partial(
make_tensor, device=device, dtype=dtype, requires_grad=requires_grad
)
def score_mod(score, b, h, m, n):
return score + h
yield SampleInput(
make_arg(2, 2, 64, 8, low=0.1, high=2),
make_arg(2, 2, 64, 8, low=0.1, high=2),
make_arg(2, 2, 64, 8, low=0.1, high=2),
score_mod,
)
hop_db = [
OpInfo(
name="map",
@ -167,5 +187,23 @@ hop_db = [
check_batched_forward_grad=False,
check_inplace_batched_forward_grad=False,
supports_autograd=False,
)
),
OpInfo(
name="templated_attention",
variant_test_name="simple",
op=_templated_attention,
sample_inputs_func=sample_inputs_templated_attention,
dtypes=custom_types(torch.float16, torch.float32),
supports_out=False,
check_batched_grad=False,
check_batched_gradgrad=False,
check_batched_forward_grad=False,
check_inplace_batched_forward_grad=False,
skips=(
DecorateInfo(unittest.expectedFailure, "TestHOP", "test_aot_export"),
DecorateInfo(unittest.expectedFailure, "TestHOP", "test_pre_dispatch_export"),
DecorateInfo(unittest.expectedFailure, "TestHOP", "test_serialize_export"),
DecorateInfo(unittest.expectedFailure, "TestHOP", "test_retrace_export"),
)
),
]