vllm.model_executor.layers.fla.ops.fused_recurrent ¶
fused_recurrent_gated_delta_rule ¶
fused_recurrent_gated_delta_rule(
q: Tensor,
k: Tensor,
v: Tensor,
g: Tensor,
beta: Tensor = None,
scale: float = None,
initial_state: Tensor = None,
inplace_final_state: bool = True,
cu_seqlens: LongTensor | None = None,
ssm_state_indices: Tensor | None = None,
num_accepted_tokens: Tensor | None = None,
use_qk_l2norm_in_kernel: bool = False,
) -> tuple[Tensor, Tensor]
Parameters:
| Name | Type | Description | Default |
|---|---|---|---|
| `q` | `Tensor` | queries of shape `[B, T, H, K]` | required |
| `k` | `Tensor` | keys of shape `[B, T, H, K]` | required |
| `v` | `Tensor` | values of shape `[B, T, HV, V]`; grouped value heads are used if `HV > H` | required |
| `g` | `Tensor` | g (decays) of shape `[B, T, HV]` | `None` |
| `beta` | `Tensor` | betas of shape `[B, T, HV]` | `None` |
| `scale` | `Optional[float]` | Scale factor for the attention scores. If not provided, it defaults to `1 / sqrt(K)`. | `None` |
| `initial_state` | `Optional[Tensor]` | Initial state of shape `[N, HV, K, V]` for `N` input sequences. For equal-length input sequences, `N` equals the batch size `B`. | `None` |
| `inplace_final_state` | `bool` | Whether to store the final state in-place to save memory. | `True` |
| `cu_seqlens` | `LongTensor` | Cumulative sequence lengths of shape `[N + 1]`, used for variable-length inputs. | `None` |
| `ssm_state_indices` | `Optional[Tensor]` | Indices mapping the input sequences to the initial/final states. | `None` |
| `num_accepted_tokens` | `Optional[Tensor]` | Number of accepted tokens for each sequence during decoding. | `None` |
Returns:
| Name | Type | Description |
|---|---|---|
| `o` | `Tensor` | Outputs of shape `[B, T, HV, V]` |
| `final_state` | `Tensor` | Final state of shape `[N, HV, K, V]` |
Examples:

```python
>>> import torch
>>> import torch.nn.functional as F
>>> from einops import rearrange
>>> from fla.ops.gated_delta_rule import fused_recurrent_gated_delta_rule
>>> # inputs with equal lengths
>>> B, T, H, HV, K, V = 4, 2048, 4, 8, 512, 512
>>> q = torch.randn(B, T, H, K, device='cuda')
>>> k = F.normalize(torch.randn(B, T, H, K, device='cuda'), p=2, dim=-1)
>>> v = torch.randn(B, T, HV, V, device='cuda')
>>> g = F.logsigmoid(torch.rand(B, T, HV, device='cuda'))
>>> beta = torch.rand(B, T, HV, device='cuda').sigmoid()
>>> h0 = torch.randn(B, HV, K, V, device='cuda')
>>> o, ht = fused_recurrent_gated_delta_rule(
...     q, k, v, g, beta,
...     initial_state=h0,
... )
>>> # for variable-length inputs, the batch size B is expected to be 1 and cu_seqlens is required
>>> q, k, v, g, beta = map(lambda x: rearrange(x, 'b t ... -> 1 (b t) ...'), (q, k, v, g, beta))
>>> # for a batch with 4 sequences, cu_seqlens with 5 start/end positions is expected
>>> cu_seqlens = q.new_tensor([0, 2048, 4096, 6144, 8192], dtype=torch.long)
>>> o_var, ht_var = fused_recurrent_gated_delta_rule(
...     q, k, v, g, beta,
...     initial_state=h0,
...     cu_seqlens=cu_seqlens,
... )
```
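To make the math the fused kernel computes concrete, the gated delta rule can be written as a naive per-timestep loop: decay the `[K, V]` state by `exp(g_t)`, apply a beta-scaled rank-1 delta-rule correction toward `v_t`, then read the state out with `q_t`. The sketch below is an unoptimized reference under stated assumptions (the helper name `gated_delta_rule_ref` is illustrative, not part of the API; a `[B, HV, K, V]` state layout and `repeat_interleave` head grouping are assumed); the actual op is a fused Triton kernel.

```python
import torch


def gated_delta_rule_ref(q, k, v, g, beta, scale=None, initial_state=None):
    """Naive reference: S_t = exp(g_t) * S_{t-1} + beta_t * k_t (v_t - S_t'^T k_t)^T."""
    B, T, H, K = q.shape
    HV, V = v.shape[-2], v.shape[-1]
    scale = K ** -0.5 if scale is None else scale
    # Broadcast q/k heads across value heads when HV > H (grouped value heads).
    q = q.repeat_interleave(HV // H, dim=2)
    k = k.repeat_interleave(HV // H, dim=2)
    S = (torch.zeros(B, HV, K, V, dtype=q.dtype) if initial_state is None
         else initial_state.clone())
    o = torch.empty(B, T, HV, V, dtype=q.dtype)
    for t in range(T):
        S = S * g[:, t].exp()[..., None, None]                      # gated decay
        err = v[:, t] - torch.einsum('bhk,bhkv->bhv', k[:, t], S)   # delta-rule error
        S = S + torch.einsum('bhk,bhv->bhkv',
                             k[:, t], beta[:, t, :, None] * err)    # rank-1 update
        o[:, t] = torch.einsum('bhk,bhkv->bhv', q[:, t] * scale, S)
    return o, S
```

This runs in O(T·K·V) per head and is only useful for understanding or testing; the fused kernel avoids materializing the state at every step.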
Source code in vllm/model_executor/layers/fla/ops/fused_recurrent.py