This is an implementation of grouped-query attention introduced by
Ainslie et al., 2023. Here num_key_value_heads denotes the number of
groups; setting num_key_value_heads to 1 is equivalent to multi-query
attention, and when num_key_value_heads is equal to num_query_heads it
is equivalent to multi-head attention.
This layer first projects the query, key, and value tensors. Then, key
and value are repeated to match the number of heads of query. The query
is then scaled and its dot product with the key tensors is computed; the
results are softmaxed to obtain attention probabilities. The value
tensors are then interpolated by these probabilities and concatenated
back into a single tensor.
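As a rough sketch of these equivalences (head_dim, head counts, and
variable names below are arbitrary example values, assuming the keras3
package is attached):

library(keras3)

# Grouped-query attention: 8 query heads share 2 key/value heads,
# so each group of 4 query heads attends with one key/value head.
gqa <- layer_group_query_attention(
  head_dim = 32, num_query_heads = 8, num_key_value_heads = 2
)

# num_key_value_heads = 1 reduces to multi-query attention.
mqa <- layer_group_query_attention(
  head_dim = 32, num_query_heads = 8, num_key_value_heads = 1
)

# num_key_value_heads equal to num_query_heads reduces to
# multi-head attention.
mha_like <- layer_group_query_attention(
  head_dim = 32, num_query_heads = 8, num_key_value_heads = 8
)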
Usage
layer_group_query_attention(
object,
head_dim,
num_query_heads,
num_key_value_heads,
dropout = 0,
use_bias = TRUE,
kernel_initializer = "glorot_uniform",
bias_initializer = "zeros",
kernel_regularizer = NULL,
bias_regularizer = NULL,
activity_regularizer = NULL,
kernel_constraint = NULL,
bias_constraint = NULL,
...
)
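A hedged usage sketch: when constructed without object, the call above
returns a layer instance that can be applied directly to query and value
tensors. The shapes and names below are illustrative only.

library(keras3)

attn <- layer_group_query_attention(
  head_dim = 16,
  num_query_heads = 8,
  num_key_value_heads = 2
)

# Example input: (batch_dim = 2, target_seq_len = 10, feature_dim = 64).
query <- array(rnorm(2 * 10 * 64), dim = c(2, 10, 64))

# Self-attention: the same tensor is passed as query and value.
output <- attn(query, query)
# output is expected to have shape (2, 10, 64), i.e.
# (batch_dim, target_seq_len, feature_dim).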
Arguments
- object
Object to compose the layer with. A tensor, array, or sequential model.
- head_dim
Size of each attention head.
- num_query_heads
Number of query attention heads.
- num_key_value_heads
Number of key and value attention heads.
- dropout
Dropout probability.
- use_bias
Boolean, whether the dense layers use bias vectors/matrices.
- kernel_initializer
Initializer for dense layer kernels.
- bias_initializer
Initializer for dense layer biases.
- kernel_regularizer
Regularizer for dense layer kernels.
- bias_regularizer
Regularizer for dense layer biases.
- activity_regularizer
Regularizer for dense layer activity.
- kernel_constraint
Constraint for dense layer kernels.
- bias_constraint
Constraint for dense layer biases.
- ...
For forward/backward compatibility.
Value
attention_output: Result of the computation, of shape
(batch_dim, target_seq_len, feature_dim), where target_seq_len is the
target sequence length and feature_dim is the last dimension of the
query input.
attention_scores: (Optional) attention coefficients of shape
(batch_dim, num_query_heads, target_seq_len, source_seq_len).
Call Arguments
- query
Query tensor of shape (batch_dim, target_seq_len, feature_dim), where
batch_dim is the batch size, target_seq_len is the length of the target
sequence, and feature_dim is the dimension of the features.
- value
Value tensor of shape (batch_dim, source_seq_len, feature_dim), where
batch_dim is the batch size, source_seq_len is the length of the source
sequence, and feature_dim is the dimension of the features.
- key
Optional key tensor of shape (batch_dim, source_seq_len, feature_dim).
If not given, value will be used for both key and value, which is the
most common case.
- attention_mask
A boolean mask of shape (batch_dim, target_seq_len, source_seq_len)
that prevents attention to certain positions. The boolean mask specifies
which query elements can attend to which key elements, where 1 indicates
attention and 0 indicates no attention. Broadcasting can happen for the
missing batch dimensions and the head dimension.
- return_attention_scores
A boolean indicating whether the output should be
(attention_output, attention_scores) if TRUE, or attention_output if
FALSE. Defaults to FALSE.
- training
Boolean indicating whether the layer should behave in training mode
(adding dropout) or in inference mode (no dropout). Falls back to the
training mode of the parent layer/model, or FALSE (inference) if there
is no parent layer.
- use_causal_mask
A boolean indicating whether to apply a causal mask to prevent tokens
from attending to future tokens (e.g., used in a decoder Transformer).
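For example, reusing the attn layer and query array from the usage
sketch after the Usage section (a sketch; treating the result of a call
with return_attention_scores = TRUE as a list of two tensors is an
assumption about how the pair is returned to R):

res <- attn(
  query, query,
  return_attention_scores = TRUE,
  use_causal_mask = TRUE
)

# Expected, under the assumption above:
# res[[1]]: attention_output, shape (2, 10, 64)
# res[[2]]: attention_scores, shape (2, 8, 10, 10), i.e.
#           (batch_dim, num_query_heads, target_seq_len, source_seq_len)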
See also
Other attention layers: layer_additive_attention()
layer_attention()
layer_multi_head_attention()
Other layers: Layer()
layer_activation()
layer_activation_elu()
layer_activation_leaky_relu()
layer_activation_parametric_relu()
layer_activation_relu()
layer_activation_softmax()
layer_activity_regularization()
layer_add()
layer_additive_attention()
layer_alpha_dropout()
layer_attention()
layer_average()
layer_average_pooling_1d()
layer_average_pooling_2d()
layer_average_pooling_3d()
layer_batch_normalization()
layer_bidirectional()
layer_category_encoding()
layer_center_crop()
layer_concatenate()
layer_conv_1d()
layer_conv_1d_transpose()
layer_conv_2d()
layer_conv_2d_transpose()
layer_conv_3d()
layer_conv_3d_transpose()
layer_conv_lstm_1d()
layer_conv_lstm_2d()
layer_conv_lstm_3d()
layer_cropping_1d()
layer_cropping_2d()
layer_cropping_3d()
layer_dense()
layer_depthwise_conv_1d()
layer_depthwise_conv_2d()
layer_discretization()
layer_dot()
layer_dropout()
layer_einsum_dense()
layer_embedding()
layer_feature_space()
layer_flatten()
layer_flax_module_wrapper()
layer_gaussian_dropout()
layer_gaussian_noise()
layer_global_average_pooling_1d()
layer_global_average_pooling_2d()
layer_global_average_pooling_3d()
layer_global_max_pooling_1d()
layer_global_max_pooling_2d()
layer_global_max_pooling_3d()
layer_group_normalization()
layer_gru()
layer_hashed_crossing()
layer_hashing()
layer_identity()
layer_integer_lookup()
layer_jax_model_wrapper()
layer_lambda()
layer_layer_normalization()
layer_lstm()
layer_masking()
layer_max_pooling_1d()
layer_max_pooling_2d()
layer_max_pooling_3d()
layer_maximum()
layer_mel_spectrogram()
layer_minimum()
layer_multi_head_attention()
layer_multiply()
layer_normalization()
layer_permute()
layer_random_brightness()
layer_random_contrast()
layer_random_crop()
layer_random_flip()
layer_random_rotation()
layer_random_translation()
layer_random_zoom()
layer_repeat_vector()
layer_rescaling()
layer_reshape()
layer_resizing()
layer_rnn()
layer_separable_conv_1d()
layer_separable_conv_2d()
layer_simple_rnn()
layer_spatial_dropout_1d()
layer_spatial_dropout_2d()
layer_spatial_dropout_3d()
layer_spectral_normalization()
layer_string_lookup()
layer_subtract()
layer_text_vectorization()
layer_tfsm()
layer_time_distributed()
layer_torch_module_wrapper()
layer_unit_normalization()
layer_upsampling_1d()
layer_upsampling_2d()
layer_upsampling_3d()
layer_zero_padding_1d()
layer_zero_padding_2d()
layer_zero_padding_3d()
rnn_cell_gru()
rnn_cell_lstm()
rnn_cell_simple()
rnn_cells_stack()