This is an implementation of multi-headed attention as described in the paper "Attention Is All You Need" (Vaswani et al., 2017). If query, key, and value are the same, then this is self-attention. Each timestep in query attends to the corresponding sequence in key, and returns a fixed-width vector.

This layer first projects query, key, and value. These are (effectively) a list of tensors of length num_attention_heads, where the corresponding shapes are (batch_size, <query dimensions>, key_dim), (batch_size, <key/value dimensions>, key_dim), and (batch_size, <key/value dimensions>, value_dim).

Then, the query and key tensors are dot-producted and scaled. These are softmaxed to obtain attention probabilities. The value tensors are then interpolated by these probabilities and concatenated back into a single tensor.

Finally, the result tensor, whose last dimension is value_dim, can take a linear projection and be returned.
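As a concrete sketch of the behavior described above (shapes chosen purely for illustration), the following creates a two-head self-attention layer and applies it to a symbolic sequence input:

library(keras3)

# Illustrative shapes: batch size is implicit; sequence length 8, feature dim 16.
x <- keras_input(shape = c(8, 16))

# Two heads; query and key are projected to key_dim = 16 per head.
attn <- layer_multi_head_attention(num_heads = 2, key_dim = 16)

# Passing the same tensor as query and value gives self-attention.
# The output keeps the query's sequence length and feature dim: (batch, 8, 16).
out <- attn(x, x)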
Usage
layer_multi_head_attention(
  inputs,
  num_heads,
  key_dim,
  value_dim = NULL,
  dropout = 0,
  use_bias = TRUE,
  output_shape = NULL,
  attention_axes = NULL,
  kernel_initializer = "glorot_uniform",
  bias_initializer = "zeros",
  kernel_regularizer = NULL,
  bias_regularizer = NULL,
  activity_regularizer = NULL,
  kernel_constraint = NULL,
  bias_constraint = NULL,
  seed = NULL,
  ...
)
Arguments
- inputs
see description
- num_heads
Number of attention heads.
- key_dim
Size of each attention head for query and key.
- value_dim
Size of each attention head for value.
- dropout
Dropout probability.
- use_bias
Boolean, whether the dense layers use bias vectors/matrices.
- output_shape
The expected shape of an output tensor, besides the batch and sequence dims. If not specified, projects back to the query feature dim (the query input's last dimension).
- attention_axes
Axes over which the attention is applied. NULL means attention over all axes except batch, heads, and features.
- kernel_initializer
Initializer for dense layer kernels.
- bias_initializer
Initializer for dense layer biases.
- kernel_regularizer
Regularizer for dense layer kernels.
- bias_regularizer
Regularizer for dense layer biases.
- activity_regularizer
Regularizer for dense layer activity.
- kernel_constraint
Constraint for dense layer kernels.
- bias_constraint
Constraint for dense layer biases.
- seed
Optional integer to seed the dropout layer.
- ...
For forward/backward compatibility.
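For illustration only (all values here are assumed, not defaults), a constructor call that overrides several of the arguments above might look like:

# Use 4 heads, project the concatenated heads to 64 output features instead of
# back to the query feature dim, apply 10% attention dropout, and seed that dropout.
attn <- layer_multi_head_attention(
  num_heads = 4,
  key_dim = 32,
  value_dim = 32,
  dropout = 0.1,
  output_shape = 64,
  seed = 1
)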
Value
The return value depends on the value provided for the first argument (inputs). If it is:
- a keras_model_sequential(), then the layer is added to the sequential model (which is modified in place). To enable piping, the sequential model is also returned, invisibly.
- a keras_input(), then the output tensor from calling layer(input) is returned.
- NULL or missing, then a Layer instance is returned.
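A minimal sketch of two of these modes (shapes assumed): calling with the first argument missing returns a Layer instance, which can then be applied to keras_input() tensors and wrapped in a functional model.

q <- keras_input(shape = c(8, 16))   # query: target length 8
v <- keras_input(shape = c(4, 16))   # value: source length 4

attn <- layer_multi_head_attention(num_heads = 2, key_dim = 16)  # Layer instance
out <- attn(q, v)                                                # output tensor

model <- keras_model(inputs = list(q, v), outputs = out)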
Call Arguments
- query: Query tensor of shape (B, T, dim), where B is the batch size, T is the target sequence length, and dim is the feature dimension.
- value: Value tensor of shape (B, S, dim), where B is the batch size, S is the source sequence length, and dim is the feature dimension.
- key: Optional key tensor of shape (B, S, dim). If not given, value is used for both key and value, which is the most common case.
- attention_mask: A boolean mask of shape (B, T, S) that prevents attention to certain positions. The mask specifies which query elements can attend to which key elements; 1 indicates attention and 0 indicates no attention. Broadcasting can happen for the missing batch dimensions and the head dimension.
- return_attention_scores: A boolean indicating whether the output should be (attention_output, attention_scores) if TRUE, or attention_output if FALSE. Defaults to FALSE.
- training: Boolean indicating whether the layer should behave in training mode (adding dropout) or in inference mode (no dropout). Defaults to the training mode of the parent layer/model, or FALSE (inference) if there is no parent layer.
- use_causal_mask: A boolean indicating whether to apply a causal mask to prevent tokens from attending to future tokens (e.g., used in a decoder Transformer).
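As a sketch of these call arguments (shapes assumed), a decoder-style self-attention call that applies a causal mask:

x <- keras_input(shape = c(8, 16))
attn <- layer_multi_head_attention(num_heads = 2, key_dim = 16)

# Each position may attend only to itself and to earlier positions.
out <- attn(x, x, use_causal_mask = TRUE)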
Call return
- attention_output: The result of the computation, of shape (B, T, E), where T is the target sequence length and E is the query input's last dimension if output_shape is NULL. Otherwise, the multi-head outputs are projected to the shape specified by output_shape.
- attention_scores: (Optional) multi-head attention coefficients over the attention axes.
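Continuing the sketch above (shapes assumed), requesting the attention scores as well; with the default attention axes, the scores have shape (B, num_heads, T, S), and the pair of return values typically arrives as an R list of two tensors:

res <- attn(x, x, return_attention_scores = TRUE)
attention_output <- res[[1]]  # shape (batch, 8, 16)
attention_scores <- res[[2]]  # shape (batch, 2, 8, 8) for this self-attention example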
Properties
A MultiHeadAttention
Layer
instance has the following additional read-only properties:
attention_axes
dropout
key_dense
key_dim
num_heads
output_dense
output_shape
query_dense
use_bias
value_dense
value_dim
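For example (using the attn instance created above), these properties can be read with $:

attn$num_heads  # 2
attn$key_dim    # 16
attn$use_bias   # TRUE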
See also
Other attention layers: layer_additive_attention()
layer_attention()
layer_group_query_attention()
Other layers: Layer()
layer_activation()
layer_activation_elu()
layer_activation_leaky_relu()
layer_activation_parametric_relu()
layer_activation_relu()
layer_activation_softmax()
layer_activity_regularization()
layer_add()
layer_additive_attention()
layer_alpha_dropout()
layer_attention()
layer_average()
layer_average_pooling_1d()
layer_average_pooling_2d()
layer_average_pooling_3d()
layer_batch_normalization()
layer_bidirectional()
layer_category_encoding()
layer_center_crop()
layer_concatenate()
layer_conv_1d()
layer_conv_1d_transpose()
layer_conv_2d()
layer_conv_2d_transpose()
layer_conv_3d()
layer_conv_3d_transpose()
layer_conv_lstm_1d()
layer_conv_lstm_2d()
layer_conv_lstm_3d()
layer_cropping_1d()
layer_cropping_2d()
layer_cropping_3d()
layer_dense()
layer_depthwise_conv_1d()
layer_depthwise_conv_2d()
layer_discretization()
layer_dot()
layer_dropout()
layer_einsum_dense()
layer_embedding()
layer_feature_space()
layer_flatten()
layer_flax_module_wrapper()
layer_gaussian_dropout()
layer_gaussian_noise()
layer_global_average_pooling_1d()
layer_global_average_pooling_2d()
layer_global_average_pooling_3d()
layer_global_max_pooling_1d()
layer_global_max_pooling_2d()
layer_global_max_pooling_3d()
layer_group_normalization()
layer_group_query_attention()
layer_gru()
layer_hashed_crossing()
layer_hashing()
layer_identity()
layer_integer_lookup()
layer_jax_model_wrapper()
layer_lambda()
layer_layer_normalization()
layer_lstm()
layer_masking()
layer_max_pooling_1d()
layer_max_pooling_2d()
layer_max_pooling_3d()
layer_maximum()
layer_mel_spectrogram()
layer_minimum()
layer_multiply()
layer_normalization()
layer_permute()
layer_random_brightness()
layer_random_contrast()
layer_random_crop()
layer_random_flip()
layer_random_rotation()
layer_random_translation()
layer_random_zoom()
layer_repeat_vector()
layer_rescaling()
layer_reshape()
layer_resizing()
layer_rnn()
layer_separable_conv_1d()
layer_separable_conv_2d()
layer_simple_rnn()
layer_spatial_dropout_1d()
layer_spatial_dropout_2d()
layer_spatial_dropout_3d()
layer_spectral_normalization()
layer_string_lookup()
layer_subtract()
layer_text_vectorization()
layer_tfsm()
layer_time_distributed()
layer_torch_module_wrapper()
layer_unit_normalization()
layer_upsampling_1d()
layer_upsampling_2d()
layer_upsampling_3d()
layer_zero_padding_1d()
layer_zero_padding_2d()
layer_zero_padding_3d()
rnn_cell_gru()
rnn_cell_lstm()
rnn_cell_simple()
rnn_cells_stack()