bidirectional() is an alias for layer_bidirectional().
See ?layer_bidirectional() for the full documentation.
Usage
bidirectional(
  object,
  layer,
  merge_mode = "concat",
  weights = NULL,
  backward_layer = NULL,
  ...
)
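A minimal sketch of the default call pattern, composing a bidirectional LSTM into a sequential model. The input shape, unit count, and final dense head are illustrative assumptions, not part of this page.

library(keras3)

# Bidirectional LSTM over sequences of 10 timesteps with 8 features each
model <- keras_model_sequential(input_shape = c(10, 8)) |>
  bidirectional(layer_lstm(units = 16)) |>
  layer_dense(units = 1, activation = "sigmoid")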
Arguments
- object
Object to compose the layer with. A tensor, array, or sequential model.
- layer
RNN instance, such as layer_lstm() or layer_gru(). It could also be a Layer() instance that meets the following criteria:
  - Be a sequence-processing layer (accepts 3D+ inputs).
  - Have go_backwards, return_sequences and return_state attributes (with the same semantics as for the RNN class).
  - Have an input_spec attribute.
  - Implement serialization via get_config() and from_config().
Note that the recommended way to create new RNN layers is to write a custom RNN cell and use it with layer_rnn(), instead of subclassing with Layer() directly. When return_sequences is TRUE, the output of the masked timestep will be zero regardless of the layer's original zero_output_for_mask value.
- merge_mode
Mode by which outputs of the forward and backward RNNs will be combined. One of {"sum", "mul", "concat", "ave", NULL}. If NULL, the outputs will not be combined; they will be returned as a list. Defaults to "concat".
- weights
see description
- backward_layer
Optional
RNN
, orLayer()
instance to be used to handle backwards input processing. Ifbackward_layer
is not provided, the layer instance passed as thelayer
argument will be used to generate the backward layer automatically. Note that the providedbackward_layer
layer should have properties matching those of thelayer
argument, in particular it should have the same values forstateful
,return_states
,return_sequences
, etc. In addition,backward_layer
andlayer
should have differentgo_backwards
argument values. AValueError
will be raised if these requirements are not met.- ...
For forward/backward compatability.
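A sketch of backward_layer and merge_mode used together: the backward layer below mirrors the forward layer except for go_backwards, and merge_mode = NULL keeps the forward and backward output sequences separate. Shapes and unit counts are assumptions for illustration only.

library(keras3)

inputs <- keras_input(shape = c(10, 8))

# Forward and backward layers must agree on return_sequences, stateful, etc.,
# but differ in go_backwards.
forward_lstm  <- layer_lstm(units = 16, return_sequences = TRUE)
backward_lstm <- layer_lstm(units = 16, return_sequences = TRUE,
                            go_backwards = TRUE)

# merge_mode = NULL returns the forward and backward outputs as a list
# instead of concatenating them.
outputs <- inputs |>
  bidirectional(layer = forward_lstm,
                backward_layer = backward_lstm,
                merge_mode = NULL)

model <- keras_model(inputs, outputs)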
Value
The return value depends on the value provided for the first argument. If object is:

- a keras_model_sequential(), then the layer is added to the sequential model (which is modified in place). To enable piping, the sequential model is also returned, invisibly.
- a keras_input(), then the output tensor from calling layer(input) is returned.
- NULL or missing, then a Layer instance is returned.
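To make the dispatch concrete, a short sketch of the second and third cases (the sequential-model case appears after Usage above); shapes and unit counts are assumptions for illustration only.

library(keras3)

# With a keras_input(): returns the output tensor from calling the layer.
inputs  <- keras_input(shape = c(10, 8))
outputs <- bidirectional(inputs, layer_gru(units = 4))

# With object missing: returns a Layer instance that can be applied later.
bidir    <- bidirectional(layer = layer_gru(units = 4))
outputs2 <- bidir(inputs)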