
This layer distorts input images by applying elastic deformations, simulating a physically plausible warping. The magnitude of the distortion is controlled by the scale parameter, while the factor parameter controls the probability that the transformation is applied to a given image.

Usage

layer_random_elastic_transform(
  object,
  factor = 1,
  scale = 1,
  interpolation = "bilinear",
  fill_mode = "reflect",
  fill_value = 0,
  value_range = list(0L, 255L),
  seed = NULL,
  data_format = NULL,
  ...
)
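
As a minimal sketch (assuming the keras3 package with a working Keras backend), the layer can be created standalone and applied directly to a batch of images; passing training = TRUE keeps the random transformation active outside of fit():

library(keras3)

# batch of two random 32x32 RGB images in the [0, 255] range
images <- array(runif(2 * 32 * 32 * 3) * 255, dim = c(2, 32, 32, 3))

layer <- layer_random_elastic_transform(
  factor = 1,                 # always apply the transformation
  scale = 0.5,                # moderate distortion magnitude
  value_range = list(0, 255)
)

augmented <- layer(images, training = TRUE)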

Arguments

object

Object to compose the layer with. A tensor, array, or sequential model.

factor

A single float or a tuple of two floats. factor controls the probability of applying the transformation.

  • factor = 0.0 ensures no transformation is applied.

  • factor = 1.0 means the transformation is always applied.

  • If a tuple (min, max) is provided, a probability value is sampled between min and max for each image.

  • If a single float is provided, a probability is sampled between 0.0 and the given float. Default is 1.0.
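
For example (a sketch; the values are illustrative), a range can be given so that each image gets its own application probability:

layer <- layer_random_elastic_transform(factor = c(0.25, 0.75))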

scale

A float or a tuple of two floats defining the magnitude of the distortion applied.

  • If a tuple (min, max) is provided, a random scale value is sampled within this range.

  • If a single float is provided, a random scale value is sampled between 0.0 and the given float. Default is 1.0.
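
A short sketch (values are illustrative) that samples the distortion magnitude between 0.5 and 1.5 while always applying the transformation:

layer <- layer_random_elastic_transform(factor = 1, scale = c(0.5, 1.5))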

interpolation

Interpolation mode. Supported values: "nearest", "bilinear". Defaults to "bilinear".

fill_mode

Points outside the boundaries of the input are filled according to the given mode. Available methods are "constant", "nearest", "wrap" and "reflect". Defaults to "reflect".

  • "reflect": (d c b a | a b c d | d c b a) The input is extended by reflecting about the edge of the last pixel.

  • "constant": (k k k k | a b c d | k k k k) The input is extended by filling all values beyond the edge with the same constant value k specified by fill_value.

  • "wrap": (a b c d | a b c d | a b c d) The input is extended by wrapping around to the opposite edge.

  • "nearest": (a a a a | a b c d | d d d d) The input is extended by the nearest pixel. When using the torch backend, "reflect" is redirected to "mirror" because torch does not support "reflect". The torch backend also does not support "wrap".

fill_value

A float representing the value used to fill points outside the boundaries when fill_mode = "constant". Defaults to 0.
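
For instance (a sketch), regions exposed by the distortion can be filled with constant black:

layer <- layer_random_elastic_transform(fill_mode = "constant", fill_value = 0)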

value_range

The range of values the incoming images will have. Represented as a two-number tuple written [low, high]. This is typically either [0, 1] or [0, 255] depending on how your preprocessing pipeline is set up.
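
For images already rescaled to [0, 1] (for example by layer_rescaling(scale = 1/255) earlier in the pipeline), a sketch would be:

layer <- layer_random_elastic_transform(value_range = list(0, 1))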

seed

Integer. Used to create a random seed.

data_format

string, either "channels_last" or "channels_first". The ordering of the dimensions in the inputs. "channels_last" corresponds to inputs with shape (batch, height, width, channels) while "channels_first" corresponds to inputs with shape (batch, channels, height, width). It defaults to the image_data_format value found in your Keras config file at ~/.keras/keras.json. If you never set it, then it will be "channels_last".
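
For channels-first inputs of shape (batch, channels, height, width), a sketch:

layer <- layer_random_elastic_transform(data_format = "channels_first")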

...

For forward/backward compatibility.

See also

Other image preprocessing layers:
layer_aug_mix()
layer_auto_contrast()
layer_center_crop()
layer_cut_mix()
layer_equalization()
layer_max_num_bounding_boxes()
layer_mix_up()
layer_rand_augment()
layer_random_color_degeneration()
layer_random_color_jitter()
layer_random_erasing()
layer_random_gaussian_blur()
layer_random_grayscale()
layer_random_hue()
layer_random_invert()
layer_random_perspective()
layer_random_posterization()
layer_random_saturation()
layer_random_sharpness()
layer_random_shear()
layer_rescaling()
layer_resizing()
layer_solarization()

Other preprocessing layers:
layer_aug_mix()
layer_auto_contrast()
layer_category_encoding()
layer_center_crop()
layer_cut_mix()
layer_discretization()
layer_equalization()
layer_feature_space()
layer_hashed_crossing()
layer_hashing()
layer_integer_lookup()
layer_max_num_bounding_boxes()
layer_mel_spectrogram()
layer_mix_up()
layer_normalization()
layer_rand_augment()
layer_random_brightness()
layer_random_color_degeneration()
layer_random_color_jitter()
layer_random_contrast()
layer_random_crop()
layer_random_erasing()
layer_random_flip()
layer_random_gaussian_blur()
layer_random_grayscale()
layer_random_hue()
layer_random_invert()
layer_random_perspective()
layer_random_posterization()
layer_random_rotation()
layer_random_saturation()
layer_random_sharpness()
layer_random_shear()
layer_random_translation()
layer_random_zoom()
layer_rescaling()
layer_resizing()
layer_solarization()
layer_stft_spectrogram()
layer_string_lookup()
layer_text_vectorization()

Other layers:
Layer()
layer_activation()
layer_activation_elu()
layer_activation_leaky_relu()
layer_activation_parametric_relu()
layer_activation_relu()
layer_activation_softmax()
layer_activity_regularization()
layer_add()
layer_additive_attention()
layer_alpha_dropout()
layer_attention()
layer_aug_mix()
layer_auto_contrast()
layer_average()
layer_average_pooling_1d()
layer_average_pooling_2d()
layer_average_pooling_3d()
layer_batch_normalization()
layer_bidirectional()
layer_category_encoding()
layer_center_crop()
layer_concatenate()
layer_conv_1d()
layer_conv_1d_transpose()
layer_conv_2d()
layer_conv_2d_transpose()
layer_conv_3d()
layer_conv_3d_transpose()
layer_conv_lstm_1d()
layer_conv_lstm_2d()
layer_conv_lstm_3d()
layer_cropping_1d()
layer_cropping_2d()
layer_cropping_3d()
layer_cut_mix()
layer_dense()
layer_depthwise_conv_1d()
layer_depthwise_conv_2d()
layer_discretization()
layer_dot()
layer_dropout()
layer_einsum_dense()
layer_embedding()
layer_equalization()
layer_feature_space()
layer_flatten()
layer_flax_module_wrapper()
layer_gaussian_dropout()
layer_gaussian_noise()
layer_global_average_pooling_1d()
layer_global_average_pooling_2d()
layer_global_average_pooling_3d()
layer_global_max_pooling_1d()
layer_global_max_pooling_2d()
layer_global_max_pooling_3d()
layer_group_normalization()
layer_group_query_attention()
layer_gru()
layer_hashed_crossing()
layer_hashing()
layer_identity()
layer_integer_lookup()
layer_jax_model_wrapper()
layer_lambda()
layer_layer_normalization()
layer_lstm()
layer_masking()
layer_max_num_bounding_boxes()
layer_max_pooling_1d()
layer_max_pooling_2d()
layer_max_pooling_3d()
layer_maximum()
layer_mel_spectrogram()
layer_minimum()
layer_mix_up()
layer_multi_head_attention()
layer_multiply()
layer_normalization()
layer_permute()
layer_rand_augment()
layer_random_brightness()
layer_random_color_degeneration()
layer_random_color_jitter()
layer_random_contrast()
layer_random_crop()
layer_random_erasing()
layer_random_flip()
layer_random_gaussian_blur()
layer_random_grayscale()
layer_random_hue()
layer_random_invert()
layer_random_perspective()
layer_random_posterization()
layer_random_rotation()
layer_random_saturation()
layer_random_sharpness()
layer_random_shear()
layer_random_translation()
layer_random_zoom()
layer_repeat_vector()
layer_rescaling()
layer_reshape()
layer_resizing()
layer_rms_normalization()
layer_rnn()
layer_separable_conv_1d()
layer_separable_conv_2d()
layer_simple_rnn()
layer_solarization()
layer_spatial_dropout_1d()
layer_spatial_dropout_2d()
layer_spatial_dropout_3d()
layer_spectral_normalization()
layer_stft_spectrogram()
layer_string_lookup()
layer_subtract()
layer_text_vectorization()
layer_tfsm()
layer_time_distributed()
layer_torch_module_wrapper()
layer_unit_normalization()
layer_upsampling_1d()
layer_upsampling_2d()
layer_upsampling_3d()
layer_zero_padding_1d()
layer_zero_padding_2d()
layer_zero_padding_3d()
rnn_cell_gru()
rnn_cell_lstm()
rnn_cell_simple()
rnn_cells_stack()