A preprocessing layer which randomly adjusts brightness during training.
Source: R/layers-preprocessing.R
This layer will randomly increase/reduce the brightness for the input RGB images. At inference time, the output will be identical to the input. Call the layer with training = TRUE to adjust the brightness of the input.
Note: This layer is safe to use inside a tf.data pipeline (independently of which backend you're using).
Usage
layer_random_brightness(
  object,
  factor,
  value_range = list(0L, 255L),
  seed = NULL,
  ...
)
Arguments
- object
Object to compose the layer with. A tensor, array, or sequential model.
- factor
Float or a list of 2 floats between -1.0 and 1.0. The factor is used to determine the lower and upper bounds of the brightness adjustment. A float value will be chosen randomly between the limits. When -1.0 is chosen, the output image will be black, and when 1.0 is chosen, the image will be fully white. When only one float is provided, e.g. 0.2, then -0.2 will be used for the lower bound and 0.2 will be used for the upper bound. A sketch illustrating both forms follows the argument list.
- value_range
Optional list of 2 floats for the lower and upper limits of the values of the input data. To make no change, use c(0.0, 1.0), e.g. if the image input has been scaled before this layer. Defaults to c(0.0, 255.0). The brightness adjustment will be scaled to this range, and the output values will be clipped to this range.
- seed
Optional integer, used to fix the RNG behavior (for reproducible results).
- ...
For forward/backward compatibility.
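A minimal sketch of the two accepted forms of factor (assuming keras3 is attached; the specific values are illustrative only):

# Single float: brightness shift drawn uniformly from [-0.2, 0.2]
symmetric <- layer_random_brightness(factor = 0.2)
# Two floats: brightness shift drawn uniformly from [-0.1, 0.3]
asymmetric <- layer_random_brightness(factor = c(-0.1, 0.3))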
Value
The return value depends on the value provided for the first argument. If object is:
- a keras_model_sequential(), then the layer is added to the sequential model (which is modified in place). To enable piping, the sequential model is also returned, invisibly.
- a keras_input(), then the output tensor from calling layer(input) is returned.
- NULL or missing, then a Layer instance is returned.
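A rough sketch of the three composition modes (assuming keras3 is attached; shapes are illustrative only):

library(keras3)
# 1. Composed with a sequential model: added in place, model returned invisibly
model <- keras_model_sequential(input_shape = c(32, 32, 3)) |>
  layer_random_brightness(factor = 0.2)
# 2. Called on a keras_input(): the output tensor is returned
inputs <- keras_input(shape = c(32, 32, 3))
outputs <- inputs |> layer_random_brightness(factor = 0.2)
# 3. With no object: a standalone Layer instance is returned
brightness_layer <- layer_random_brightness(factor = 0.2)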
Inputs
3D (HWC) or 4D (NHWC) tensor, with float or int dtype. Input pixel values can be of any range (e.g. [0., 1.) or [0, 255]).
Output
3D (HWC) or 4D (NHWC) tensor with brightness adjusted based on the factor. By default, the layer will output floats. The output value will be clipped to the range [0, 255], the valid range of RGB colors, and rescaled based on the value_range if needed.
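For input images already scaled to [0, 1], pass value_range = c(0, 1) so the adjustment and clipping happen in that range; a minimal sketch (assuming keras3 is attached, values illustrative):

layer_01 <- layer_random_brightness(factor = 0.3, value_range = c(0, 1), seed = 1)
img <- array(runif(2 * 2 * 3), dim = c(2, 2, 3))  # HWC image with values in [0, 1]
out <- layer_01(img, training = TRUE)             # adjusted, then clipped to [0, 1]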
Example
random_bright <- layer_random_brightness(factor = 0.2, seed = 1)
# An image with shape [2, 2, 3]
image <- array(1:12, dim = c(2, 2, 3))
# Assume we randomly select the factor to be 0.1, then it will apply
# 0.1 * 255 to all the channels
output <- random_bright(image, training = TRUE)
output
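Relating to the note above, a hedged sketch of use inside a tf.data pipeline (assuming the tfdatasets package is installed; dataset contents and shapes are illustrative only):

library(tfdatasets)
augment <- layer_random_brightness(factor = 0.2, seed = 1)
images <- array(runif(8 * 2 * 2 * 3), dim = c(8, 2, 2, 3))  # NHWC batch of 8
ds <- tensor_slices_dataset(images) |>
  dataset_batch(4) |>
  dataset_map(function(x) augment(x, training = TRUE))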
See also
Other image augmentation layers: layer_random_contrast()
layer_random_crop()
layer_random_flip()
layer_random_rotation()
layer_random_translation()
layer_random_zoom()
Other preprocessing layers: layer_category_encoding()
layer_center_crop()
layer_discretization()
layer_feature_space()
layer_hashed_crossing()
layer_hashing()
layer_integer_lookup()
layer_mel_spectrogram()
layer_normalization()
layer_random_contrast()
layer_random_crop()
layer_random_flip()
layer_random_rotation()
layer_random_translation()
layer_random_zoom()
layer_rescaling()
layer_resizing()
layer_string_lookup()
layer_text_vectorization()
Other layers: Layer()
layer_activation()
layer_activation_elu()
layer_activation_leaky_relu()
layer_activation_parametric_relu()
layer_activation_relu()
layer_activation_softmax()
layer_activity_regularization()
layer_add()
layer_additive_attention()
layer_alpha_dropout()
layer_attention()
layer_average()
layer_average_pooling_1d()
layer_average_pooling_2d()
layer_average_pooling_3d()
layer_batch_normalization()
layer_bidirectional()
layer_category_encoding()
layer_center_crop()
layer_concatenate()
layer_conv_1d()
layer_conv_1d_transpose()
layer_conv_2d()
layer_conv_2d_transpose()
layer_conv_3d()
layer_conv_3d_transpose()
layer_conv_lstm_1d()
layer_conv_lstm_2d()
layer_conv_lstm_3d()
layer_cropping_1d()
layer_cropping_2d()
layer_cropping_3d()
layer_dense()
layer_depthwise_conv_1d()
layer_depthwise_conv_2d()
layer_discretization()
layer_dot()
layer_dropout()
layer_einsum_dense()
layer_embedding()
layer_feature_space()
layer_flatten()
layer_flax_module_wrapper()
layer_gaussian_dropout()
layer_gaussian_noise()
layer_global_average_pooling_1d()
layer_global_average_pooling_2d()
layer_global_average_pooling_3d()
layer_global_max_pooling_1d()
layer_global_max_pooling_2d()
layer_global_max_pooling_3d()
layer_group_normalization()
layer_group_query_attention()
layer_gru()
layer_hashed_crossing()
layer_hashing()
layer_identity()
layer_integer_lookup()
layer_jax_model_wrapper()
layer_lambda()
layer_layer_normalization()
layer_lstm()
layer_masking()
layer_max_pooling_1d()
layer_max_pooling_2d()
layer_max_pooling_3d()
layer_maximum()
layer_mel_spectrogram()
layer_minimum()
layer_multi_head_attention()
layer_multiply()
layer_normalization()
layer_permute()
layer_random_contrast()
layer_random_crop()
layer_random_flip()
layer_random_rotation()
layer_random_translation()
layer_random_zoom()
layer_repeat_vector()
layer_rescaling()
layer_reshape()
layer_resizing()
layer_rnn()
layer_separable_conv_1d()
layer_separable_conv_2d()
layer_simple_rnn()
layer_spatial_dropout_1d()
layer_spatial_dropout_2d()
layer_spatial_dropout_3d()
layer_spectral_normalization()
layer_string_lookup()
layer_subtract()
layer_text_vectorization()
layer_tfsm()
layer_time_distributed()
layer_torch_module_wrapper()
layer_unit_normalization()
layer_upsampling_1d()
layer_upsampling_2d()
layer_upsampling_3d()
layer_zero_padding_1d()
layer_zero_padding_2d()
layer_zero_padding_3d()
rnn_cell_gru()
rnn_cell_lstm()
rnn_cell_simple()
rnn_cells_stack()