# An optimizer that dynamically scales the loss to prevent underflow.

Source: `R/optimizers.R`

`optimizer_loss_scale.Rd`

Loss scaling is a technique to prevent numeric underflow in intermediate gradients when float16 is used. To prevent underflow, the loss is multiplied (or "scaled") by a certain factor called the "loss scale", which causes intermediate gradients to be scaled by the loss scale as well. The final gradients are divided (or "unscaled") by the loss scale to bring them back to their original value.
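As a rough numeric illustration of that scale/unscale arithmetic (a toy quadratic loss with a hand-written gradient, not the optimizer's internals):

```
# Toy illustration of loss scaling; the values and the hand-computed
# gradient below are for exposition only.
loss_scale <- 32768
w <- 3; target <- 1
loss <- (w - target)^2                        # unscaled loss
scaled_loss <- loss * loss_scale              # the "scaled" loss that gets differentiated
scaled_grad <- 2 * (w - target) * loss_scale  # gradient of the scaled loss w.r.t. w
grad <- scaled_grad / loss_scale              # divided ("unscaled") back to its original value
grad                                          # 4, the gradient of the unscaled loss
```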

`LossScaleOptimizer` wraps another optimizer and applies dynamic loss scaling to it. The loss scale is dynamically updated over time as follows:

- On any train step, if a nonfinite gradient is encountered, the loss scale is halved, and the train step is skipped.

- If `dynamic_growth_steps` steps have occurred since the last time the loss scale was updated, and no nonfinite gradients have occurred, the loss scale is doubled.
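Sketched in code, the update rule might look like this (a hypothetical helper for illustration only, not the package implementation):

```
# Hypothetical helper sketching the dynamic update rule described above.
update_loss_scale <- function(scale, finite_steps, grads_finite,
                              dynamic_growth_steps = 2000L) {
  if (!grads_finite) {
    # Nonfinite gradient: halve the scale and skip this train step.
    list(scale = scale / 2, finite_steps = 0L, apply_step = FALSE)
  } else if (finite_steps + 1L >= dynamic_growth_steps) {
    # Enough consecutive finite-gradient steps: double the scale.
    list(scale = scale * 2, finite_steps = 0L, apply_step = TRUE)
  } else {
    # Otherwise, keep the scale and count another finite step.
    list(scale = scale, finite_steps = finite_steps + 1L, apply_step = TRUE)
  }
}
```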

## Usage

```
optimizer_loss_scale(
  inner_optimizer,
  initial_scale = 32768,
  dynamic_growth_steps = 2000L,
  ...,
  name = NULL,
  weight_decay = NULL,
  clipnorm = NULL,
  clipvalue = NULL,
  global_clipnorm = NULL,
  use_ema = NULL,
  ema_momentum = NULL,
  ema_overwrite_frequency = NULL,
  loss_scale_factor = NULL,
  gradient_accumulation_steps = NULL
)
```
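For example, a minimal sketch of wrapping a base optimizer, assuming the keras3 package is attached (the dtype-policy and `compile()` lines are commented out because they depend on your model and setup):

```
library(keras3)

# Loss scaling is typically combined with float16 compute, e.g.:
# config_set_dtype_policy("mixed_float16")

opt <- optimizer_loss_scale(
  inner_optimizer = optimizer_sgd(learning_rate = 0.01),
  initial_scale = 32768,
  dynamic_growth_steps = 2000L
)

# model |> compile(optimizer = opt, loss = "mse")
```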

## Arguments

- inner_optimizer
The keras `Optimizer` instance to wrap.

- initial_scale
Float. The initial loss scale. This scale will be updated during training. It is recommended for this to be a very high number, because a loss scale that is too high gets lowered far more quickly than a loss scale that is too low gets raised.

- dynamic_growth_steps
Int. How often to update the scale upwards. After every `dynamic_growth_steps` steps with finite gradients, the loss scale is doubled.

- ...
For forward/backward compatibility.

- name
String. The name to use for momentum accumulator weights created by the optimizer.

- weight_decay
Float. If set, weight decay is applied.

- clipnorm
Float. If set, the gradient of each weight is individually clipped so that its norm is no higher than this value.

- clipvalue
Float. If set, the gradient of each weight is clipped to be no higher than this value.

- global_clipnorm
Float. If set, the gradient of all weights is clipped so that their global norm is no higher than this value.

- use_ema
Boolean, defaults to `FALSE`. If `TRUE`, exponential moving average (EMA) is applied. EMA consists of computing an exponential moving average of the weights of the model (as the weight values change after each training batch), and periodically overwriting the weights with their moving average.

- ema_momentum
Float, defaults to 0.99. Only used if `use_ema = TRUE`. This is the momentum to use when computing the EMA of the model's weights: `new_average = ema_momentum * old_average + (1 - ema_momentum) * current_variable_value` (a numeric sketch of this update appears after this list).

- ema_overwrite_frequency
Int or `NULL`, defaults to `NULL`. Only used if `use_ema = TRUE`. Every `ema_overwrite_frequency` steps of iterations, the model variables are overwritten with their moving average. If `NULL`, the optimizer does not overwrite model variables in the middle of training, and you need to explicitly overwrite the variables at the end of training by calling `optimizer$finalize_variable_values()` (which updates the model variables in place). When using the built-in `fit()` training loop, this happens automatically after the last epoch, and you don't need to do anything.

- loss_scale_factor
Float or `NULL`. If a float, the scale factor will be multiplied by the loss before computing gradients, and the inverse of the scale factor will be multiplied by the gradients before updating variables. Useful for preventing underflow during mixed precision training. Alternatively, `optimizer_loss_scale()` will automatically set a loss scale factor.

- gradient_accumulation_steps
Int or `NULL`. If an int, model and optimizer variables will not be updated at every step; instead they will be updated every `gradient_accumulation_steps` steps, using the average value of the gradients since the last update. This is known as "gradient accumulation". This can be useful when your batch size is very small, in order to reduce gradient noise at each update step.
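As a numeric sketch of the EMA update referenced under `ema_momentum` (illustration only; the values are made up):

```
# Toy sketch of the EMA bookkeeping applied when use_ema = TRUE.
ema_momentum <- 0.99
current_variable_value <- 0.5   # hypothetical weight value after a training batch
old_average <- 0.8              # hypothetical running average
new_average <- ema_momentum * old_average +
  (1 - ema_momentum) * current_variable_value
new_average                     # 0.797
```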

## See also

Other optimizers:
`optimizer_adadelta()`,
`optimizer_adafactor()`,
`optimizer_adagrad()`,
`optimizer_adam()`,
`optimizer_adam_w()`,
`optimizer_adamax()`,
`optimizer_ftrl()`,
`optimizer_lion()`,
`optimizer_nadam()`,
`optimizer_rmsprop()`,
`optimizer_sgd()`