
Formula:

loss <- c()
for (x in error) {
  if (abs(x) <= delta) {
    # quadratic region
    loss <- c(loss, 0.5 * x^2)
  } else {
    # linear region
    loss <- c(loss, delta * abs(x) - 0.5 * delta^2)
  }
}
loss <- mean(loss)

See: Huber loss.
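The piecewise definition above can also be written vectorized in plain R. The sketch below is illustrative only (it is not the Keras implementation) and shows that the quadratic and linear pieces meet continuously at |x| == delta:

```r
# Vectorized Huber loss in base R (illustrative sketch, not the Keras API)
huber <- function(y_true, y_pred, delta = 1) {
  x <- y_true - y_pred
  quad <- 0.5 * x^2                       # used where |x| <= delta
  lin  <- delta * abs(x) - 0.5 * delta^2  # used where |x| > delta
  mean(ifelse(abs(x) <= delta, quad, lin))
}

# At |x| == delta the two pieces agree:
#   0.5 * delta^2  ==  delta * delta - 0.5 * delta^2
huber(c(0, 0), c(0.5, 2))  # one quadratic-region error, one linear-region error
```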

Usage

loss_huber(
  y_true,
  y_pred,
  delta = 1,
  ...,
  reduction = "sum_over_batch_size",
  name = "huber_loss",
  dtype = NULL
)

Arguments

y_true

tensor of true targets.

y_pred

tensor of predicted targets.

delta

A float, the point at which the Huber loss changes from quadratic to linear. Defaults to 1.0.

...

For forward/backward compatibility.

reduction

Type of reduction to apply to the loss. Options are "sum", "sum_over_batch_size", or NULL. Defaults to "sum_over_batch_size".

name

Optional name for the instance.

dtype

The dtype of the loss's computations. Defaults to NULL, which means using config_floatx(). config_floatx() returns "float32" unless it has been set to a different value (via config_set_floatx()). If a keras$DTypePolicy is provided, its compute_dtype will be used.

Value

Tensor with one scalar loss entry per sample.

Examples

y_true <- rbind(c(0, 1), c(0, 0))
y_pred <- rbind(c(0.6, 0.4), c(0.4, 0.6))
loss <- loss_huber(y_true, y_pred)
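The per-sample values can be checked against the formula directly in base R (no Keras needed). With the default delta = 1, every error in this example falls in the quadratic region, so each entry contributes 0.5 * error^2, averaged over the last axis:

```r
error <- rbind(c(0, 1), c(0, 0)) - rbind(c(0.6, 0.4), c(0.4, 0.6))
# all |error| <= 1, so each entry contributes 0.5 * error^2
per_sample <- rowMeans(0.5 * error^2)
per_sample  # one scalar loss entry per sample
```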