
Formula:

loss <- mean(log(cosh(y_pred - y_true)), axis=-1)

Note that log(cosh(x)) is approximately equal to (x ** 2) / 2 for small x and to abs(x) - log(2) for large x. This means that 'logcosh' works mostly like the mean squared error, but will not be so strongly affected by the occasional wildly incorrect prediction.
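The two asymptotic regimes can be checked directly in base R; the values `x_small` and `x_large` below are arbitrary illustrative choices, not part of the Keras API:

```r
# log(cosh(x)) behaves like x^2 / 2 near zero (quadratic, like MSE)
# and like abs(x) - log(2) far from zero (linear, like MAE).
x_small <- 0.01
x_large <- 10

log(cosh(x_small))        # close to x_small^2 / 2
x_small^2 / 2

log(cosh(x_large))        # close to abs(x_large) - log(2)
abs(x_large) - log(2)
```

The linear tail is what makes the loss robust: a single wildly wrong prediction contributes roughly `abs(error)` rather than `error^2`.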

Usage

loss_log_cosh(
  y_true,
  y_pred,
  ...,
  reduction = "sum_over_batch_size",
  name = "log_cosh",
  dtype = NULL
)

Arguments

y_true

Ground truth values with shape = [batch_size, d0, .. dN].

y_pred

The predicted values with shape = [batch_size, d0, .. dN].

...

For forward/backward compatibility.

reduction

Type of reduction to apply to the loss. Options are "sum", "sum_over_batch_size", or NULL. Defaults to "sum_over_batch_size".
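As a sketch of what the reduction options mean, assume a batch of two samples whose per-sample (last-axis-averaged) losses have already been computed; the `per_sample` vector below is a hypothetical illustration, not produced by the Keras API:

```r
# Hypothetical per-sample logcosh losses for a batch of two samples:
per_sample <- c(0.2169, 0)

sum(per_sample)   # reduction = "sum": total over the batch
mean(per_sample)  # reduction = "sum_over_batch_size": sum / batch size (default)
per_sample        # reduction = NULL: the unreduced per-sample values
```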

name

Optional name for the instance.

dtype

The dtype of the loss's computations. Defaults to NULL, which means using config_floatx(). config_floatx() returns "float32" unless it has been set to a different value (via config_set_floatx()). If a keras$DTypePolicy is provided, its compute_dtype will be used.

Value

Logcosh error values with shape = [batch_size, d0, .. dN-1].

Examples

y_true <- rbind(c(0., 1.), c(0., 0.))
y_pred <- rbind(c(1., 1.), c(0., 0.))
loss <- loss_log_cosh(y_true, y_pred)
# 0.108
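The reported value can be re-derived in base R by applying the formula above by hand: average log(cosh(error)) over the last axis, then average over the batch (the default "sum_over_batch_size" reduction):

```r
y_true <- rbind(c(0., 1.), c(0., 0.))
y_pred <- rbind(c(1., 1.), c(0., 0.))

# Mean over the last axis gives one loss value per sample ...
per_sample <- rowMeans(log(cosh(y_pred - y_true)))
# ... and the default reduction averages those over the batch.
mean(per_sample)  # ~0.108: only one element differs, by 1, so
                  # the result is log(cosh(1)) / 4
```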