
Use this crossentropy loss function when there are two or more label classes. Labels are expected to be provided in a one-hot representation. If you want to provide labels as integers, use loss_sparse_categorical_crossentropy() instead. There should be num_classes floating point values per feature, i.e., the shape of both y_pred and y_true is [batch_size, num_classes].
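
For instance, integer class labels can be converted to the required one-hot format before calling this loss. A minimal sketch with illustrative label values, assuming zero-based class indices:

library(keras3)

# Illustrative zero-based integer labels for a 3-class problem
labels <- c(1L, 2L)

# Convert to one-hot targets of shape (2, 3)
y_true <- to_categorical(labels, 3)
# y_true is rbind(c(0, 1, 0), c(0, 0, 1))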

Usage

loss_categorical_crossentropy(
  y_true,
  y_pred,
  from_logits = FALSE,
  label_smoothing = 0,
  axis = -1L,
  ...,
  reduction = "sum_over_batch_size",
  name = "categorical_crossentropy",
  dtype = NULL
)

Arguments

y_true

Tensor of one-hot true targets.

y_pred

Tensor of predicted targets.

from_logits

Whether y_pred is expected to be a logits tensor. By default, we assume that y_pred encodes a probability distribution.

label_smoothing

Float in [0, 1]. When > 0, label values are smoothed, meaning the confidence on label values is relaxed. For example, if 0.1, use 0.1 / num_classes for non-target labels and 0.9 + 0.1 / num_classes for target labels.

axis

The axis along which to compute crossentropy (the features axis). Defaults to -1.

...

For forward/backward compatibility.

reduction

Type of reduction to apply to the loss. In almost all cases this should be "sum_over_batch_size". Supported options are "sum", "sum_over_batch_size", or NULL (in which case no reduction is applied and the per-sample losses are returned).

name

Optional name for the loss instance.

dtype

The dtype of the loss's computations. Defaults to NULL, which means using config_floatx(). config_floatx() returns "float32" unless set to a different value (via config_set_floatx()). If a keras$DTypePolicy is provided, then its compute_dtype will be utilized.

Value

Categorical crossentropy loss value.

Examples

y_true <- rbind(c(0, 1, 0), c(0, 0, 1))
y_pred <- rbind(c(0.05, 0.95, 0), c(0.1, 0.8, 0.1))
loss <- loss_categorical_crossentropy(y_true, y_pred)
loss

## tf.Tensor([0.05129329 2.30258509], shape=(2), dtype=float64)
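
If the model outputs raw, unnormalized scores (logits) rather than probabilities, they can be passed directly with from_logits = TRUE. A minimal sketch with illustrative logit values (no recorded output shown):

y_true <- rbind(c(0, 1, 0), c(0, 0, 1))
logits <- rbind(c(1.0, 3.0, 0.5), c(0.2, 0.8, 2.5))  # raw scores, not probabilities
# softmax is applied internally before the crossentropy is computed
loss_categorical_crossentropy(y_true, logits, from_logits = TRUE)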

Standalone usage:

y_true <- rbind(c(0, 1, 0), c(0, 0, 1))
y_pred <- rbind(c(0.05, 0.95, 0), c(0.1, 0.8, 0.1))
# Using the default 'sum_over_batch_size' reduction type.
cce <- loss_categorical_crossentropy()
cce(y_true, y_pred)

## tf.Tensor(1.1769392, shape=(), dtype=float32)

# Calling with 'sample_weight'.
cce(y_true, y_pred, sample_weight = op_array(c(0.3, 0.7)))

## tf.Tensor(0.8135988, shape=(), dtype=float32)

# Using 'sum' reduction type.
cce <- loss_categorical_crossentropy(reduction = "sum")
cce(y_true, y_pred)

## tf.Tensor(2.3538785, shape=(), dtype=float32)

# Using 'none' reduction type.
cce <- loss_categorical_crossentropy(reduction = NULL)
cce(y_true, y_pred)

## tf.Tensor([0.05129331 2.3025851 ], shape=(2), dtype=float32)
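
Label smoothing can be applied with the same targets. A brief sketch reusing y_true and y_pred from above (illustrative, no recorded output shown):

# With label_smoothing = 0.1 and 3 classes, each one-hot row such as c(0, 1, 0)
# is treated as roughly c(0.033, 0.933, 0.033), per the formula described above.
cce_smooth <- loss_categorical_crossentropy(label_smoothing = 0.1)
cce_smooth(y_true, y_pred)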

Usage with the compile() API:

model %>% compile(
  optimizer = 'sgd',
  loss = loss_categorical_crossentropy()
)
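
If the default arguments are sufficient, the loss can also be specified by its built-in string name, which is equivalent to the call above:

model %>% compile(
  optimizer = 'sgd',
  loss = "categorical_crossentropy"
)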