Computes how often integer targets are in the top K predictions.
Source: R/metrics.R
By default, the arguments expected by update_state() are:
- y_true: a tensor of shape (batch_size) representing indices of true categories.
- y_pred: a tensor of shape (batch_size, num_categories) containing the scores for each sample for all possible categories.
With from_sorted_ids=TRUE, the arguments expected by update_state() are:
- y_true: a tensor of shape (batch_size) representing indices or IDs of true categories.
- y_pred: a tensor of shape (batch_size, N) containing the indices or IDs of the top N categories, sorted in order from highest score to lowest score. N must be greater than or equal to k.
The from_sorted_ids=TRUE option can be more efficient when the set of
categories is very large and the model has an optimized way to retrieve the
top ones either without scoring or without maintaining the scores for all
the possible categories.
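As a plain-R illustration of the from_sorted_ids=TRUE semantics (a hypothetical helper, not the library implementation): each row of y_pred holds the top N category IDs, best first, and a sample counts as correct when its true ID appears among the first k of them.
# Hypothetical base-R sketch of the from_sorted_ids = TRUE semantics.
# y_pred holds the top-N category IDs per row, sorted best-first.
sorted_ids_topk_acc <- function(y_true, y_pred, k) {
  hits <- vapply(seq_along(y_true),
                 function(i) y_true[i] %in% y_pred[i, seq_len(k)],
                 logical(1))
  mean(hits)
}
# True ID 2 is not row 1's top-1 ID, but true ID 1 is row 2's -> 0.5,
# matching the standalone example below.
sorted_ids_topk_acc(c(2, 1), rbind(c(1, 0, 3),
                                   c(1, 2, 3)), k = 1)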
Usage
metric_sparse_top_k_categorical_accuracy(
y_true,
y_pred,
k = 5L,
...,
name = "sparse_top_k_categorical_accuracy",
dtype = NULL,
from_sorted_ids = FALSE
)
Arguments
- y_true
Tensor of true targets.
- y_pred
Tensor of predicted targets.
- k
(Optional) Number of top elements to look at for computing accuracy. Defaults to 5.
- ...
For forward/backward compatibility.
- name
(Optional) string name of the metric instance.
- dtype
(Optional) data type of the metric result.
- from_sorted_ids
(Optional) When
FALSE, the default, the tensor passed in y_pred contains the unsorted scores of all possible categories. When TRUE, y_pred contains the indices or IDs for the top categories.
Value
If y_true and y_pred are missing, a Metric instance is returned. The Metric instance can be passed directly to compile(metrics = ), or used as a standalone object. See ?Metric for example usage. If y_true and y_pred are provided, then a tensor with the computed value is returned.
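For instance, calling the metric directly with y_true and y_pred evaluates it immediately (a minimal sketch; the data below is illustrative):
# Direct (functional) call: returns a tensor, not a Metric instance.
# With k = 2, sample 1's true index 2 is among its two highest scores
# while sample 2's is not, so the computed value is 0.5.
metric_sparse_top_k_categorical_accuracy(
  y_true = op_array(c(2, 2)),
  y_pred = op_array(rbind(c(0.1, 0.9, 0.8),
                          c(0.05, 0.95, 0.0)), dtype = "float32"),
  k = 2L
)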
Examples
Standalone usage:
m <- metric_sparse_top_k_categorical_accuracy(k = 1L)
m$update_state(
rbind(2, 1),
op_array(rbind(c(0.1, 0.9, 0.8), c(0.05, 0.95, 0)), dtype = "float32")
)
m$result()
m$reset_state()
m$update_state(
rbind(2, 1),
op_array(rbind(c(0.1, 0.9, 0.8), c(0.05, 0.95, 0)), dtype = "float32"),
sample_weight = c(0.7, 0.3)
)
m$result()
m <- metric_sparse_top_k_categorical_accuracy(k = 1, from_sorted_ids = TRUE)
m$update_state(array(c(2, 1)),
               rbind(c(1, 0, 3),
                     c(1, 2, 3)))
m$result()
Usage with compile() API:
model %>% compile(
  optimizer = 'sgd',
  loss = 'sparse_categorical_crossentropy',
  metrics = list(metric_sparse_top_k_categorical_accuracy())
)
See also
Other accuracy metrics: metric_binary_accuracy(), metric_categorical_accuracy(), metric_sparse_categorical_accuracy(), metric_top_k_categorical_accuracy()
Other metrics: Metric(), custom_metric(), metric_auc(), metric_binary_accuracy(), metric_binary_crossentropy(), metric_binary_focal_crossentropy(), metric_binary_iou(), metric_categorical_accuracy(), metric_categorical_crossentropy(), metric_categorical_focal_crossentropy(), metric_categorical_hinge(), metric_concordance_correlation(), metric_cosine_similarity(), metric_f1_score(), metric_false_negatives(), metric_false_positives(), metric_fbeta_score(), metric_hinge(), metric_huber(), metric_iou(), metric_kl_divergence(), metric_log_cosh(), metric_log_cosh_error(), metric_mean(), metric_mean_absolute_error(), metric_mean_absolute_percentage_error(), metric_mean_iou(), metric_mean_squared_error(), metric_mean_squared_logarithmic_error(), metric_mean_wrapper(), metric_one_hot_iou(), metric_one_hot_mean_iou(), metric_pearson_correlation(), metric_poisson(), metric_precision(), metric_precision_at_recall(), metric_r2_score(), metric_recall(), metric_recall_at_precision(), metric_root_mean_squared_error(), metric_sensitivity_at_specificity(), metric_sparse_categorical_accuracy(), metric_sparse_categorical_crossentropy(), metric_specificity_at_sensitivity(), metric_squared_hinge(), metric_sum(), metric_top_k_categorical_accuracy(), metric_true_negatives(), metric_true_positives()