If sample_weight is given, calculates the sum of the weights of false negatives. This metric creates one local variable, accumulator, that is used to keep track of the number of false negatives. If sample_weight is NULL, weights default to 1. Use sample_weight of 0 to mask values.
Arguments
- ...
For forward/backward compatibility.
- thresholds
(Optional) Defaults to 0.5. A float value, or a Python list of float threshold values in [0, 1]. A threshold is compared with prediction values to determine the truth value of predictions (i.e., above the threshold is TRUE, below is FALSE). If used with a loss function that sets from_logits = TRUE (i.e., no sigmoid applied to predictions), thresholds should be set to 0. One metric value is generated for each threshold value (see the sketch after this list).
- name
(Optional) string name of the metric instance.
- dtype
(Optional) data type of the metric result.
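A minimal sketch of the multi-threshold behaviour on a toy input; the commented counts are hand-computed under the documented rule that a prediction above each threshold counts as positive:

m <- metric_false_negatives(thresholds = c(0.25, 0.5, 0.75))
m$update_state(c(0, 1, 1, 1), c(0.1, 0.9, 0.3, 0.6))
m$result()
# one false-negative count per threshold: 0, 1, 2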
Value
A Metric instance is returned. The Metric instance can be passed directly to compile(metrics = ), or used as a standalone object. See ?Metric for example usage.
Usage
Standalone usage:
m <- metric_false_negatives()
m$update_state(c(0, 1, 1, 1), c(0, 1, 0, 0))
m$result()
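# 2.0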
m$reset_state()
m$update_state(c(0, 1, 1, 1), c(0, 1, 0, 0), sample_weight=c(0, 0, 1, 0))
m$result()
# 1.0
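Usage with the compile() API, shown as a minimal sketch that assumes a binary-classification model object named model has already been defined:

model |> compile(
  optimizer = "adam",
  loss = "binary_crossentropy",
  metrics = list(metric_false_negatives())
)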
See also
Other confusion metrics: metric_auc()
metric_false_positives()
metric_precision()
metric_precision_at_recall()
metric_recall()
metric_recall_at_precision()
metric_sensitivity_at_specificity()
metric_specificity_at_sensitivity()
metric_true_negatives()
metric_true_positives()
Other metrics: Metric()
custom_metric()
metric_auc()
metric_binary_accuracy()
metric_binary_crossentropy()
metric_binary_focal_crossentropy()
metric_binary_iou()
metric_categorical_accuracy()
metric_categorical_crossentropy()
metric_categorical_focal_crossentropy()
metric_categorical_hinge()
metric_cosine_similarity()
metric_f1_score()
metric_false_positives()
metric_fbeta_score()
metric_hinge()
metric_huber()
metric_iou()
metric_kl_divergence()
metric_log_cosh()
metric_log_cosh_error()
metric_mean()
metric_mean_absolute_error()
metric_mean_absolute_percentage_error()
metric_mean_iou()
metric_mean_squared_error()
metric_mean_squared_logarithmic_error()
metric_mean_wrapper()
metric_one_hot_iou()
metric_one_hot_mean_iou()
metric_poisson()
metric_precision()
metric_precision_at_recall()
metric_r2_score()
metric_recall()
metric_recall_at_precision()
metric_root_mean_squared_error()
metric_sensitivity_at_specificity()
metric_sparse_categorical_accuracy()
metric_sparse_categorical_crossentropy()
metric_sparse_top_k_categorical_accuracy()
metric_specificity_at_sensitivity()
metric_squared_hinge()
metric_sum()
metric_top_k_categorical_accuracy()
metric_true_negatives()
metric_true_positives()