Formula:

f1_score <- 2 * (precision * recall) / (precision + recall)

This is the harmonic mean of precision and recall. Its output range is [0, 1]. It works for both multi-class and multi-label classification.
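As a quick worked example of the formula (a minimal sketch; the precision and recall values here are made up):

precision <- 0.8
recall <- 0.6
2 * (precision * recall) / (precision + recall)  # 0.6857143, the harmonic mean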
Arguments

- ...
  For forward/backward compatibility.

- average
  Type of averaging to be performed on data. Acceptable values are NULL, "micro", "macro", and "weighted". Defaults to NULL. If NULL, no averaging is performed and result() will return the score for each class. If "micro", compute metrics globally by counting the total true positives, false negatives, and false positives. If "macro", compute metrics for each label and return their unweighted mean; this does not take label imbalance into account. If "weighted", compute metrics for each label and return their average weighted by support (the number of true instances for each label); this alters "macro" to account for label imbalance, and can result in an F-score that is not between precision and recall. (See the sketch after this list.)

- threshold
  Elements of y_pred greater than threshold are converted to 1, and the rest to 0. If threshold is NULL, the argmax of y_pred is converted to 1, and the rest to 0.

- name
  Optional. String name of the metric instance.

- dtype
  Optional. Data type of the metric result.
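The averaging modes above can be compared directly with a standalone metric. A minimal sketch, assuming keras3 is attached (the y_true/y_pred values are made up for illustration):

library(keras3)

y_true <- rbind(c(1, 1, 1),
                c(1, 0, 0),
                c(1, 1, 0))
y_pred <- rbind(c(0.2, 0.6, 0.7),
                c(0.2, 0.6, 0.6),
                c(0.6, 0.8, 0.0))

# average = NULL (the default): result() returns one F1 score per class
f1_per_class <- metric_f1_score(threshold = 0.5)
f1_per_class$update_state(y_true, y_pred)
f1_per_class$result()

# average = "macro": unweighted mean of the per-class scores
f1_macro <- metric_f1_score(average = "macro", threshold = 0.5)
f1_macro$update_state(y_true, y_pred)
f1_macro$result()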
Value

A Metric instance is returned. The Metric instance can be passed directly to compile(metrics = ), or used as a standalone object. See ?Metric for example usage.
Examples
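A minimal usage sketch (the model below is illustrative and not part of this metric's API):

library(keras3)

# Pass the metric to compile(); here, a small multi-label model
# with 3 output classes and sigmoid activations.
model <- keras_model_sequential(input_shape = 10) |>
  layer_dense(units = 3, activation = "sigmoid")

model |> compile(
  optimizer = "adam",
  loss = "binary_crossentropy",
  metrics = list(metric_f1_score(average = "macro", threshold = 0.5))
)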
See also
Other f score metrics: metric_fbeta_score()
Other metrics: Metric()
custom_metric()
metric_auc()
metric_binary_accuracy()
metric_binary_crossentropy()
metric_binary_focal_crossentropy()
metric_binary_iou()
metric_categorical_accuracy()
metric_categorical_crossentropy()
metric_categorical_focal_crossentropy()
metric_categorical_hinge()
metric_cosine_similarity()
metric_false_negatives()
metric_false_positives()
metric_fbeta_score()
metric_hinge()
metric_huber()
metric_iou()
metric_kl_divergence()
metric_log_cosh()
metric_log_cosh_error()
metric_mean()
metric_mean_absolute_error()
metric_mean_absolute_percentage_error()
metric_mean_iou()
metric_mean_squared_error()
metric_mean_squared_logarithmic_error()
metric_mean_wrapper()
metric_one_hot_iou()
metric_one_hot_mean_iou()
metric_poisson()
metric_precision()
metric_precision_at_recall()
metric_r2_score()
metric_recall()
metric_recall_at_precision()
metric_root_mean_squared_error()
metric_sensitivity_at_specificity()
metric_sparse_categorical_accuracy()
metric_sparse_categorical_crossentropy()
metric_sparse_top_k_categorical_accuracy()
metric_specificity_at_sensitivity()
metric_squared_hinge()
metric_sum()
metric_top_k_categorical_accuracy()
metric_true_negatives()
metric_true_positives()