Formula:
b2 <- beta^2
f_beta_score <- (1 + b2) * (precision * recall) / (precision * b2 + recall)

This is the weighted harmonic mean of precision and recall.
Its output range is [0, 1]. It works for both multi-class
and multi-label classification.
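As a quick numeric check of the formula (plain base R; the precision and recall values below are invented for illustration):

precision <- 0.8
recall <- 0.6
beta <- 2  # beta > 1 weights recall more heavily than precision

b2 <- beta^2
f_beta_score <- (1 + b2) * (precision * recall) / (precision * b2 + recall)
f_beta_score
#> 0.6315789
# Between recall and precision, pulled toward recall because beta = 2.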
Usage
metric_fbeta_score(
  ...,
  average = NULL,
  beta = 1,
  threshold = NULL,
  name = "fbeta_score",
  dtype = NULL
)

Arguments
- ...
For forward/backward compatibility.
- average
Type of averaging to be performed across per-class results in the multi-class case. Acceptable values are NULL, "micro", "macro" and "weighted". Defaults to NULL. If NULL, no averaging is performed and result() will return the score for each class. If "micro", compute metrics globally by counting the total true positives, false negatives and false positives. If "macro", compute metrics for each label, and return their unweighted mean. This does not take label imbalance into account. If "weighted", compute metrics for each label, and return their average weighted by support (the number of true instances for each label). This alters "macro" to account for label imbalance and can result in a score that is not between precision and recall. The sketch after this argument list illustrates the NULL and "macro" cases.
- beta
Determines the weight given to recall in the harmonic mean between precision and recall (see pseudocode equation above). Defaults to 1.
- threshold
Elements of y_pred greater than threshold are converted to 1, and the rest to 0. If threshold is NULL, the argmax of y_pred is converted to 1, and the rest to 0.
- name
Optional. String name of the metric instance.
- dtype
Optional. Data type of the metric result.
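A minimal standalone sketch of how average and threshold change the result, assuming the keras3 metric-object API ($update_state() and $result(), see ?Metric); the y_true/y_pred matrices are invented for illustration:

library(keras3)

y_true <- rbind(c(0, 1, 0),
                c(1, 0, 0))
y_pred <- rbind(c(0.1, 0.8, 0.1),
                c(0.6, 0.3, 0.1))

# average = NULL and threshold = NULL: the argmax of each row of y_pred
# is taken as the predicted class, and result() returns one score per class.
m <- metric_fbeta_score(average = NULL, beta = 1)
m$update_state(y_true, y_pred)
m$result()

# "macro" collapses the per-class scores into their unweighted mean, and a
# numeric threshold binarizes y_pred entrywise instead of taking the argmax.
m <- metric_fbeta_score(average = "macro", beta = 1, threshold = 0.5)
m$update_state(y_true, y_pred)
m$result()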
Value
A Metric instance is returned. The Metric instance can be passed
directly to compile(metrics = ), or used as a standalone object. See
?Metric for example usage.
Examples
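A sketch of typical usage in compile(), assuming the usual keras3 sequential-model workflow; the architecture, loss and optimizer are placeholders, and only the metrics argument is the point here:

library(keras3)

# Hypothetical three-class classifier; the layers are placeholders.
model <- keras_model_sequential(input_shape = 4) |>
  layer_dense(8, activation = "relu") |>
  layer_dense(3, activation = "softmax")

model |> compile(
  optimizer = "adam",
  loss = "categorical_crossentropy",
  metrics = list(metric_fbeta_score(beta = 2, average = "macro"))
)

During fit() and evaluate(), the score is then reported alongside the loss under the metric's name ("fbeta_score" by default).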
See also
Other f score metrics: metric_f1_score()
Other metrics: Metric() custom_metric() metric_auc() metric_binary_accuracy() metric_binary_crossentropy() metric_binary_focal_crossentropy() metric_binary_iou() metric_categorical_accuracy() metric_categorical_crossentropy() metric_categorical_focal_crossentropy() metric_categorical_hinge() metric_concordance_correlation() metric_cosine_similarity() metric_f1_score() metric_false_negatives() metric_false_positives() metric_hinge() metric_huber() metric_iou() metric_kl_divergence() metric_log_cosh() metric_log_cosh_error() metric_mean() metric_mean_absolute_error() metric_mean_absolute_percentage_error() metric_mean_iou() metric_mean_squared_error() metric_mean_squared_logarithmic_error() metric_mean_wrapper() metric_one_hot_iou() metric_one_hot_mean_iou() metric_pearson_correlation() metric_poisson() metric_precision() metric_precision_at_recall() metric_r2_score() metric_recall() metric_recall_at_precision() metric_root_mean_squared_error() metric_sensitivity_at_specificity() metric_sparse_categorical_accuracy() metric_sparse_categorical_crossentropy() metric_sparse_top_k_categorical_accuracy() metric_specificity_at_sensitivity() metric_squared_hinge() metric_sum() metric_top_k_categorical_accuracy() metric_true_negatives() metric_true_positives()