In a recent project I wondered why I got exactly the same value for precision, recall, and the F1 score when using scikit-learn’s metrics. The project involved a simple classification problem in which each input is mapped to exactly \(1\) of \(n\) classes. I was using micro averaging for the metric functions, which means the following according to sklearn’s documentation:
Calculate metrics globally by counting the total true positives, false negatives and false positives.
According to the documentation, this behaviour is correct:
Note that for “micro”-averaging in a multiclass setting with all labels included will produce equal precision, recall and F, while “weighted” averaging may produce an F-score that is not between precision and recall.
After thinking about it for a bit, I figured out why this is the case. In this article, I will explain the reasons.
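First, here is a minimal sketch that reproduces the effect. The labels are made up for illustration; any single-label multiclass problem will do:

```python
from sklearn.metrics import precision_score, recall_score, f1_score

# Hypothetical multiclass labels: each sample belongs to exactly 1 of 3 classes.
y_true = [0, 1, 2, 0, 1, 2, 0, 2]
y_pred = [0, 2, 1, 0, 0, 1, 0, 2]

# With average="micro", all three metrics come out identical
# (here: 4 correct predictions out of 8, i.e. 0.5).
print(precision_score(y_true, y_pred, average="micro"))  # 0.5
print(recall_score(y_true, y_pred, average="micro"))     # 0.5
print(f1_score(y_true, y_pred, average="micro"))         # 0.5
```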