Precision, recall, and F1 score for binary classification
Precision, recall, and F1 score are three commonly used evaluation metrics in machine learning, particularly in binary classification problems.
Precision:
Precision measures how reliable the model's positive predictions are. Specifically, it is the proportion of true positive predictions out of all positive predictions. In other words, precision answers: of the examples the model predicted to be positive, how many are actually positive?
Precision is calculated as follows:
Precision = True positives / (True positives + False positives)
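For example, here is a minimal Python sketch that computes precision directly from these counts; the label arrays y_true and y_pred are hypothetical.

# Minimal sketch: precision from hypothetical true and predicted binary labels.
y_true = [1, 0, 1, 1, 0, 0, 1, 0]  # hypothetical ground-truth labels
y_pred = [1, 0, 1, 0, 1, 0, 1, 0]  # hypothetical model predictions

tp = sum(1 for t, p in zip(y_true, y_pred) if t == 1 and p == 1)  # true positives
fp = sum(1 for t, p in zip(y_true, y_pred) if t == 0 and p == 1)  # false positives

precision = tp / (tp + fp)
print(precision)  # 3 / (3 + 1) = 0.75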
Recall:
Recall measures how completely the model identifies positive examples. Specifically, it is the proportion of true positive predictions out of all actual positive cases. In other words, recall answers: of the positive examples in the data, how many did the model correctly identify?
Recall is calculated as follows:
Recall = True positives / (True positives + False negatives)
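Recall can be computed the same way, counting false negatives instead of false positives; this sketch reuses the hypothetical labels from the precision example.

# Minimal sketch: recall from the same hypothetical labels.
y_true = [1, 0, 1, 1, 0, 0, 1, 0]
y_pred = [1, 0, 1, 0, 1, 0, 1, 0]

tp = sum(1 for t, p in zip(y_true, y_pred) if t == 1 and p == 1)  # true positives
fn = sum(1 for t, p in zip(y_true, y_pred) if t == 1 and p == 0)  # false negatives

recall = tp / (tp + fn)
print(recall)  # 3 / (3 + 1) = 0.75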
F1 Score:
The F1 score summarizes the overall performance of the model by combining precision and recall. It is the harmonic mean of the two, so it is high only when both precision and recall are high. The F1 score ranges from 0 to 1, with 1 indicating perfect precision and recall, and 0 indicating poor performance.
The F1 score is calculated as follows:
F1 Score = 2 * (Precision * Recall) / (Precision + Recall)
Rearranging the formula gives an equivalent form:
F1 Score = 2 / ((1 / Precision) + (1 / Recall))
This form reflects the meaning of the F1 score: if either precision or recall is very low, the F1 score is also very low.
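As a quick check, a minimal Python sketch using the hypothetical precision and recall values of 0.75 from the examples above shows that the two forms agree:

# Both forms of the F1 formula give the same result.
precision = 0.75
recall = 0.75

f1 = 2 * (precision * recall) / (precision + recall)  # standard form
f1_alt = 2 / ((1 / precision) + (1 / recall))         # harmonic-mean form

print(f1, f1_alt)  # both print 0.75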
Overall, the F1 score is useful and easy to calculate. It is also used, for example, to choose the threshold epsilon in anomaly detection, as sketched below.
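Here is a minimal sketch of choosing epsilon by maximizing F1 on a labeled validation set; the arrays p_val (estimated densities p(x)) and y_val (1 = anomaly) are hypothetical, and an example is flagged as anomalous when p(x) < epsilon.

import numpy as np

# Hypothetical validation data: density estimates and anomaly labels.
p_val = np.array([0.30, 0.25, 0.01, 0.28, 0.02, 0.27, 0.26, 0.03])
y_val = np.array([0, 0, 1, 0, 1, 0, 0, 1])

best_epsilon, best_f1 = 0.0, 0.0
for epsilon in np.linspace(p_val.min(), p_val.max(), 1000):
    pred = (p_val < epsilon).astype(int)  # flag low-density points as anomalies
    tp = np.sum((pred == 1) & (y_val == 1))
    fp = np.sum((pred == 1) & (y_val == 0))
    fn = np.sum((pred == 0) & (y_val == 1))
    if tp == 0:
        continue  # skip thresholds with no true positives (F1 would be 0)
    precision = tp / (tp + fp)
    recall = tp / (tp + fn)
    f1 = 2 * (precision * recall) / (precision + recall)
    if f1 > best_f1:
        best_f1, best_epsilon = f1, epsilon

print(best_epsilon, best_f1)  # the epsilon with the highest validation F1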