Table 3 Summary of the performance evaluation metrics for the multiclass classification models

From: Prediction of diabetes disease using an ensemble of machine learning multi-classifier models

| Measure | Definition | Formula |
| --- | --- | --- |
| Average accuracy | The classifier's average per-class effectiveness | \(\frac{\sum_{i=1}^{k}\frac{{tp}_{i}+{tn}_{i}}{{tp}_{i}+{tn}_{i}+{fp}_{i}+{fn}_{i}}}{k}\) (8) |
| Precision (micro-averaged) | Agreement of the true class labels with the classifier's labels, computed from the sums of per-class decisions | \(\frac{\sum_{i=1}^{k}{tp}_{i}}{\sum_{i=1}^{k}\left({tp}_{i}+{fp}_{i}\right)}\) (9) |
| Recall (micro-averaged) | The classifier's effectiveness in identifying class labels, computed from the sums of per-class decisions | \(\frac{\sum_{i=1}^{k}{tp}_{i}}{\sum_{i=1}^{k}\left({tp}_{i}+{fn}_{i}\right)}\) (10) |
| F1-score | The harmonic mean of the macro-averaged precision and recall | \(\frac{2\times Precision\times Recall}{Precision+Recall}\) (11) |
| ROC (AUC) | Receiver operating characteristic (ROC) curve together with the area under it (AUC); the AUC measures the ranking of predictions rather than their absolute values | — |
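
As a concrete illustration, the minimal sketch below shows how the metrics in Table 3 could be computed with scikit-learn for a k = 3 class problem. It is not the authors' code; the labels, predictions, and class probabilities are hypothetical placeholders.

```python
import numpy as np
from sklearn.metrics import (confusion_matrix, precision_score,
                             recall_score, f1_score, roc_auc_score)

# Hypothetical true labels, hard predictions, and predicted class
# probabilities for k = 3 classes (illustrative only).
y_true = np.array([0, 1, 2, 2, 1, 0, 2, 1])
y_pred = np.array([0, 1, 2, 1, 1, 0, 2, 2])
y_prob = np.array([
    [0.8, 0.1, 0.1],
    [0.2, 0.7, 0.1],
    [0.1, 0.2, 0.7],
    [0.2, 0.5, 0.3],
    [0.3, 0.6, 0.1],
    [0.7, 0.2, 0.1],
    [0.1, 0.1, 0.8],
    [0.2, 0.3, 0.5],
])

# Average accuracy, Eq. (8): mean of the per-class (tp + tn) / (tp + tn + fp + fn)
cm = confusion_matrix(y_true, y_pred)
tp = np.diag(cm)
fp = cm.sum(axis=0) - tp
fn = cm.sum(axis=1) - tp
tn = cm.sum() - tp - fp - fn
avg_accuracy = np.mean((tp + tn) / (tp + tn + fp + fn))

# Micro-averaged precision, recall, and F1-score, Eqs. (9)-(11)
precision = precision_score(y_true, y_pred, average="micro")
recall = recall_score(y_true, y_pred, average="micro")
f1 = f1_score(y_true, y_pred, average="micro")

# Multiclass ROC AUC (one-vs-rest) from the predicted class probabilities
auc = roc_auc_score(y_true, y_prob, multi_class="ovr")

print(avg_accuracy, precision, recall, f1, auc)
```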