ROC curve for REGAL. ROC curves evaluate a classifier's ability to distinguish positive from negative cases by plotting sensitivity (the true positive rate) against the false positive rate across decision thresholds. With any classifier, increases in sensitivity inevitably lead to more false positives. When a classifier behaves in an entirely random fashion, each stepwise increase in sensitivity brings a matching stepwise increase in the false positive rate; this appears on the plot below as a 45° line. The better the classifier, the higher the curve rises along the left-hand side of the plot, indicating stepwise gains in sensitivity with minimal increases in the false positive rate. This is the case for REGAL (line with black boxes). We note that the optimal trade-off between sensitivity and false positive rate occurs at a sensitivity of 0.82 and a false positive rate of 0.09 (indicated by the dashed line). In other words, the best performance by REGAL on the A. thaliana mitochondrial data set yields a sensitivity of 82% and a false positive rate of just 9%.
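The threshold sweep described above can be sketched as follows. This is a minimal, self-contained illustration of how ROC points and an optimal operating point (here chosen by Youden's J statistic, sensitivity minus false positive rate) could be computed from binary labels and classifier scores; the function names and the use of Youden's J are illustrative assumptions, not part of REGAL itself.

```python
# Illustrative sketch of ROC computation; not REGAL's actual code.

def roc_points(labels, scores):
    """Sweep the decision threshold from strict to lax and return
    (false positive rate, true positive rate) pairs.

    labels: list of 0/1 ground-truth values (1 = positive case)
    scores: list of real-valued classifier scores (higher = more positive)
    """
    pos = sum(labels)
    neg = len(labels) - pos
    # Sort cases by descending score so each step admits one more prediction.
    order = sorted(range(len(scores)), key=lambda i: -scores[i])
    tp = fp = 0
    points = [(0.0, 0.0)]  # strictest threshold: nothing called positive
    for i in order:
        if labels[i] == 1:
            tp += 1
        else:
            fp += 1
        points.append((fp / neg, tp / pos))
    return points

def best_operating_point(points):
    """Pick the ROC point maximizing Youden's J = sensitivity - FPR."""
    return max(points, key=lambda p: p[1] - p[0])

labels = [1, 1, 0, 1, 0, 0]
scores = [0.9, 0.8, 0.7, 0.6, 0.4, 0.2]
pts = roc_points(labels, scores)
fpr, tpr = best_operating_point(pts)
```

A random classifier's points would fall along the diagonal (J near zero everywhere), while a good classifier yields a point high on the left of the plot, as the caption describes for REGAL.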