Table 1 Comparison of model selection methods

From: Learning mixed graphical models with separate sparsity parameters and stability-based model selection

| Method | Precision | Recall | F1-score | Matthews CC | Accuracy |
| --- | --- | --- | --- | --- | --- |
| AIC | 0.1104 (0.002) | 0.9698 (0.004) | 0.1982 (0.003) | 0.2882 (0.003) | 0.7952 (0.003) |
| BIC | 0.4588 (0.028) | 0.8633 (0.007) | 0.5890 (0.025) | 0.6098 (0.022) | 0.9652 (0.004) |
| CV | 0.1530 (0.003) | 0.9694 (0.004) | 0.2640 (0.005) | 0.3539 (0.004) | 0.8587 (0.003) |
| Oracle | 0.9149 (0.015) | 0.7868 (0.021) | 0.8397 (0.009) | 0.8416 (0.008) | 0.9923 (0.000) |
| StARS (1 λ) | 0.8988 (0.018) | 0.4993 (0.010) | 0.6408 (0.011) | 0.6632 (0.011) | 0.9854 (0.001) |
| StEPS (3 λ) | 0.9159 (0.014) | 0.6720 (0.009) | 0.7731 (0.007) | 0.7787 (0.007) | 0.9897 (0.000) |
  1. AIC Akaike information criterion, BIC Bayesian information criterion, CV cross-validation; Oracle denotes the best possible prediction performance (accuracy maximized using the true graph)
  2. Values are the mean (standard error) of classification performance over 20 datasets simulated from scale-free networks
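For reference, the metrics reported in the table are standard edge-classification measures computed from confusion counts of predicted versus true graph edges. A minimal sketch (the function name and the example counts are illustrative, not taken from the paper):

```python
import math

def edge_metrics(tp, fp, fn, tn):
    """Compute precision, recall, F1, Matthews correlation coefficient,
    and accuracy from edge-level confusion counts (illustrative helper)."""
    precision = tp / (tp + fp)
    recall = tp / (tp + fn)
    f1 = 2 * precision * recall / (precision + recall)
    accuracy = (tp + tn) / (tp + fp + fn + tn)
    # MCC balances all four cells, useful here because true non-edges
    # vastly outnumber true edges in sparse graphs.
    mcc = (tp * tn - fp * fn) / math.sqrt(
        (tp + fp) * (tp + fn) * (tn + fp) * (tn + fn)
    )
    return {"precision": precision, "recall": recall,
            "f1": f1, "mcc": mcc, "accuracy": accuracy}
```

Note that accuracy is inflated under sparsity (most edges are absent), which is why methods with very different precision, such as AIC and BIC, still show accuracy above 0.79; MCC and F1 are the more discriminating columns.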