
Table 2 Results from implemented models when tested on multiple benchmark datasets

From: Supervised promoter recognition: a benchmark framework

hg38 dataset (4,602,408 non-promoter sequences)

Model      Sn      Sp      PPV      MCC
CNNProm    0.584   0.943   0.014    0.0009
ICNNP      0.474   0.944   0.012    0.0007
DProm      0.931   0.025   0.001   −0.0001

hg38chr1 dataset (192,476 non-promoter sequences)

Model      Sn      Sp      PPV      MCC
ICNNP      0.470   0.890   0.077    0.154
DProm      0.891   0.024   0.017   −0.073

hg38chr2 dataset (224,056 non-promoter sequences)

Model      Sn      Sp      PPV      MCC
ICNNP      0.487   0.908   0.058    0.143
DProm      0.887   0.022   0.011   −0.066

mm10chr1 dataset (191,094 non-promoter sequences)

Model      Sn      Sp      PPV      MCC
ICNNP      0.342   0.904   0.058    0.106
DProm      0.867   0.046   0.15    −0.053

mm10chr2 dataset (193,183 non-promoter sequences)

Model      Sn      Sp      PPV      MCC
ICNNP      0.369   0.895   0.070    0.120
DProm      0.856   0.050   0.019   −0.060
  1. Models are clearly lacking in precision (PPV)
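For reference, the column headers are the standard confusion-matrix metrics: Sn (sensitivity), Sp (specificity), PPV (positive predictive value, i.e. precision), and MCC (Matthews correlation coefficient). The sketch below shows how these values can be computed from raw counts; the function name and interface are illustrative and not taken from the paper's benchmark framework.

```python
import math

def confusion_metrics(tp: int, fp: int, tn: int, fn: int) -> dict:
    """Compute Sn, Sp, PPV, and MCC from confusion-matrix counts.

    Illustrative helper (not the paper's code): uses the standard
    definitions of sensitivity, specificity, precision, and the
    Matthews correlation coefficient.
    """
    sn = tp / (tp + fn) if (tp + fn) else 0.0    # sensitivity / recall
    sp = tn / (tn + fp) if (tn + fp) else 0.0    # specificity
    ppv = tp / (tp + fp) if (tp + fp) else 0.0   # precision
    denom = math.sqrt((tp + fp) * (tp + fn) * (tn + fp) * (tn + fn))
    mcc = (tp * tn - fp * fn) / denom if denom else 0.0
    return {"Sn": sn, "Sp": sp, "PPV": ppv, "MCC": mcc}
```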