From: Popularity and performance of bioinformatics software: the case of gene set analysis
References | Scope | Size | Criteria | Best performing methods
---|---|---|---|---
Naeem et al. [21] | ORA and FCS methods | 14 methods | Method’s AUC (evaluated by predicting targets of transcription factors (TFs) and miRNAs) | ANOVA, Z-SCORE, and Wilcoxon’s rank sum (WRS)
Tarca et al. [22] | ORA, FCS, and SS methods | 16 methods | Prioritization, sensitivity, and false positive rate (FPR) | GLOBALTEST and PLAGE (sensitivity), PADOG and ORA (prioritization), and CAMERA (FPR). Authors’ general recommendation: PLAGE, GLOBALTEST, and PADOG
Bayerlova et al. [23] | ORA, FCS, and PT methods | 7 methods | Sensitivity and prioritization (benchmark datasets); sensitivity, specificity, and accuracy (simulations of pathway overlap) | Benchmark: CePaGSA (sensitivity) and PathNet (prioritization). Simulations with original pathways: CePaGSA (sensitivity) and WRS (specificity and accuracy). Simulations with non-overlapping pathways: KS (sensitivity), and SPIA, CePaORA, CePaGSA, and PathNet (specificity and accuracy)
Jaakkola et al. [24] | ORA, FCS, and PT methods | 5 methods | Consistency of significant pathways between datasets, and sensitivity | SPIA and CePaORA (consistency), and SPIA, CePaORA, and NetGSA (sensitivity). Authors’ general recommendation: SPIA
De Meyer et al. [25] | ORA, FCS, and NI methods | 4 methods | Prioritization, sensitivity, and specificity | PADOG (specificity) and BinoX (sensitivity)
Lim et al. [26] | SS/pathway-activity methods | 13 methods | Classification performance, preservation of data structure, robustness to noise, and reproducibility between pathway databases | ESEA, Pathifier, SAS, and PADOG (classification tasks), Pathifier and PLAGE (data structure), ssGSEA (robustness), and individPath, Pathifier, and SAS (reproducibility). Authors’ general recommendation: Pathifier, SAS, and individPath
Nguyen et al. [27] | ORA, FCS, and PT methods | 13 methods | In order of importance: number of biased pathways, prioritization, method’s AUC, and sensitivity (evaluated using both disease target pathways and knockout (KO) data) | GSEA (bias), PADOG (prioritization), ROntoTools (AUC), and CePaGSA (p-values). Authors’ general recommendation: ROntoTools
Ma et al. [28] | FCS, PT, and NI methods | 9 methods | Ranking of methods by empirical power | DEGraph, followed by PathNet and NetGSA
Zyla et al. [29] | ORA, FCS, and SS methods | 9 methods | Sensitivity, FPR, prioritization, computational time, and reproducibility | PLAGE (sensitivity), ORA and PADOG (specificity/FPR), PADOG (prioritization), and CERNO (reproducibility) |
Geistlinger et al. [30] | ORA, FCS, and SS methods | 10 methods | Sensitivity, computational time, and phenotype relevance score | Authors’ general recommendation: ROAST and GSVA (self-contained hypothesis), and ORA and PADOG (competitive hypothesis)
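
As a point of reference for the simplest family that recurs across these benchmarks, the sketch below illustrates what an ORA (over-representation analysis) test computes: a one-sided hypergeometric p-value for the overlap between a list of differentially expressed genes and a gene set, against a background universe. This is only an illustrative toy example with hypothetical gene identifiers and a made-up helper name (`ora_p_value`), not a reimplementation of any tool evaluated in the studies above; it assumes SciPy is available.

```python
# Minimal ORA-style sketch (assumption: illustrative only, hypothetical gene IDs)
from scipy.stats import hypergeom

def ora_p_value(de_genes, gene_set, background):
    """One-sided hypergeometric p-value for over-representation of `gene_set`
    among `de_genes`, relative to the `background` universe."""
    background = set(background)
    de_genes = set(de_genes) & background
    gene_set = set(gene_set) & background
    M = len(background)           # universe size
    n = len(gene_set)             # genes in the set ("successes" in the universe)
    N = len(de_genes)             # DE genes drawn from the universe
    k = len(de_genes & gene_set)  # DE genes that fall inside the set
    # P(X >= k) under the hypergeometric null of no association
    return hypergeom.sf(k - 1, M, n, N)

# Hypothetical toy example: 3 of 4 DE genes fall in a 5-gene pathway out of 100 genes
background = [f"g{i}" for i in range(100)]
pathway = ["g1", "g2", "g3", "g4", "g5"]
de = ["g1", "g2", "g3", "g50"]
print(ora_p_value(de, pathway, background))  # small p-value indicates enrichment
```

The FCS, SS, PT, and NI methods compared in the table go beyond this simple count-based test by using gene-level scores, per-sample statistics, or pathway topology, which is why the benchmarks above evaluate them on criteria such as sensitivity, FPR, and prioritization rather than on the test statistic alone.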