
Table 3 The overall experimental results on baselines, MuFRF, and the ablation study of MuFRF

From: Multi-view feature representation and fusion for drug-drug interactions prediction

| Method | Accuracy (binary) | Precision (binary) | Recall (binary) | F1 (binary) | Mean-Accuracy (multi-class) | Macro-Precision (multi-class) | Macro-Recall (multi-class) | Macro-F1 (multi-class) |
|---|---|---|---|---|---|---|---|---|
| DeepWalk | 0.8130 | 0.7970 | 0.8390 | 0.8170 | 0.8000 | 0.8220 | 0.7101 | 0.7469 |
| LINE | 0.7810 | 0.7710 | 0.8000 | 0.7850 | 0.7506 | 0.6870 | 0.5451 | 0.5804 |
| DeepDDI | 0.9166 | 0.9121 | 0.9241 | 0.9167 | 0.8768 | 0.7986 | 0.7593 | 0.7662 |
| KGDDI | 0.8926 | 0.8936 | 0.8925 | 0.8925 | 0.8923 | 0.7945 | 0.7667 | 0.7666 |
| KGNN | 0.9034 | 0.9058 | 0.8999 | 0.9029 | 0.9127 | 0.8583 | 0.8170 | 0.8291 |
| MUFFIN | 0.9913 | 0.9912 | 0.9913 | 0.9912 | 0.9648 | 0.9568 | 0.9482 | 0.9495 |
| RANEDDI | – | – | – | – | 0.9707 | 0.9260 | 0.9096 | 0.9161 |
| GRPMF | – | – | – | – | 0.9617 | 0.9627 | 0.9617 | 0.9622 |
| MuFRF | **0.9945** | **0.9945** | **0.9945** | **0.9945** | **0.9721** | **0.9765** | **0.9676** | **0.9699** |
| MuFRF_ST | 0.9509 | 0.9510 | 0.9509 | 0.9509 | 0.9524 | 0.9344 | 0.9254 | 0.9248 |
| MuFRF_KG | 0.9200 | 0.9200 | 0.9199 | 0.9199 | 0.9355 | 0.8971 | 0.8641 | 0.8724 |
| MuFRF-c0 | 0.9840 | 0.9840 | 0.9840 | 0.9840 | 0.9705 | 0.9669 | 0.9570 | 0.9594 |
| MuFRF-c1 | 0.9823 | 0.9823 | 0.9822 | 0.9822 | 0.9718 | 0.9754 | 0.9636 | 0.9667 |
| MuFRF-add | 0.9823 | 0.9823 | 0.9823 | 0.9823 | 0.9730 | 0.9684 | 0.9616 | 0.9625 |
| MuFRF-p | 0.9814 | 0.9815 | 0.9814 | 0.9814 | 0.9703 | 0.9662 | 0.9556 | 0.9581 |
| MuFRF-attn | 0.9821 | 0.9821 | 0.9821 | 0.9821 | 0.9700 | 0.9652 | 0.9574 | 0.9585 |

  1. Bold values indicate that our proposed model MuFRF outperforms all baselines on every metric for both the binary-class and multi-class classification tasks
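
For reference, the binary metrics (Accuracy, Precision, Recall, F1) and the macro-averaged multi-class metrics in the table follow their standard definitions. The sketch below shows how such values are commonly computed with scikit-learn; it is not the authors' evaluation code, and the label arrays `y_true` and `y_pred` are hypothetical placeholders for ground-truth and predicted DDI labels.

```python
# Minimal sketch (not the authors' code) of the metrics reported in Table 3,
# assuming integer-encoded interaction labels in y_true / y_pred.
from sklearn.metrics import accuracy_score, precision_recall_fscore_support


def binary_metrics(y_true, y_pred):
    """Accuracy, Precision, Recall, F1 for the binary-class DDI task."""
    acc = accuracy_score(y_true, y_pred)
    p, r, f1, _ = precision_recall_fscore_support(y_true, y_pred, average="binary")
    return acc, p, r, f1


def multiclass_metrics(y_true, y_pred):
    """Mean accuracy and macro-averaged Precision/Recall/F1 for the multi-class task."""
    acc = accuracy_score(y_true, y_pred)  # overall (mean) accuracy across samples
    # Macro averaging: compute the metric per interaction type, then average,
    # so rare interaction types weigh the same as frequent ones.
    p, r, f1, _ = precision_recall_fscore_support(
        y_true, y_pred, average="macro", zero_division=0
    )
    return acc, p, r, f1


if __name__ == "__main__":
    # Toy example with three hypothetical interaction types (0, 1, 2).
    y_true = [0, 1, 2, 2, 1, 0]
    y_pred = [0, 1, 2, 1, 1, 0]
    print(multiclass_metrics(y_true, y_pred))
```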