
Table 3 Prediction performance of three basic models with different types of pre-trained word embeddings

From: Refining electronic medical records representation in manifold subspace

| Method | Embedding | Macro AUC | Micro AUC | Macro F1 | Micro F1 | Test loss value | Top-10 recall |
|--------|-----------|-----------|-----------|----------|----------|-----------------|---------------|
| RNN  | Random   | 0.854 | 0.972 | 0.204 | 0.653 | 0.032 | 0.772 |
|      | FastText | 0.842 | 0.973 | 0.149 | 0.628 | 0.032 | 0.774 |
|      | GloVe    | 0.861 | 0.974 | 0.219 | 0.656 | 0.031 | 0.788 |
|      | Word2Vec | 0.851 | 0.974 | 0.165 | 0.642 | 0.031 | 0.783 |
|      | BERT     | 0.500 | 0.908 | 0.000 | 0.000 | 0.061 | 0.442 |
|      | ALBERT   | 0.503 | 0.915 | 0.026 | 0.018 | 0.054 | 0.446 |
|      | BioBERT  | 0.513 | 0.923 | 0.051 | 0.038 | 0.052 | 0.457 |
|      | BlueBERT | 0.533 | 0.939 | 0.075 | 0.043 | 0.050 | 0.471 |
|      | Ours     | 0.857 | 0.976 | 0.182 | 0.659 | 0.030 | 0.793 |
| CNN  | Random   | 0.825 | 0.968 | 0.214 | 0.626 | 0.040 | 0.753 |
|      | FastText | 0.665 | 0.921 | 0.012 | 0.223 | 0.053 | 0.488 |
|      | GloVe    | 0.842 | 0.972 | 0.188 | 0.622 | 0.034 | 0.767 |
|      | Word2Vec | 0.692 | 0.925 | 0.021 | 0.313 | 0.052 | 0.492 |
|      | BERT     | 0.549 | 0.906 | 0.000 | 0.000 | 0.059 | 0.442 |
|      | ALBERT   | 0.556 | 0.914 | 0.014 | 0.012 | 0.053 | 0.453 |
|      | BioBERT  | 0.559 | 0.921 | 0.015 | 0.041 | 0.047 | 0.459 |
|      | BlueBERT | 0.567 | 0.929 | 0.021 | 0.047 | 0.042 | 0.464 |
|      | Ours     | 0.852 | 0.974 | 0.217 | 0.628 | 0.038 | 0.779 |
| CAML | Random   | 0.855 | 0.978 | 0.257 | 0.656 | 0.032 | 0.806 |
|      | FastText | 0.856 | 0.980 | 0.270 | 0.656 | 0.031 | 0.809 |
|      | GloVe    | 0.867 | 0.978 | 0.272 | 0.647 | 0.033 | 0.801 |
|      | Word2Vec | 0.855 | 0.980 | 0.274 | 0.662 | 0.030 | 0.813 |
|      | BERT     | 0.497 | 0.908 | 0.000 | 0.000 | 0.058 | 0.442 |
|      | ALBERT   | 0.505 | 0.916 | 0.026 | 0.022 | 0.054 | 0.457 |
|      | BioBERT  | 0.513 | 0.924 | 0.045 | 0.041 | 0.048 | 0.465 |
|      | BlueBERT | 0.534 | 0.934 | 0.060 | 0.076 | 0.042 | 0.478 |
|      | Ours     | 0.886 | 0.982 | 0.270 | 0.673 | 0.029 | 0.823 |

1. Bold values denote the best result for each row of data (%)
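
For readers reproducing these numbers, the sketch below shows one common way to compute the reported multi-label metrics (macro/micro AUC, macro/micro F1, top-10 recall) with scikit-learn and NumPy. It is an illustrative example, not the authors' evaluation code: the array sizes, random inputs, and the 0.5 decision threshold are assumptions for demonstration only.

```python
# Illustrative sketch of the Table 3 metrics; not the authors' evaluation code.
import numpy as np
from sklearn.metrics import roc_auc_score, f1_score

rng = np.random.default_rng(0)
n_samples, n_labels = 32, 50                    # hypothetical sizes, not from the paper
y_true = rng.integers(0, 2, size=(n_samples, n_labels))   # ground-truth label matrix
y_prob = rng.random((n_samples, n_labels))                 # predicted probabilities
y_pred = (y_prob >= 0.5).astype(int)                       # assumed 0.5 threshold

macro_auc = roc_auc_score(y_true, y_prob, average="macro")
micro_auc = roc_auc_score(y_true, y_prob, average="micro")
macro_f1 = f1_score(y_true, y_pred, average="macro", zero_division=0)
micro_f1 = f1_score(y_true, y_pred, average="micro", zero_division=0)

# Top-10 recall: fraction of each sample's true labels found among its
# 10 highest-scoring predictions, averaged over samples with at least one true label.
k = 10
top_k = np.argsort(-y_prob, axis=1)[:, :k]
hits = np.take_along_axis(y_true, top_k, axis=1).sum(axis=1)
positives = y_true.sum(axis=1)
top10_recall = np.mean(hits[positives > 0] / positives[positives > 0])

print(macro_auc, micro_auc, macro_f1, micro_f1, top10_recall)
```

Note that macro-averaged scores weight every label equally, which is why the macro F1 values in the table are much lower than the micro F1 values: rare codes that are often missed pull the macro average down.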