Table 8 Comparison of run-time (in days) statistics of BioALBERT versus BioBERT

From: Benchmarking for biomedical natural language processing tasks with a domain specific ALBERT

| Model | Training time (days) |
| --- | --- |
| \({\text {BioBERT}}_{{Base2}}\) | 23.00 |
| \({\text {BioBERT}}_{{Base1}}\) | 10.00 |
| \({\text {BioALBERT}}_{{Base1}}\) | 3.00 |
| \({\text {BioALBERT}}_{{Base2}}\) | 4.08 |
| \({\text {BioALBERT}}_{{Large1}}\) | 2.83 |
| \({\text {BioALBERT}}_{{Large2}}\) | 3.88 |
| \({\text {BioALBERT}}_{{Base3}}\) | 4.02 |
| \({\text {BioALBERT}}_{{Base4}}\) | 4.45 |
| \({\text {BioALBERT}}_{{Large3}}\) | 4.62 |
| \({\text {BioALBERT}}_{{Large4}}\) | 4.67 |

  1. Refer to Table 4 for details of the BioALBERT model sizes. \({\text {BioBERT}}_{{Base1}}\) and \({\text {BioBERT}}_{{Base2}}\) refer to BioBERT trained on PubMed and PubMed+PMC, respectively
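To make the comparison concrete, the speedup of BioALBERT over BioBERT on matching pretraining corpora can be computed directly from the table. The snippet below is an illustrative sketch (the dictionary keys and the `speedup` helper are hypothetical names, not from the paper); the day counts are taken from Table 8.

```python
# Training times (in days) from Table 8; keys are hypothetical labels.
training_days = {
    "BioBERT_Base1": 10.00,    # trained on PubMed
    "BioBERT_Base2": 23.00,    # trained on PubMed+PMC
    "BioALBERT_Base1": 3.00,   # trained on PubMed
    "BioALBERT_Base2": 4.08,   # trained on PubMed+PMC
}

def speedup(baseline: str, model: str) -> float:
    """Ratio of baseline training time to model training time."""
    return training_days[baseline] / training_days[model]

# Speedup on matching corpora:
print(round(speedup("BioBERT_Base1", "BioALBERT_Base1"), 2))  # 3.33
print(round(speedup("BioBERT_Base2", "BioALBERT_Base2"), 2))  # 5.64
```

On the PubMed corpus BioALBERT trains roughly 3.3x faster than BioBERT, and on PubMed+PMC roughly 5.6x faster.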