Table 7 Overall retrieval performance of different models for 2006 and 2007 queries

From: A supervised term ranking model for diversity enhanced biomedical information retrieval

 

| Model | Document | Passage | Passage2 | Aspect |
|---|---|---|---|---|
| 2006 queries | | | | |
| Language model [32] | 0.3178 | 0.0205 | 0.0239 | 0.1983 |
| Relevance model [33] | 0.3194 | 0.0207 | 0.0240 | 0.2023 |
| Term dependency [15] | 0.3198 | 0.0208 | 0.0254 | 0.1785 |
| SVM-based SQE [10] | 0.3050 | 0.0237 | 0.0292 | 0.2447 |
| ListNet [34] | 0.3216 | 0.0234 | 0.0290 | 0.2256 |
| RankSVM [29] | 0.3065 | 0.0235 | 0.0335 | 0.2632 |
| Our model | 0.3282* | 0.0249* | 0.0345* | 0.2828* |
| 2007 queries | | | | |
| Language model [32] | 0.2587 | 0.0646 | 0.0876 | 0.2000 |
| Relevance model [33] | 0.2678 | 0.0720 | 0.0963 | 0.2302 |
| Term dependency [15] | 0.2804 | 0.0683 | 0.0939 | 0.1974 |
| SVM-based SQE [10] | 0.2833 | 0.0729 | 0.0999 | 0.2298 |
| ListNet [34] | 0.2819 | 0.0739 | 0.1012 | 0.2255 |
| RankSVM [29] | 0.3226 | 0.0844 | 0.1160 | 0.2467 |
| Our model | 0.3337* | 0.0847* | 0.1155 | 0.2713* |

In the table, an asterisk (*) indicates significant improvements over the RankSVM-based model and a dagger (†) indicates significant improvements over the ListNet-based model
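Markers like those in the footnote are typically obtained from a per-query paired significance test against each baseline. The table page does not state which test or threshold the authors use, so the sketch below is only illustrative: it assumes a paired two-sided t-test over per-query average precision values at p < 0.05, and the function and argument names are hypothetical.

```python
# Minimal sketch (not from the paper): producing '*'/'†'-style markers for one metric.
# Assumption: a paired two-sided t-test over per-query AP scores at alpha = 0.05.
from statistics import mean
from scipy.stats import ttest_rel


def significance_markers(our_ap, ranksvm_ap, listnet_ap, alpha=0.05):
    """Return footnote markers for one metric (e.g. Document-level scores).

    Each argument is a list of per-query scores for the same query set,
    so the lists are aligned query by query.
    """
    markers = ""

    # '*' : significant improvement over the RankSVM-based model
    _, p_ranksvm = ttest_rel(our_ap, ranksvm_ap)
    if p_ranksvm < alpha and mean(our_ap) > mean(ranksvm_ap):
        markers += "*"

    # '†' : significant improvement over the ListNet-based model
    _, p_listnet = ttest_rel(our_ap, listnet_ap)
    if p_listnet < alpha and mean(our_ap) > mean(listnet_ap):
        markers += "†"

    return markers
```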