A method for named entity normalization in biomedical articles: application to diseases and plants

Background: In biomedical articles, named entity recognition (NER), which identifies entity names in text, is an essential step for extracting biological knowledge. After NER is applied to articles, the next step is to normalize the identified names to standard concepts (e.g., disease names are mapped to the National Library of Medicine's Medical Subject Headings disease terms). Many entity normalization methods for biomedical articles rely on domain-specific dictionaries to resolve synonyms and abbreviations. However, except for some entity types such as genes, these dictionaries are not comprehensive. In recent years, biomedical articles have accumulated rapidly, and neural network-based algorithms that exploit large amounts of unlabeled data have shown considerable success in several natural language processing problems.

Results: We propose an approach for normalizing biological entities, such as disease names and plant names, using word embeddings to represent semantic spaces. For diseases, training data from the National Center for Biotechnology Information (NCBI) disease corpus and unlabeled data from PubMed abstracts were used to construct word representations. For plants, a training corpus that we manually constructed and unlabeled PubMed abstracts were used. The proposed approach performed better than using only the training corpus or only the unlabeled data, and it improved normalization accuracy even when the dictionaries were not comprehensive. We obtained F-scores of 0.808 and 0.690 for normalizing the NCBI disease corpus and the manually constructed plant corpus, respectively. We further evaluated our approach on a data set from the disease normalization task of the BioCreative V challenge. When only the disease corpus was used as a dictionary, our approach significantly outperformed the best system of the task.

Conclusions: The proposed approach shows robust performance for normalizing biological entities. The manually constructed plant corpus and the proposed model are available at http://gcancer.org/plant and http://gcancer.org/normalization, respectively.
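As a rough illustration of the embedding-based normalization idea summarized above, the sketch below maps a recognized mention to its closest dictionary concept by cosine similarity between averaged word vectors. This is a minimal sketch, not the authors' exact model: the `load_vectors` helper, the vector file name, and the toy dictionary are hypothetical, and a real system would use embeddings trained on PubMed abstracts as described in the paper.

```python
import numpy as np

def load_vectors(path):
    """Load whitespace-separated word vectors (hypothetical file format:
    one token per line followed by its vector components)."""
    vectors = {}
    with open(path, encoding="utf-8") as f:
        for line in f:
            parts = line.rstrip().split()
            vectors[parts[0]] = np.array(parts[1:], dtype=float)
    return vectors

def phrase_vector(phrase, vectors):
    """Average the vectors of the in-vocabulary tokens of a phrase."""
    tokens = [vectors[t] for t in phrase.lower().split() if t in vectors]
    return np.mean(tokens, axis=0) if tokens else None

def normalize(mention, dictionary, vectors):
    """Return the dictionary concept whose name is most similar
    (by cosine similarity) to the mention; None if no match is possible."""
    m = phrase_vector(mention, vectors)
    if m is None:
        return None
    best, best_sim = None, -1.0
    for concept_id, name in dictionary.items():
        c = phrase_vector(name, vectors)
        if c is None:
            continue
        sim = float(np.dot(m, c) / (np.linalg.norm(m) * np.linalg.norm(c)))
        if sim > best_sim:
            best, best_sim = concept_id, sim
    return best

# Hypothetical usage:
# vectors = load_vectors("pubmed_vectors.txt")
# dictionary = {"D003924": "type 2 diabetes mellitus"}
# print(normalize("adult-onset diabetes", dictionary, vectors))
```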


Selection of abstracts
For the selection of candidate abstracts to be annotated by two curators, we extracted 13,408,621 PubMed abstracts using PubTator [1]. Pre-processing steps were then performed as follows (a sketch of the dictionary matching step appears after this list):
1. Annotating plant names in abstracts requires identifying each entity mention, its identifier, and the offset of the mention in the text. Because it is impractical to manually annotate all plant mentions across whole abstracts, we used a well-known named entity recognition tool: we applied LingPipe [2] to the PubMed abstracts to locate plant names using NCBI taxonomy [3].
2. As a result of the first step, we obtained 773,312 abstracts and randomly selected 208 candidate abstracts to construct the plant corpus.
3. For the annotation task itself, we used the BRAT system [4], a web-based tool designed for text annotation.
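The following is a minimal sketch of the kind of dictionary-based matching this first step performs. LingPipe itself is a Java library with its own dictionary chunker; this Python stand-in only illustrates greedy longest-match lookup of taxonomy names in an abstract. The `plant_names` dictionary is a toy example (maize's identifier 4577 appears in the annotation guidelines below; the other identifier is a placeholder).

```python
def find_plant_mentions(text, plant_names, max_len=5):
    """Greedy longest-match dictionary lookup: scan the text token by
    token and report (start_offset, end_offset, taxonomy_id) spans.
    `plant_names` maps lowercased names to NCBI taxonomy identifiers."""
    tokens = text.split()
    # Precompute character offsets of each token in the original text.
    offsets, pos = [], 0
    for tok in tokens:
        start = text.index(tok, pos)
        offsets.append((start, start + len(tok)))
        pos = start + len(tok)
    mentions, i = [], 0
    while i < len(tokens):
        for n in range(min(max_len, len(tokens) - i), 0, -1):
            phrase = " ".join(tokens[i:i + n]).lower()
            if phrase in plant_names:
                mentions.append((offsets[i][0], offsets[i + n - 1][1],
                                 plant_names[phrase]))
                i += n
                break
        else:
            i += 1
    return mentions

# Toy usage; "000000" is a placeholder identifier, not a real taxonomy ID.
abstract = "Extracts of maize and camphor tree were analyzed."
plant_names = {"maize": "4577", "camphor tree": "000000"}
print(find_plant_mentions(abstract, plant_names))
# [(12, 17, '4577'), (22, 34, '000000')]
```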

Curator guidelines
The guidelines directed curators to annotate all strings that can be identified as plant names. We manually annotated all plant mentions in the 208 abstracts. For each entity occurrence, we annotated its text span and assigned a corresponding concept identifier from NCBI taxonomy [3]. Although the main focus was plant mentions, mentions at various taxonomic ranks (e.g., family, tribe, genus, species) were also considered. The guidelines for plant mentions consist of the following rules:
• NCBI taxonomy identifiers [3] are used as plant identifiers.
• If a plant mention has more than one identifier, annotators should annotate all identifiers, separating them with a "|" symbol (e.g., maize 4577|381124).
• When an annotator cannot find a proper identifier for a plant mention in the taxonomy dictionary, the annotator should search for other synonyms using the Google search engine.
• When an annotator cannot find a proper identifier even with Google, the annotator should annotate the plant's lower-level botanical concepts (from family down to the lowest rank available) using Wikipedia (e.g., Gynura cusimbua {Family-Tribe-Genus: Asteraceae-Senecioneae-Gynura Cass.} [4210-102812-109564]).
• When an annotator still cannot find a proper identifier, the plant identifier should be annotated as "NA."

Inter-annotator agreement (IAA)
For plant entity annotation, we recruited two curators with previous annotation experience. We used the Jaccard coefficient to measure an IAA score that represents the consistency of the annotations. Let $A_1$ and $A_2$ denote the sets of annotations produced by the first and second annotators, respectively. Annotated mentions having the same PMID, start and end offsets, and taxonomy identifiers were counted as agreements, and all others were counted as disagreements. We measured both mention-level and normalization-level agreement. For example, if both $A_1$ and $A_2$ contain "rosemary," it is counted as a mention-level agreement. If $A_1$ contains "Silkbay and camphor tree" while $A_2$ separately contains "Silkbay" and "camphor tree," this case is counted as a mention-level disagreement. For the normalization-level measurement, we applied the same approach to the identifiers. The Jaccard index $Jac_{A_1,A_2}$ is then computed by counting the number of agreements as follows:

$$Jac_{A_1,A_2} = \frac{|A_1 \cap A_2|}{|A_1 \cup A_2|}$$
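The sketch below shows this computation under the stated definition of agreement. It is an illustration, not the authors' released code; annotations are represented as (PMID, start, end, identifier) tuples, and the example data are hypothetical.

```python
def jaccard(a1, a2):
    """Jaccard index |A1 ∩ A2| / |A1 ∪ A2| over two sets of annotations."""
    a1, a2 = set(a1), set(a2)
    union = a1 | a2
    return len(a1 & a2) / len(union) if union else 1.0

# Mention-level: annotations agree when PMID and character span match.
# Normalization-level: the taxonomy identifier must also match.
ann1 = {("PMID1", 10, 18, "4577"), ("PMID1", 25, 33, "NA")}
ann2 = {("PMID1", 10, 18, "4577"), ("PMID1", 25, 33, "381124")}

mention_iaa = jaccard({a[:3] for a in ann1}, {a[:3] for a in ann2})
normalization_iaa = jaccard(ann1, ann2)
print(mention_iaa)        # 1.0 -> both spans agree
print(normalization_iaa)  # 0.333... -> one identifier differs
```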

IAA analysis
The mention-level IAA score was obtained by considering only the correct identification of plant mentions, regardless of their taxonomy identifiers. For the normalization-level IAA score, both the plant mentions and their identifiers had to be correctly annotated. Table 1 shows the overall IAA scores for the annotations.

Disagreement and harmonization
We compared annotations from two curators to harmonize the annotated results.
For the mention-level annotation, 61 of the 3,997 candidate mentions were in disagreement between the annotators, falling into the following cases: one annotator annotated a span that is not a plant mention (6 false positives, 9.8%), one annotator missed an actual plant mention (17 false negatives, 27.9%), or one annotator annotated only part of a mention (38 partial annotations, 62.3%).
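As an illustration of this categorization, the sketch below classifies each mention-level disagreement as a false positive, a false negative, or a partial annotation by comparing span overlap between the two annotators. It is a hypothetical helper, not part of the published pipeline; one annotator's output is treated as the reference purely to label the disagreement types.

```python
def categorize_disagreements(ref, other):
    """Classify mention-level disagreements between two annotators.
    Each annotation is (pmid, start, end). Relative to `ref`:
    spans in `other` overlapping a ref span without matching it exactly
    are 'partial'; non-overlapping extra spans are 'false positives';
    ref spans untouched by `other` are 'false negatives'."""
    def overlaps(a, b):
        return a[0] == b[0] and a[1] < b[2] and b[1] < a[2]

    agreed = ref & other
    partial = {o for o in other - agreed
               if any(overlaps(o, r) for r in ref - agreed)}
    false_pos = (other - agreed) - partial
    matched_refs = {r for r in ref - agreed
                    if any(overlaps(r, o) for o in partial)}
    false_neg = (ref - agreed) - matched_refs
    return false_pos, false_neg, partial

ann1 = {("PMID1", 0, 7), ("PMID1", 20, 44)}  # e.g. "Silkbay and camphor tree"
ann2 = {("PMID1", 0, 7), ("PMID1", 20, 27), ("PMID1", 32, 44)}
fp, fn, part = categorize_disagreements(ann1, ann2)
print(len(fp), len(fn), len(part))  # 0 false positives, 0 false negatives, 2 partial
```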
We introduce an example of the discrepant cases and the resolution of the disagreements.
• Example 1. One annotator assigned "Silkbay and camphor tree" as a single plant mention, while the other annotated "Silkbay" and "camphor tree" as two separate plant names. Because "Silkbay" and "camphor tree" are different plants, the phrase was resolved as two separate plant mentions: "Silkbay" and "camphor tree."

For the normalization-level annotation, 441 annotations were in disagreement. The disagreements fell largely into three cases: 26 unmapped identifiers (5.9%), 261 partially matched identifiers (59.2%), and 154 mismatched identifiers (34.9%). The two annotators resolved the disagreements by discussion and reached agreement on the entire plant corpus. Table 2 shows the details of the resulting plant corpus.