A kernel-based approach for detecting outliers of high-dimensional biological data
 Jung Hun Oh^{1} and
 Jean Gao^{1}
https://doi.org/10.1186/1471-2105-10-S4-S7
© Oh and Gao; licensee BioMed Central Ltd. 2009
Published: 29 April 2009
Abstract
Background
In many cases, biomedical data sets contain outliers that hinder reliable knowledge discovery. Analyzing such data without removing the outliers can lead to wrong results and misleading information.
Results
We propose a new outlier detection method based on Kullback-Leibler (KL) divergence. The original concept of KL divergence was designed as a measure of the distance between two distributions. Stemming from that, we extend it to biological sample outlier detection by forming sample sets composed of nearest neighbors. The KL divergence is defined between two sample sets with and without the test sample. To handle the nonlinearity of the sample distribution, the original data are mapped into a higher-dimensional feature space. We address the singularity problem caused by the small sample size during the KL divergence calculation, and apply kernel functions to avoid direct use of the mapping functions. The performance of the proposed method is demonstrated on a synthetic data set, two public microarray data sets, and a mass spectrometry data set for liver cancer study. Comparative studies with the Mahalanobis distance-based method and the one-class support vector machine (SVM) show that the proposed method performs better in finding outliers.
Conclusion
Our idea was derived from the Markov blanket algorithm, a feature selection method based on KL divergence. That is, while the Markov blanket algorithm removes redundant and irrelevant features, our proposed method detects outliers. Compared to other algorithms, the proposed method shows better or comparable performance for small-sample, high-dimensional biological data. This indicates that the proposed method can be used to detect outliers in biological data sets.
Background
Outlier detection is an active research area with many applications such as network intrusion detection [1], fraud detection [2], and biomedical data analysis [3]. In particular, outliers caused by instrument or human error can severely degrade biomedical data analyses such as biomarker selection and disease diagnosis. Therefore, it is imperative to remove outliers during preprocessing, prior to the analysis, to prevent wrong results. Data mining techniques are widely used to separate such anomalous observations from normal ones.
Outlier detection has been studied using a diversity of approaches. Statistical methods often view objects located relatively far from the center of the data distribution as outliers, and several distance measures have been used for this purpose. The Mahalanobis distance is the most commonly used multivariate outlier criterion. Based on Akaike's Information Criterion (AIC), Kadota et al. developed a method for detecting outliers that is free from a significance level [4]. Knorr and Ng introduced a distance-based approach in which outliers are those objects for which there are fewer than k points within a given threshold in the input data set [5, 6]. Angiulli et al. proposed a distance-based outlier detection method that finds the top outliers and provides a subset of the data set, called the outlier detection solving set, which can be used to predict whether new unseen objects are outliers [7]. Distance-based strategies are advantageous since no model learning is required. As an alternative, clustering algorithms can be used for outlier detection, where objects that do not belong to any cluster are regarded as outliers. Wang and Chiang proposed an effective cluster validity measure with outlier detection and cluster merging strategies for support vector clustering (SVC) [8]. The validity measure is capable of finding suitable values for the kernel parameter and soft margin constant; based on these parameters, the SVC algorithm can identify the ideal cluster number and increase robustness to outliers and noise. Schölkopf proposed a method of adapting the support vector machine (SVM) to one-class classification problems [9]. Manevitz and Yousef presented two versions of the one-class SVM, both of which can identify outliers: Schölkopf's method and their own suggestion [10]. In such methods, after mapping the original samples into a feature space using an appropriate kernel function, the origin is treated as the second class.
In the feature space, samples close to the origin or lying on a standard subspace such as an axis are regarded as outliers. Bandyopadhyay and Santra applied a genetic algorithm to the outlier detection problem in lower-dimensional spaces of a given data set, dividing these spaces into grids and efficiently computing the sparsity factor of each grid [11]. Aggarwal and Yu studied the problem of outlier detection for high-dimensional data, which works by finding lower-dimensional projections [12]. Malossini et al. proposed two methods for detecting potential labeling errors: the classification-stability algorithm (CL-stability) and the leave-one-out-error-sensitivity algorithm (LOOE-sensitivity) [13]. In CL-stability, the stability of the classification of a sample is evaluated under a small perturbation of the other samples. LOOE-sensitivity was derived from the fact that if a sample is mislabeled, flipping its label should improve the prediction power.
In this paper, we propose a new outlier detection method based on KL divergence [14]. Due to the possible nonlinearity of data structure, we deal with this problem in a higher feature space rather than the original space. Several issues arise after data mapping such as singularity because of small sample size versus high feature dimension. We address the computational issues and show the effectiveness of the proposed approach, KL divergence for outlier detection (KLOD).
Methods
Markov blanket
Δ(F_{ i } | M_{ i }) = Σ_{${f}_{{M}_{i}}$, f_{ i }} P(${f}_{{M}_{i}}$, f_{ i }) D(P(c | ${f}_{{M}_{i}}$, f_{ i }) ∥ P(c | ${f}_{{M}_{i}}$)),

where ${f}_{{M}_{i}}$ and f_{ i } are the feature values of M_{ i } and F_{ i }, respectively, c is the class label, and D(· ∥ ·) represents the cross-entropy (i.e., the Kullback-Leibler divergence). The Δ value is computed for each feature, and the feature with the smallest Δ value is eliminated from the whole feature set. The procedure is repeated with the remaining features until a predefined number of features remains.
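The backward-elimination loop above can be sketched as follows. This is a minimal illustration, not the authors' implementation: the expected cross-entropy Δ is stood in for by a per-feature KL divergence between class-conditional Gaussians, and the names `gaussian_kl_1d` and `backward_eliminate` are illustrative.

```python
import numpy as np

def gaussian_kl_1d(mu1, var1, mu2, var2):
    """KL divergence between two 1-D Gaussians (a simple stand-in
    for the expected cross-entropy Delta of the Markov blanket rule)."""
    return 0.5 * (np.log(var2 / var1) + (var1 + (mu1 - mu2) ** 2) / var2 - 1.0)

def backward_eliminate(X, y, n_keep):
    """Repeatedly drop the feature with the smallest Delta, i.e. the
    feature whose class-conditional distributions differ the least."""
    keep = list(range(X.shape[1]))
    while len(keep) > n_keep:
        deltas = []
        for j in keep:
            a, b = X[y == 0, j], X[y == 1, j]
            deltas.append(gaussian_kl_1d(a.mean(), a.var() + 1e-9,
                                         b.mean(), b.var() + 1e-9))
        # eliminate the least informative remaining feature
        keep.remove(keep[int(np.argmin(deltas))])
    return keep
```

With a discriminative feature and a pure-noise feature, the loop retains the discriminative one.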
Kullback-Leibler (KL) divergence
Concept of KL divergence for outlier detection (KLOD)
Given a data set with a nonlinear structure, modeling it linearly would cause our strategy to fail; we therefore focus on modeling the nonlinearity. Accordingly, with a mapping function ϕ, the original space is mapped into a higher-dimensional feature space. Let ${S}_{1}^{\Phi}$ and ${S}_{2}^{\Phi}$ denote two sample sets in the feature space, for which we compute the dissimilarity D(${S}_{1}^{\Phi}$ ∥ ${S}_{2}^{\Phi}$) between ${S}_{1}^{\Phi}$ and ${S}_{2}^{\Phi}$. D(${S}_{1}^{\Phi}$ ∥ ${S}_{2}^{\Phi}$) is calculated for each sample, and the sample with the largest D(${S}_{1}^{\Phi}$ ∥ ${S}_{2}^{\Phi}$) is regarded as an outlier.
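The per-sample scoring scheme can be sketched directly in the input space, before any kernelization: S1 contains the test sample plus its k nearest neighbours, S2 the neighbours alone, and both sets are modeled as Gaussians. This is a simplified sketch under those assumptions; the paper's actual method computes the same divergence in the kernel-induced feature space.

```python
import numpy as np

def gauss_kl(X1, X2, reg=1e-6):
    """Closed-form KL divergence between Gaussians fitted to two
    sample sets (reg is a small ridge added for numerical stability)."""
    d = X1.shape[1]
    m1, m2 = X1.mean(0), X2.mean(0)
    C1 = np.cov(X1.T) + reg * np.eye(d)
    C2 = np.cov(X2.T) + reg * np.eye(d)
    C2i = np.linalg.inv(C2)
    diff = m2 - m1
    return 0.5 * (np.trace(C2i @ C1) + diff @ C2i @ diff - d
                  + np.log(np.linalg.det(C2) / np.linalg.det(C1)))

def klod_scores(X, k=5):
    """Score each sample by D(S1 || S2): S1 is the sample plus its
    k nearest neighbours, S2 the neighbours alone."""
    n = X.shape[0]
    scores = np.empty(n)
    for i in range(n):
        dist = np.linalg.norm(X - X[i], axis=1)
        nbrs = np.argsort(dist)[1:k + 1]   # skip the sample itself
        S2 = X[nbrs]
        S1 = np.vstack([X[i], S2])
        scores[i] = gauss_kl(S1, S2)
    return scores
```

An isolated point far from a tight cluster receives by far the largest score, matching the outlier criterion above.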
Kernel function
Suppose that {x_{1}, x_{2}, ⋯, x_{ n }} are the given samples in the original space. After mapping the samples into a higher-dimensional feature space by a nonlinear mapping function ϕ, the samples in the feature space are written as Φ_{ m×n } = [ϕ(x_{1}), ϕ(x_{2}), ⋯, ϕ(x_{ n })], where m is the number of features. Denote K as follows:
K = Φ^{ T }Φ.
More generally, K_{ ij } = Φ_{ i }^{ T }Φ_{ j }, where if i ≠ j, Φ_{ i } and Φ_{ j } are different sample sets in the feature space, and if i = j, K_{ ij } is equivalent to the definition of K above. In practice, the feature space and the mapping function may not be explicitly known. However, once the kernel function is known, we can easily deal with the nonlinear mapping problem by replacing the mapping functions with kernel functions.
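For instance, with an RBF kernel the Gram matrix K = Φ^{T}Φ can be computed directly from the input samples, without ever forming ϕ explicitly. A minimal sketch (the kernel choice and the `gamma` value are illustrative):

```python
import numpy as np

def rbf_gram(X1, X2, gamma=0.5):
    """Gram matrix K_ij = phi(x_i)^T phi(x_j) for an RBF kernel,
    computed from pairwise squared distances only."""
    sq = (np.sum(X1 ** 2, 1)[:, None] + np.sum(X2 ** 2, 1)[None, :]
          - 2 * X1 @ X2.T)
    return np.exp(-gamma * sq)
```

The resulting matrix is symmetric, has unit diagonal, and is positive semidefinite, as any valid Gram matrix must be.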
Singularity problem
where B = JM^{-1}J^{ T } and M = ρI_{ n } + W^{ T }W = ρI_{ n } + J^{ T }Φ^{ T }ΦJ = ρI_{ n } + J^{ T }KJ.
Definition (Woodbury formula): Let A be a square r × r invertible matrix, and let U and V be two r × k matrices with k ≤ r. Assume that the k × k matrix Σ = I_{ k } + βV^{ T }A^{-1}U, in which I_{ k } denotes the k × k identity matrix and β is an arbitrary scalar, is invertible. Then

(A + βUV^{ T })^{-1} = A^{-1} − βA^{-1}UΣ^{-1}V^{ T }A^{-1}.
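The identity is easy to verify numerically; the following sketch checks it for random matrices (the dimensions, seed, and β are chosen arbitrarily for the check):

```python
import numpy as np

rng = np.random.default_rng(1)
r, k, beta = 6, 3, 2.5
A = rng.normal(size=(r, r)) + r * np.eye(r)   # diagonally shifted, so invertible
U = rng.normal(size=(r, k))
V = rng.normal(size=(r, k))

Ai = np.linalg.inv(A)
Sigma = np.eye(k) + beta * V.T @ Ai @ U        # the k x k matrix in the formula

# left: direct inversion of the rank-k update; right: Woodbury formula
left = np.linalg.inv(A + beta * U @ V.T)
right = Ai - beta * Ai @ U @ np.linalg.inv(Sigma) @ V.T @ Ai
```

The point of the formula here is that only the small k × k matrix Σ must be inverted, which sidesteps the singular, high-dimensional inversion.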
Calculation of KL divergence
It should be noted that, as shown in Eq. (9), Eq. (12) and Eq. (13), μ_{ i }, C_{ i } and ${C}_{i}^{-1}$ (i = 1 or 2) are expressed in terms of mapping functions rather than kernel functions.
With this substitution, all mapping functions in the first term are replaced with kernel functions. Before dealing with the second term, we introduce the following three properties of the determinant that are essential in its calculation.
Properties of determinant
 (a)
If A is an r-by-r matrix and d is a scalar, det(dA) = det(dI_{ r }A) = d^{ r }det(A).
 (b)
If A and B are k-by-r matrices, det(I_{ k } + AB^{ T }) = det(I_{ r } + B^{ T }A).
 (c)
If A is invertible, det(A^{-1}) = 1/det(A).
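All three properties can be checked numerically on random matrices; a minimal sketch (dimensions and the scalar d are arbitrary):

```python
import numpy as np

rng = np.random.default_rng(3)
r, k, d = 5, 3, 2.0
A = rng.normal(size=(r, r))
Ak = rng.normal(size=(k, r))
Bk = rng.normal(size=(k, r))

# (a) scaling an r x r matrix by d scales the determinant by d^r
prop_a = np.isclose(np.linalg.det(d * A), d ** r * np.linalg.det(A))

# (b) Sylvester's identity: det(I_k + A B^T) = det(I_r + B^T A)
prop_b = np.isclose(np.linalg.det(np.eye(k) + Ak @ Bk.T),
                    np.linalg.det(np.eye(r) + Bk.T @ Ak))

# (c) det(A^{-1}) = 1 / det(A), on a diagonally shifted (invertible) matrix
As = A + r * np.eye(r)
prop_c = np.isclose(np.linalg.det(np.linalg.inv(As)),
                    1.0 / np.linalg.det(As))
```

Property (b) is the key one for the derivation: it lets a determinant over the huge feature dimension be traded for one over the small sample dimension.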
Note that the size of ${S}_{1}^{\Phi}$ is one larger than that of ${S}_{2}^{\Phi}$. If the size of ${S}_{2}^{\Phi}$ is k, the size of ${S}_{1}^{\Phi}$ becomes k + 1.
In this way, we substitute all mapping functions in the three terms of the KL divergence with kernel functions, so that the KL divergence between two sample sets can be calculated in the feature space.
Results and discussion
The Mahalanobis distance of a sample x is defined as d(x) = ((x − μ)^{ T }Σ^{-1}(x − μ))^{1/2}, where Σ and μ are the sample covariance matrix and sample mean vector, respectively. Samples with a large Mahalanobis distance are regarded as outliers.
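A minimal sketch of this baseline criterion (the small ridge `reg` is an assumption added here for numerical stability; it is not part of the definition):

```python
import numpy as np

def mahalanobis_scores(X, reg=1e-6):
    """Squared Mahalanobis distance of every sample from the sample mean."""
    mu = X.mean(0)
    Sigma = np.cov(X.T) + reg * np.eye(X.shape[1])
    Si = np.linalg.inv(Sigma)
    diff = X - mu
    # per-row quadratic form diff_i^T Sigma^{-1} diff_i
    return np.einsum('ij,jk,ik->i', diff, Si, diff)
```

Ranking samples by this score and flagging the largest reproduces the comparison method used in the experiments.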
Results on synthetic data
Performance evaluation after outlier removal
Before introducing outlier removal for real biomedical data, we first describe the performance evaluation procedure we will use, PCA (principal component analysis) followed by LDA (linear discriminant analysis). LDA maps the data into a very low dimensionality of c − 1, where c is the number of classes. In the reduced space, a simple matching procedure is used for classification. However, to guarantee a nondegenerate result from LDA, the dimensionality of the data must first be reduced to at most n − c, where n is the number of samples. PCA is often used in the analysis of high-dimensional data sets; it transforms the original space into a lower-dimensional space with little or no information loss while maximally preserving variance.
Lilien et al. used the PCA+LDA method in the analysis of mass spectrometry data sets [18]. In this framework, the PCA dimensionality-reduced samples are projected by LDA onto a hyperplane so as to maximize the between-class variance and minimize the within-class variance of the projected samples. To evaluate the performance after outlier removal in our experiments, we employed the PCA+LDA strategy.
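For the two-class case (c = 2, so LDA reduces to one dimension), the pipeline can be sketched as follows. This is a simplified stand-in, not Lilien et al.'s implementation: the Fisher direction, the ridge on the within-class scatter, and the nearest-class-mean matching are simplifying assumptions.

```python
import numpy as np

def pca_lda_project(X, y, n_pc):
    """PCA to n_pc components, then two-class Fisher LDA in the
    reduced space; returns the 1-D (c - 1 = 1) projections."""
    Xc = X - X.mean(0)
    _, _, Vt = np.linalg.svd(Xc, full_matrices=False)
    Z = Xc @ Vt[:n_pc].T                            # PCA scores
    m0, m1 = Z[y == 0].mean(0), Z[y == 1].mean(0)
    # within-class scatter, ridged so the solve is well-posed
    Sw = np.cov(Z[y == 0].T) + np.cov(Z[y == 1].T) + 1e-6 * np.eye(n_pc)
    w = np.linalg.solve(Sw, m1 - m0)                # Fisher direction
    return Z @ w
```

Classification then assigns each projected sample to the class whose projected mean is nearer, the "simple matching procedure" mentioned above.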
Results on gene expression data sets
In this study, two public microarray data sets were used.

The leukemia data set covers two types of acute leukemia: 47 acute lymphoblastic leukemia (ALL) samples and 25 acute myeloid leukemia (AML) samples with 7,129 genes. The data set is publicly available at http://www.broad.mit.edu/cgi-bin/cancer/datasets.cgi [19].

The colon data set contains 40 tumor and 22 normal colon tissues with 2,000 genes. The data set is available at http://microarray.princeton.edu/oncology/ [20].
In the experiments with the two microarray data sets, specificity, sensitivity, and accuracy were measured using the PCA+LDA classification strategy after removing outliers detected by KLOD with t = 10, the Mahalanobis distance-based method, and the one-class SVM. We define specificity as the ratio of correctly classified negatives to the actual number of negatives; for the leukemia and colon data sets, the negatives are the ALL and normal samples, respectively. For KLOD and the Mahalanobis distance-based method, performance was measured after removing the sample with the largest distance from each class at each iteration. If the prediction rate (specificity or sensitivity) decreases by more than a threshold γ compared to the prediction rate before the outlier removal, we stop the outlier detection in the corresponding class; in this study, we used γ = 0.5%. In contrast, for the one-class SVM, performance was assessed after excluding all samples regarded as outliers in each class.
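The remove-and-reevaluate loop can be sketched as below. This is an illustrative reading of the stopping rule (comparing each step to the previous iteration's rate); `scores` and `evaluate` are caller-supplied stand-ins for the per-class distance ranking and the PCA+LDA prediction rate.

```python
def iterative_outlier_removal(scores, evaluate, gamma=0.005):
    """Remove the highest-scoring candidate one at a time; stop once
    the prediction rate drops by more than gamma.

    scores   -- outlier score per sample (higher = more suspicious)
    evaluate -- callable mapping a list of kept indices to a rate in [0, 1]
    """
    kept = list(range(len(scores)))
    order = sorted(kept, key=lambda i: scores[i], reverse=True)
    rate = evaluate(kept)
    removed = []
    for i in order:
        trial = [j for j in kept if j != i]
        new_rate = evaluate(trial)
        if new_rate < rate - gamma:       # rate dropped too far: stop here
            break
        kept, rate = trial, new_rate
        removed.append(i)
    return removed, kept
```

In the actual experiments this loop is run once per class, with γ = 0.5% as stated above.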
Performance after outlier detection in leukemia and colon data sets.

| Data set | Measurement | Without outlier removal | KLOD | Mahalanobis | One-class SVM |
| Leukemia | Specificity (%) | 96.17 | 99.00 | 97.37 | 100 |
|          | Sensitivity (%) | 95.60 | 99.44 | 100 | 95.24 |
|          | Accuracy (%) | 95.97 | 99.13 | 98.28 | 98.33 |
|          | No. of outliers (ALL) | – | 2 | 9 | 8 |
|          | No. of outliers (AML) | – | 7 | 5 | 4 |
| Colon    | Specificity (%) | 82.50 | 85.95 | 83.25 | 85.26 |
|          | Sensitivity (%) | 88.25 | 94.43 | 85.90 | 94.17 |
|          | Accuracy (%) | 86.21 | 91.25 | 85.00 | 91.09 |
|          | No. of outliers (normal) | – | 1 | 2 | 3 |
|          | No. of outliers (tumor) | – | 5 | 1 | 4 |
The Mahalanobis distance-based method and the one-class SVM found 14 and 12 outliers, respectively. For the colon data set, KLOD found 6 outliers (1 normal and 5 tumor samples), with 85.95% specificity, 94.43% sensitivity, and 91.25% accuracy. It should be noted that the performance of the Mahalanobis distance-based method was degraded in terms of sensitivity and accuracy compared to the performance obtained using all samples without outlier removal, suggesting that the outliers it detected are unlikely to be real ones.
Results on mass spectrometry data
Performance after outlier detection in liver cancer mass spectrometry data.

| Measurement | Without outlier removal | KLOD | Mahalanobis | One-class SVM |
| Specificity (%) | 93.63 | 94.69 | 94.29 | 94.35 |
| Sensitivity (%) | 92.82 | 93.95 | 93.51 | 93.89 |
| Accuracy (%) | 93.14 | 94.23 | 93.82 | 94.07 |
| No. of outliers (Cirrhosis) | – | 3 | 2 | 5 |
| No. of outliers (HCC) | – | 2 | 4 | 6 |
Conclusion
We proposed a new outlier detection method based on KL divergence, called KLOD. Our idea was derived from the Markov blanket algorithm, in which redundant and irrelevant features are removed based on KL divergence. We tackled the outlier detection problem in a higher-dimensional feature space after mapping the original data. The mapping raises several issues; in particular, we showed how to calculate the KL divergence in the feature space using properties of the matrix determinant and trace. To assess the usefulness of KLOD, we used a synthetic data set and real-life data sets. Compared to the Mahalanobis distance-based method and the one-class SVM, KLOD achieved higher or comparable performance.
Declarations
Acknowledgements
This work was supported in part by NSF under grants IIS-0612152 and IIS-0612214.
This article has been published as part of BMC Bioinformatics Volume 10 Supplement 4, 2009: Proceedings of the IEEE International Conference on Bioinformatics and Biomedicine (BIBM) 2008. The full contents of the supplement are available online at http://www.biomedcentral.com/1471-2105/10?issue=S4.
Authors’ Affiliations
References
 1. Lee W, Stolfo S, Mok K: Mining audit data to build intrusion detection models. Proc Int Conf Knowledge Discovery and Data Mining (KDD 1998). 1998, 66-72.
 2. Fawcett T, Provost F: Adaptive fraud detection. Data Mining and Knowledge Discovery. 1997, 1: 291-316.
 3. Ressom H, Varghese R, Drake S, Hortin G, Abdel-Hamid M: Peak selection from MALDI-TOF mass spectra using ant colony optimization. Bioinformatics. 2007, 23: 619-626.
 4. Kadota K, Tominaga D, Akiyama Y, Takahashi K: Detecting outlying samples in microarray data: A critical assessment of the effect of outliers on sample classification. Chem-Bio Informatics Journal. 2003, 3: 30-45.
 5. Knorr E, Ng R: Algorithms for mining distance-based outliers in large datasets. Proc Int Conf Very Large Databases (VLDB 1998). 1998, 392-403.
 6. Knorr E, Ng R, Tucakov V: Distance-based outliers: algorithms and applications. Proc Int Conf Very Large Databases (VLDB 2000). 2000, 237-253.
 7. Angiulli F, Basta S, Pizzuti C: Distance-based detection and prediction of outliers. IEEE Trans on Knowledge and Data Engineering. 2006, 18: 145-160.
 8. Wang JS, Chiang JC: A cluster validity measure with outlier detection for support vector clustering. IEEE Trans on Systems, Man, and Cybernetics, Part B. 2008, 38: 78-89.
 9. Schölkopf B, Platt J, Shawe-Taylor J, Smola A, Williamson R: Estimating the support of a high-dimensional distribution. Neural Computation. 2001, 13: 1443-1471.
 10. Manevitz L, Yousef M: One-class SVMs for document classification. Journal of Machine Learning Research. 2001, 2: 139-154.
 11. Bandyopadhyay S, Santra S: A genetic approach for efficient outlier detection in projected space. Pattern Recognition. 2008, 41: 1338-1349.
 12. Aggarwal C, Yu P: Outlier detection for high dimensional data. Proc ACM SIGMOD. 2001, 37-46.
 13. Malossini A, Blanzieri E, Ng R: Detecting potential labeling errors in microarrays by data perturbation. Bioinformatics. 2006, 22: 2114-2121.
 14. Oh J, Gao J, Rosenblatt K: Biological data outlier detection based on Kullback-Leibler divergence. Proc IEEE Int Conf on Bioinformatics and Biomedicine (BIBM 2008). 2008, 249-254.
 15. Koller D, Sahami M: Toward optimal feature selection. Proc Int Conf on Machine Learning. 1996.
 16. Tumminello M, Lillo F, Mantegna R: Kullback-Leibler distance as a measure of the information filtered from multivariate data. Physical Review E. 2007, 76: 25667.
 17. Zhou S, Chellappa R: From sample similarity to ensemble similarity: probabilistic distance measures in reproducing kernel Hilbert space. IEEE Trans on Pattern Analysis and Machine Intelligence. 2006, 28: 917-929.
 18. Lilien R, Farid H, Donald B: Probabilistic disease classification of expression-dependent proteomic data from mass spectrometry of human serum. Journal of Computational Biology. 2003, 10: 925-946.
 19. Golub T, Slonim D, Tamayo P, Huard C, Gaasenbeek M: Molecular classification of cancer: class discovery and class prediction by gene expression monitoring. Science. 1999, 286: 531-537.
 20. Alon U, Barkai N, Notterman D, Gish K, Ybarra S: Broad patterns of gene expression revealed by clustering analysis of tumor and normal colon tissues probed by oligonucleotide arrays. Proc Natl Acad Sci U S A. 1999, 96: 6745-6750.
Copyright
This article is published under license to BioMed Central Ltd. This is an open access article distributed under the terms of the Creative Commons Attribution License (http://creativecommons.org/licenses/by/2.0), which permits unrestricted use, distribution, and reproduction in any medium, provided the original work is properly cited.