 Research article
 Open Access
Feature selection of gene expression data for Cancer classification using double RBF-kernels
BMC Bioinformatics volume 19, Article number: 396 (2018)
Abstract
Background
Using knowledge-based interpretation to analyze omics data can not only yield essential information regarding various biological processes, but also reflect the current physiological status of cells and tissues. The major challenge in analyzing gene expression data, with its large number of genes and small number of samples, is to extract disease-related information from a massive amount of redundant data and noise. Gene selection, which eliminates redundant and irrelevant genes, has been a key step in addressing this problem.
Results
The modified method was tested on four benchmark datasets with either two-class or multi-class phenotypes, outperforming previous methods with relatively higher accuracy, higher true positive and true negative rates, and reduced runtime.
Conclusions
This paper proposes an effective feature selection method that combines double RBF-kernels with weighted analysis to extract feature genes from gene expression data, exploiting their nonlinear mapping ability.
Background
Gene expression data can reflect gene activities and physiological status in a biological system at the transcriptome level. Gene expression data typically comprise small samples with high dimensionality and noise [1]. A single gene chip or next-generation sequencing run can detect tens of thousands of genes in one sample, but for a given disease or biological process only a few groups of genes are relevant [2, 3]. Moreover, testing these redundant genes not only demands a tremendous search space but also degrades data-mining performance through overfitting. Thus, extracting disease-mediated genes from the original gene expression data has been a major problem in medicine. Furthermore, identifying appropriate disease-related genes enables the design of relevant therapeutic treatments [4, 5].
So far, several feature selection methods have been proposed to extract disease-mediated genes [6,7,8]. Zhou et al. [3] proposed a new measure, the LS bound measure, to address numerous redundant genes. Several statistical tests (e.g., the χ² statistic) and classic classifiers (e.g., the Support Vector Machine) have been used in feature selection [9]. In general, these methods can be divided into three categories: filter, wrapper, and embedded methods [9, 10]. The filter method is based on the structural information of the dataset itself, independent of the classifier, and selects a feature subset from the original dataset using an evaluation rule based on statistical methods [11]. The wrapper method [12] evaluates the significance of feature subsets through the performance of a classifier, while the embedded method [13] combines the advantages of filter and wrapper methods, selecting feature genes within a predetermined classification algorithm [14, 15]. Since filter methods are independent of the classifier, their computational complexity is relatively low, making them suitable for massive data processing [16]. Wrapper methods can reach higher accuracy but also carry a higher risk of overfitting.
Kernel methods have become one of the central techniques in machine learning in recent years and have been widely applied to classification and regression. A kernel method can map the data (nonlinearly) to a higher-dimensional space [17]. Hence, by using a kernel method, the dimension of observed data such as gene expression data can be significantly reduced, that is, irrelevant genes can be filtered out, revealing the hidden inherent structure of the biological system [18]. Characteristically, the choice of kernel has a great impact on the learning and predictive performance of machine learning methods [5, 19].
Although a great number of kernels exist and it is intricate to explain their distinctive characteristics, kernels used in feature extraction can be divided into two classes, global and local, exemplified by the polynomial and radial basis function (RBF) kernels, respectively. The influence of the kernel type on interpolation and extrapolation capability has been investigated: with global kernels, data points far from the test point have a profound effect on kernel values, while with local kernels only points close to the test point do. The polynomial kernel shows good extrapolation at low degrees but requires high degrees for good interpolation, while the RBF-kernel interpolates well but fails to provide longer-range extrapolation [17, 20].
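The local/global distinction can be made concrete with a small numerical sketch (Python purely for illustration; the points and parameter values are hypothetical):

```python
import numpy as np

def poly_kernel(x, y, degree=2, c=1.0):
    # Global kernel: its value grows with the dot product, so points
    # far from the test point still influence the result strongly.
    return (np.dot(x, y) + c) ** degree

def rbf_kernel(x, y, gamma=1.0):
    # Local kernel: its value decays with squared distance, so only
    # points near the test point contribute appreciably.
    return np.exp(-gamma * np.sum((x - y) ** 2))

x = np.array([1.0, 0.0])
near = np.array([1.1, 0.0])
far = np.array([5.0, 0.0])

# RBF: the near point dominates; the far point is almost ignored.
print(rbf_kernel(x, near), rbf_kernel(x, far))   # ~0.99 vs ~1e-7
# Polynomial: the far point still yields a large kernel value.
print(poly_kernel(x, near), poly_kernel(x, far))  # 4.41 vs 36.0
```
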
KBCGS [20] is a filter method based on the RBF-kernel that uses weighted gene measures in clustering. This supervised learning algorithm applies a global adaptive distance to avoid falling into local minima. The RBF kernel function has proven useful for achieving satisfactory global classification performance in gene selection, yet exploring this problem in depth requires further research. A typical mixture kernel is a convex combination of basis kernels: based on the characteristics of the original kernel functions, a linear fusion of a local kernel and a global kernel constitutes a new mixed kernel function. Several mixture kernels have been introduced [21,22,23] to overcome the limitations of single kernels, enhancing the interpretability of the decision function and improving performance. Phienthrakul et al. proposed multi-scale RBF kernels in Support Vector Machines and demonstrated that they outperform a single RBF kernel on benchmarks [23].
In this paper, we modified KBCGS based on double RBF-kernels and applied the proposed method to feature selection of gene expression data. We introduced the double RBF-kernel to both SVM and KNN classifiers and evaluated their performance in gene selection. This mixture describes varying degrees of local and global kernel characteristics simply by choosing different values of γ_{1} and γ_{2}. We combined the double RBF-kernel with a weighted method to overcome the limitations of single and local kernels. As an application, we provided a feature extraction method using this kernel and applied it to several benchmark datasets: diffuse large B-cell lymphoma (DLBCL) [24], colon [2], lymphoma [1], gastric cancer [25], and mixed tumors [26]. The results demonstrate that this method allows better discrimination in gene selection and is superior in accuracy and efficiency to traditional gene selection methods.
This paper first provides a brief overview of gene selection methods for expression data analysis. It then presents the improved KBCGS method, called DKBCGS (Double-kernel KBCGS), in which two classification methods were used for the clustering analysis, and compares it with six popular gene selection methods. The last section provides a comprehensive evaluation of the proposed method on four benchmark gene expression datasets.
Methods
Gene expression data with l genes and n samples can be represented by the matrix

\( X={\left({x}_{ij}\right)}_{n\times l} \)

where X_{i} is a row vector representing the expression levels of all genes in sample i, and x_{ij} is the expression level of gene j in sample i.
Cluster center
In this paper, we used the Z-score to normalize the original data. The standard score Z of a gene is

\( Z=\frac{x-\mu }{\sigma } \)

where x is the expression level of the gene in a sample, μ is the mean of the gene across all samples, and σ is its standard deviation across all samples.
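As a minimal sketch (Python for illustration), Z-score normalization of a samples-by-genes matrix, with a guard for constant genes that the text does not discuss:

```python
import numpy as np

def zscore_normalize(X):
    """Z-score each gene (column) across all samples (rows): z = (x - mu) / sigma."""
    mu = X.mean(axis=0)                        # per-gene mean over samples
    sigma = X.std(axis=0)                      # per-gene standard deviation
    sigma = np.where(sigma == 0, 1.0, sigma)   # guard: constant genes map to zero
    return (X - mu) / sigma

X = np.array([[2.0, 10.0],
              [4.0, 10.0],
              [6.0, 10.0]])
Z = zscore_normalize(X)
print(Z[:, 0])   # [-1.2247..., 0.0, 1.2247...]; gene 1 is constant -> all zeros
```
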
The cancer classification was formulated as a supervised learning problem, with the cluster center defined as

\( {v}_{ik}=\frac{1}{\mid {C}_i\mid }{\sum}_{x_j\in {C}_i}{x}_{jk} \)

where i = 1, 2, …, C, j = 1, 2, …, n, k = 1, 2, …, l, and ∣C_{i}∣ is the number of samples contained in class C_{i}. Hence, V_{i} = [v_{i1},…,v_{il}] is the cluster center of class C_{i}.
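The per-class cluster centers are simple class means over the samples, which can be sketched as follows (Python for illustration):

```python
import numpy as np

def cluster_centers(X, y):
    """v_i = mean expression vector of the samples in class C_i."""
    classes = np.unique(y)
    V = np.vstack([X[y == c].mean(axis=0) for c in classes])
    return V, classes

X = np.array([[1.0, 2.0],
              [3.0, 4.0],
              [5.0, 6.0],
              [7.0, 8.0]])
y = np.array([0, 0, 1, 1])
V, classes = cluster_centers(X, y)
print(V)   # [[2. 3.] [6. 7.]]
```
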
Double RBF-kernels
The kernel function acts as a similarity measure between samples in a feature space; a simple form of similarity measure is the dot product between two samples. The most frequently used kernel is the positive definite Gaussian kernel [27]. The classic Gaussian kernel on two samples x and x_{i}, represented as feature vectors in an input space, is defined by

\( K\left(x,{x}_i\right)=\exp \left(-{\gamma}_1{\left\Vert x-{x}_i\right\Vert}^2\right) \)

where γ_{1} > 0 is a free parameter.
It is a positive definite kernel representing local features and can therefore be used as the kernel function to weight genes in a gene selection method. Kernel methods have already been applied in many areas owing to their effectiveness in feature selection and dimensionality reduction [27]. Here, however, the focus is on creating a more general, unified mixture kernel that has the capabilities of both local and global kernels.
This work uses a double RBF-kernel as the similarity measure. The choice of the number of kernels typically depends on the heterogeneity of the dataset: increasing the number of kernels helps improve accuracy but increases the computational cost, so a compromise must be found between multiple-kernel learning and double RBF-kernel learning based on performance and computational complexity. In most cases, two RBF kernels are enough to handle the data with reasonable accuracy and cost. The proposed nonlinear kernel method is therefore based on a combination of two RBF-kernels, which has few limitations when calculating the distance among genes, as follows:
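Since Eq. (5) is not reproduced in this text, the sketch below assumes the double RBF-kernel is an equally weighted sum of a wide (small γ_{1}, near-global) and a narrow (large γ_{2}, local) Gaussian; the published combination may weight the two terms differently. Even in this simple assumed form, the fat tail relative to a single RBF is visible:

```python
import numpy as np

def single_rbf(d2, gamma=1.0):
    # Single Gaussian kernel as a function of squared distance d2.
    return np.exp(-gamma * d2)

def double_rbf(d2, gamma1=0.1, gamma2=1.0):
    # ASSUMED form: average of a wide (gamma1) and a narrow (gamma2)
    # Gaussian, normalized to 1 at d2 = 0. The wide term gives the fat tail.
    return 0.5 * (np.exp(-gamma1 * d2) + np.exp(-gamma2 * d2))

d2 = np.array([0.0, 1.0, 4.0, 16.0])   # squared distances
print(single_rbf(d2))   # decays fast:  [1.  0.368 0.018 1.1e-07]
print(double_rbf(d2))   # fat tail:     [1.  0.636 0.344 0.101]
```
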
To further illustrate Eq. (5), Figs. 1 and 2 plot the mapping of the double RBF-kernel against that of a single RBF-kernel. The figures show how the fat-tailed shape of the mapping changes with γ_{1} and γ_{2}, compared with a single RBF mapping governed by γ_{1} alone. In Fig. 2, as γ_{1} and γ_{2} change, the lower curve varies more gently than the upper one. The double-kernel can therefore fit the data with less influence from outliers, indicating better flexibility than the single-kernel; its fat-tail characteristics give the double RBF-kernel better learning and generalization ability than a single RBF-kernel.
Kernels as measures of similarity
Suppose Φ : X ⟶ F is a nonlinear mapping from the input space X to a higher-dimensional feature space F. Under Φ, the dot product \( {\mathrm{x}}_{\mathrm{k}}^{\mathrm{T}}{\mathrm{x}}_{\mathrm{l}} \) in X is mapped to Φ(x_{k})^{T}Φ(x_{l}) in the new feature space. The key idea in kernel algorithms is that the nonlinear mapping Φ does not need to be specified explicitly, because each Mercer kernel can be expressed as

\( K\left({x}_k,{x}_l\right)=\Phi {\left({x}_k\right)}^T\Phi \left({x}_l\right) \)

which is usually referred to as the kernel trick [22]. The squared Euclidean distance in F then becomes

\( {\left\Vert \Phi \left({x}_k\right)-\Phi \left({x}_l\right)\right\Vert}^2=K\left({x}_k,{x}_k\right)-2K\left({x}_k,{x}_l\right)+K\left({x}_l,{x}_l\right) \)

A dissimilarity function between a sample and a cluster centroid can therefore be defined as:
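The kernel-trick expansion of the feature-space distance can be sketched numerically (Python for illustration; a single RBF stands in for whichever kernel is used):

```python
import numpy as np

def rbf(x, y, gamma=1.0):
    return np.exp(-gamma * np.sum((x - y) ** 2))

def feature_space_dist2(x, v, kernel=rbf):
    """||Phi(x) - Phi(v)||^2 expanded with the kernel trick:
    K(x, x) - 2 K(x, v) + K(v, v)."""
    return kernel(x, x) - 2.0 * kernel(x, v) + kernel(v, v)

x = np.array([0.0, 0.0])
v = np.array([1.0, 1.0])
print(feature_space_dist2(x, x))   # 0.0: identical points
print(feature_space_dist2(x, v))   # 2 - 2*exp(-2), about 1.729
```

For an RBF-type kernel K(x, x) = 1, so the distance reduces to 2(1 - K(x, v)), making the dissimilarity a monotone function of the kernel similarity.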
Gene ranking and selection
The most widely used gene selection methods belong to the so-called filter approach. Filter-based feature ranking methods rank genes independently of any learning algorithm: each feature is weighted according to a particular criterion, and genes are then selected based on their weights.
In this paper, our method DKBCGS is an improved version of the KBCGS method, designed to achieve higher accuracy and faster convergence.
The KBCGS method adopts a global distance, assigning different weights to different genes. The clustering objective function is given by:
where w = (w_{1}, w_{2},...,w_{l}) is the vector of gene weights.
As shown in Eq. (9), the first part is the weighted sum, evaluated by the kernel method, of the dissimilarities between samples and the cluster centers they belong to; it reaches its minimum only when a single gene is completely relevant and all others are irrelevant. The second part is the sum of squared gene weights, which reaches its minimum only when all genes are equally weighted. By combining the two parts, the optimal gene weights are obtained and the feature genes can then be selected.
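Evaluating this two-part objective can be sketched as follows (Python for illustration). The per-gene dissimilarity used here, 1 - exp(-γ(x_k - v_k)²), is an assumption standing in for the exact per-gene form given in the displayed equations:

```python
import numpy as np

def per_gene_dissim(x, v, gamma=1.0):
    # ASSUMED per-gene kernel dissimilarity: 1 - exp(-gamma * (x_k - v_k)^2).
    return 1.0 - np.exp(-gamma * (x - v) ** 2)

def objective(X, y, V, classes, w, delta):
    """J = weighted within-class dissimilarity + delta * ||w||^2."""
    J = 0.0
    for v, c in zip(V, classes):
        for x in X[y == c]:
            J += float(np.dot(w, per_gene_dissim(x, v)))
    return J + delta * float(np.sum(w ** 2))

X = np.array([[0.0, 0.0], [0.0, 2.0], [5.0, 0.0], [5.0, 2.0]])
y = np.array([0, 0, 1, 1])
V = np.array([[0.0, 1.0], [5.0, 1.0]])   # class centers
w = np.array([0.5, 0.5])                  # equal gene weights
print(objective(X, y, V, np.array([0, 1]), w, delta=0.1))   # about 1.314
```
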
To minimize J subject to the constraint in Eq. (10), the method of Lagrange multipliers was applied as follows:
The partial derivative of J(w_{k}, λ) is then given by:
J(w_{k}, λ) reaches its minimum when this partial derivative is zero, so w is calculated as follows:
Based on Eq. (13), the KBCGS method chooses \( \frac{1}{\mathrm{l}} \) as the initial weight w_{k}. In the second part of Eq. (9), the choice of δ is quite important since it weighs the gene-weight penalty against the gene distances. The value of δ should ensure that both parts are of the same order of magnitude, so, following the SCAD algorithm [28], δ is calculated iteratively as follows:
where α is a constant that influences the value of δ, with a default value of 0.05. The Gaussian kernel is employed in this algorithm:
where γ_{1} > 0 is a free parameter, and the distance can be expressed as:
The maximum number of iterations is 100, and θ = 10^{−6}.
The features of the improved method are outlined below. As in the KBCGS algorithm [20], the clustering objective function is defined as:
\( J={\sum}_{i= 1}^C{\sum}_{x_j\in {C}_i}{\phi}^2\left({x}_j,{v}_i\right)+\delta {\sum}_{k= 1}^l{w}_k^2 \)
where w = (w_{1}, w_{2},...,w_{l}) is the vector of gene weights.
The DKBCGS method calculates δ iteratively according to Chen's approach [20]; however, it improves the iterative calculation of w by deriving the following formula:
and, instead of the Gaussian kernel, the double RBF-kernel of Eq. (5) is used.
The initial value of δ in Eq. (13) is important in our algorithm since it reflects the importance of the second term relative to the first. If δ is too small, only one feature in cluster i will be relevant and be assigned a weight of one, while all other features will be assigned zero weights. On the other hand, if δ is too large, all features in cluster i will be relevant and assigned equal weights of 1/l. The value of δ should be chosen such that both terms are of the same order of magnitude. In all examples described in this paper, we compute δ iteratively using Eq. (17), as in the SCAD method [28].
By improving the iteration method, we achieve fewer iterations and thus faster convergence than the KBCGS method. As previously mentioned, gene expression datasets are often not linearly separable, so choosing an appropriate nonlinear kernel to map the data to a higher-dimensional space has proven effective.
Implementation
The algorithm can be stated using the following pseudocode:
Input: gene expression dataset X and class label vector y;
Output: weight vector w of genes;
Use Z-score to normalize the original data X;
Use Eq. (3) to calculate the cluster center of each class in the input space;
Use Eq. (8) to calculate the dissimilarity between the samples and the cluster centers of their classes;
Initial value: w^{(0)} = \( \frac{1}{\mathrm{l}} \);
Repeat:
Use Eq. (14) to find the (t + 1)th distance parameter δ^{(t + 1)};
Use Eq. (13) to calculate the (t + 1)th weights w^{(t + 1)} of genes;
Use Eq. (11) to calculate the (t + 1)th objective function value J^{(t + 1)};
Until: ∣J^{(t + 1)} − J^{(t)}∣ < θ.
Return w^{(t + 1)}.
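The pseudocode above can be sketched in Python (the authors used MATLAB). Because the closed-form updates of Eqs. (13) and (14) are not reproduced in this text, the sketch substitutes analogous SCAD-style updates [28] as an assumption: δ is set from the current weighted dissimilarity, and each weight is 1/l plus a correction favoring genes with below-average dissimilarity.

```python
import numpy as np

def per_gene_dissim(x, v, gamma=1.0):
    # ASSUMED per-gene kernel dissimilarity (see Methods).
    return 1.0 - np.exp(-gamma * (x - v) ** 2)

def dkbcgs_weights(X, y, alpha=0.05, max_iter=100, theta=1e-6):
    """Sketch of the DKBCGS loop: iterate delta and w until J converges."""
    n, l = X.shape
    classes = np.unique(y)
    V = np.vstack([X[y == c].mean(axis=0) for c in classes])   # cluster centers
    # D[k]: total dissimilarity of gene k between samples and their class centers
    D = np.zeros(l)
    for v, c in zip(V, classes):
        for x in X[y == c]:
            D += per_gene_dissim(x, v)
    w = np.full(l, 1.0 / l)            # initial weights 1/l
    J_old = np.inf
    for _ in range(max_iter):
        # SCAD-style delta: keeps both terms of J on the same scale
        denom = float(np.sum(w ** 2))
        delta = alpha * float(np.dot(w, D)) / max(denom, 1e-12)
        if delta <= 1e-12:             # degenerate: one gene took all the weight
            break
        # SCAD-style update: reward genes with below-average dissimilarity
        w = 1.0 / l + (D.mean() - D) / (2.0 * delta)
        w = np.clip(w, 0.0, None)
        w /= w.sum()                   # keep the weights on the simplex
        J = float(np.dot(w, D)) + delta * float(np.sum(w ** 2))
        if abs(J - J_old) < theta:
            break
        J_old = J
    return w

# Toy data: gene 0 is class-consistent, gene 1 is pure within-class noise.
X = np.array([[0.0, 0.0], [0.0, 2.0], [0.0, 4.0],
              [5.0, 0.0], [5.0, 2.0], [5.0, 4.0]])
y = np.array([0, 0, 0, 1, 1, 1])
w = dkbcgs_weights(X, y)
print(w)   # gene 0 receives (nearly) all the weight
```
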
We constructed SVM and KNN classifiers for each dataset; these methods are introduced in Additional file 2. Ten-fold cross-validation was used as the validation strategy to reduce error and obtain the classification accuracy.
The whole experiment was performed in MATLAB. Hyperparameter values were determined by grid search. Figure 3 shows the change in the average error rate as the number of feature genes selected by DKBCGS varies. There is a clear improvement in the results as the number of selected feature genes increases from 1 to 20. To identify the optimal performance across all datasets, the number was restricted to between 1 and 50.
Results
To validate the performance of the DKBCGS method, it was compared with several commonly used filter-based feature ranking methods, namely the χ²-statistic, maximum relevance and minimum redundancy (MRMR), ReliefF, information gain, and Fisher score; these methods are introduced in Additional file 1. The improved approach was also compared with KBCGS [20].
Dataset description
The four datasets used as benchmark examples in this work are shown in Table 1. The specifics of these datasets are outlined in the Additional file 3.
Discussion
Using the two-class datasets, the performance of the proposed method was compared with the other six methods by calculating the accuracy (ACC), true positive rate (TPR), and true negative rate (TNR).
Table 2 and Table S1 show the results on the two-class datasets. These results indicate that the proposed method achieves high accuracy and short runtime with both the SVM and KNN classifiers, while MRMR also performs well with the KNN classifier. Figure S1 shows that the expression of the characteristic genes selected by the proposed algorithm differs significantly between normal and diseased samples.
Gene-set enrichment analysis was used to identify coherent gene-sets. Figure 5 shows that the genes selected by DKBCGS (dataset: colon cancer) are enriched in strongly connected gene–gene interaction networks and in highly significant biological processes. Furthermore, the clear difference between the expression profiles of the top-ranked genes selected by DKBCGS, shown as a color map in Fig. 6(a), and those of eight randomly chosen genes, shown in Fig. 6(b), confirms the good performance of the proposed selection procedure.
Classification accuracy
TP, TN, FP, and FN denote the true positives, true negatives, false positives, and false negatives, respectively.
As the numbers of positive and negative samples in the two-class datasets are not equal, the true positive rate (TPR) and the true negative rate (TNR) were used as an additional performance measure, considering both the precision and the recall of the experiment under test. Precision is the number of correct positive results divided by the number of all positive results; recall is the number of correct positive results divided by the number of positive results that should have been returned. The TPR and TNR are calculated as follows:
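These counts and rates can be sketched directly (Python for illustration, with hypothetical label vectors):

```python
def classification_metrics(y_true, y_pred):
    """ACC, TPR and TNR from the confusion-matrix counts."""
    tp = sum(1 for t, p in zip(y_true, y_pred) if t == 1 and p == 1)
    tn = sum(1 for t, p in zip(y_true, y_pred) if t == 0 and p == 0)
    fp = sum(1 for t, p in zip(y_true, y_pred) if t == 0 and p == 1)
    fn = sum(1 for t, p in zip(y_true, y_pred) if t == 1 and p == 0)
    acc = (tp + tn) / len(y_true)
    tpr = tp / (tp + fn)   # sensitivity: recall on the positive (diseased) class
    tnr = tn / (tn + fp)   # specificity: recall on the negative (normal) class
    return acc, tpr, tnr

y_true = [1, 1, 1, 1, 0, 0, 0, 0, 0, 0]   # imbalanced: 4 positives, 6 negatives
y_pred = [1, 1, 1, 0, 0, 0, 0, 0, 1, 1]
print(classification_metrics(y_true, y_pred))   # (0.7, 0.75, 0.666...)
```

With imbalanced classes, ACC alone can be misleading, which is why TPR and TNR are reported separately.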
True positive rate: \( TPR=\frac{TP}{TP+ FN} \)

True negative rate: \( TNR=\frac{TN}{TN+ FP} \)
Table 2 shows the results on the two-class datasets. The runtime of DKBCGS, under 0.1 s, is much shorter than that of the other methods, except for MRMR-SVM on the DLBCL dataset; that is, the proposed double-kernel model can efficiently reduce computational complexity. Regarding accuracy, the proposed method also performs well, reaching 100% with the SVM classifier and slightly less than MRMR with the KNN classifier. Taken together, these results indicate that the proposed method achieves high accuracy and short runtime with both classifiers, while MRMR also performs well with KNN. The average ROC (receiver operating characteristic) curves were plotted for further evaluation in Fig. 4. A further comparison with KBCGS on four datasets, averaging the KNN and SVM results, is shown in Additional file 4: Table S1; the results clearly demonstrate that the improved approach DKBCGS performs better in both runtime and accuracy.
Regarding the gastric cancer dataset, we mapped the multidimensional observations into the two-dimensional space formed by the two most important principal components.
Two cases were investigated. The first uses vectors containing only the 50 genes selected by the fusion procedure; Fig. 5(a) depicts this case, in which only the best representative genes in the vector x are used. For comparison, principal component analysis (PCA) was repeated for the full-size original 2000-element vectors containing all genes; the resulting sample distribution is presented in Fig. 5(b). The large bold circle and x symbols represent the centroids of the data belonging to the two classes.
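The projection onto the first two principal components can be sketched with an SVD (Python for illustration; the 20 x 50 matrix below is a synthetic stand-in, not the gastric cancer data):

```python
import numpy as np

def pca_scores_2d(X):
    """Project samples onto the first two principal components via SVD."""
    Xc = X - X.mean(axis=0)                        # center each gene
    U, S, Vt = np.linalg.svd(Xc, full_matrices=False)
    return Xc @ Vt[:2].T                           # (n_samples, 2) scores

rng = np.random.default_rng(0)
# Hypothetical stand-in: 20 samples x 50 genes, with the first 10 samples
# (one class) shifted apart from the rest along the first five genes.
X = rng.normal(size=(20, 50))
X[:10, :5] += 4.0
scores = pca_scores_2d(X)
print(scores.shape)   # (20, 2); the two classes separate along PC1
```

Per-class centroids in this 2-D space (the bold symbols in Fig. 5) would then simply be the class means of `scores`.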
Furthermore, the expression levels of the first fifty top-ranked genes in the gastric cancer dataset were analyzed for the various methods, as shown in Additional file 5: Figure S1. The expression of the characteristic genes selected by the proposed algorithm clearly differs between normal and diseased samples, and these genes are therefore of research value.
Geneset enrichment analysis
Gene-set enrichment analysis identifies coherent gene-sets, such as pathways, that are statistically over-represented in a given gene list. Ideally, the number of resulting sets is smaller than the number of genes in the list, simplifying interpretation; however, the increasing number and redundancy of gene-sets used by many current enrichment analysis resources work against this ideal. Gene-sets are organized in a network in which each set is a node and links represent gene overlap between sets [26]. For the DLBCL dataset, the genes selected by DKBCGS are enriched in strongly connected gene–gene interaction networks and in highly significant biological processes (Fig. 6).
To illustrate the results graphically, the expression levels of the selected genes (dataset: colon cancer) are presented in Fig. 7(a), which shows the expression profiles of the top-ranked genes selected by DKBCGS as a color map. The vertical axis represents observations and the horizontal axis the genes, arranged by importance; a visible border separates the cancer group from the normal group. For comparison, the expression profiles of eight randomly chosen genes are presented in Fig. 7(b). The marked difference between the two images confirms the good performance of the proposed selection procedure.
Comparison of multiclass datasets
For the multi-class datasets, the performance of all methods was evaluated by computing the accuracy (ACC) and runtime (Time). The results are shown in Table 3. Further comparisons with KBCGS on other multi-class datasets are given in Additional file 4: Table S2. Both tables clearly show that the proposed method reduces runtime while maintaining high accuracy.
On the lung cancer gene expression data, there is a substantial improvement in classification accuracy with the double RBF-kernel algorithm for each of the feature subsets, demonstrating that the double RBF-kernel method selects appropriate genes more efficiently than the other methods. The feature genes it selects not only improve the classification accuracy of the gene expression data but also identify informative genes responsible for disease. The double RBF-kernel method therefore outperforms the χ²-statistic, MRMR, ReliefF, information gain, and the Kruskal–Wallis test, although the information gain method turns out to be highly competitive.
In the second part of the experiment, the expression levels of the selected genes (dataset: lymphoma) are shown as before in Fig. 8(a), which presents the expression profiles of the top-ranked genes selected by fusion as a color map; a visible border separates the different groups. For comparison, Fig. 8(b) shows the expression profiles of 20 randomly chosen genes. The clear difference between the two images demonstrates the performance of the proposed selection procedure.
Differential gene expression analysis
The top 50 genes of the gastric cancer dataset were analyzed with the paired t-test to obtain t-scores, the p-value plot, and the quantile–quantile plot; the quantile–quantile plot mainly serves to compare the gene expression levels of the two classes. The results, shown in Figs. 9 and 10, clearly show the difference between the feature genes obtained by DKBCGS and the original data. The selected genes exhibit significant differential expression, with low p-values (average p-value = 0.023), confirming that the genes selected by DKBCGS are statistically significant.
The t-score plot indicates the normality of the data and the rationality of using the paired t-test. The histogram of p-values also shows that the paired t-test is significant, since the vast majority of p-values fall at the lower end of the histogram.
A t-test was performed on each gene between the two groups to identify significant differences, both for all genes and for the feature genes selected by our method, and a normal quantile plot was obtained from the t-scores. Histograms of t-scores and p-values were used to examine the test results.
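A per-gene t-score computation can be sketched as follows (Python for illustration). Note this sketch uses an unpaired, pooled-variance t-score, whereas the text reports a paired test; p-values would follow from the t distribution with n0 + n1 - 2 degrees of freedom (e.g., scipy.stats.t.sf):

```python
import numpy as np

def gene_t_scores(X, y):
    """Pooled-variance two-sample t-score for each gene (column) of X."""
    g0, g1 = X[y == 0], X[y == 1]
    n0, n1 = len(g0), len(g1)
    m0, m1 = g0.mean(axis=0), g1.mean(axis=0)
    v0 = g0.var(axis=0, ddof=1)                    # unbiased per-group variance
    v1 = g1.var(axis=0, ddof=1)
    sp2 = ((n0 - 1) * v0 + (n1 - 1) * v1) / (n0 + n1 - 2)   # pooled variance
    return (m0 - m1) / np.sqrt(sp2 * (1.0 / n0 + 1.0 / n1))

# Hypothetical data: gene 0 is differentially expressed; gene 1 is noise.
X = np.array([[1.0, 1.0], [2.0, 3.0], [3.0, 2.0],
              [11.0, 2.0], [12.0, 1.0], [13.0, 3.0]])
y = np.array([0, 0, 0, 1, 1, 1])
t = gene_t_scores(X, y)
print(np.abs(t))   # |t| is large for gene 0, near zero for gene 1
```

Ranking genes by |t| reproduces the kind of separation between selected feature genes and the remaining genes that the histograms in Figs. 9 and 10 illustrate.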
Conclusion
The choice of the number of kernels typically depends on the heterogeneity of the dataset. Experiments on gene expression datasets show that the double RBF-kernel outperforms all other feature selection methods tested in terms of classification accuracy on both two-class and multi-class datasets, especially those with small samples. Its classification performance makes double RBF-kernel learning a well-suited alternative to single RBF-kernel learning.
The use of known performance measures, such as accuracy, TNR, and TPR, clearly showed the high potential of the proposed method for classification tasks in bioinformatics and related disciplines. The initial value of δ, as a ranking criterion, was a key issue for feature gene selection. In this paper, a flexible model for cancer gene expression classification and feature gene selection was proposed, whose parameters can be adjusted for different datasets through cross-validation to achieve the best result. The performance of the proposed method was compared with six classical methods, demonstrating that it can outperform existing methods in identifying feature cancer genes. In conclusion, the proposed method is superior in accuracy and runtime on both two-class and multi-class datasets, especially those with small samples, and is computationally efficient. However, double-kernel learning may not scale well to very large datasets. Future work could investigate the computational aspects of large-scale data in more depth and use graph-based kernels to process gene networks.
Abbreviations
ACC: Accuracy
DLBCL: Diffuse large B-cell lymphoma
GRBF: Gaussian radial basis function
KNN: K-nearest neighbor
MRMR: Maximum relevance and minimum redundancy
PCA: Principal component analysis
POLY: Polynomial kernel function
RBF: Radial basis function
SRBCTs: Small round blue cell tumors
SVM: Support vector machine
TNR: True negative rate
TPR: True positive rate
References
Alizadeh A, et al. Distinct types of diffuse large B-cell lymphoma identified by gene expression profiling. Nature. 2000;403:503–11.
Alon U, et al. Broad patterns of gene expression revealed by clustering analysis of tumor and normal colon tissues probed by oligonucleotide arrays. Proc Natl Acad Sci. 1999;96:6745–50.
Zhou X, Mao KZ. LS bound based gene selection for DNA microarray data. Bioinformatics. 2005;21(8):1559–64.
Vapnik V. The nature of statistical learning theory. New York: Springer; 1995.
Dhavala SS, Datta S, Mallick BK. Bayesian modeling of MPSS data: gene expression analysis of bovine Salmonella infection. Publ Am Stat Assoc. 2010;105(491):956–67.
Kira K, Rendell LA. A practical approach to feature selection. Int Workshop Mach Learn. 1992;48(1):249–56.
Chater N, Oaksford M. Information gain and decision-theoretic approaches to data selection: response to Klauer. Psychol Rev. 1999;106:223–7.
Peng H, Ding C, Long F. Minimum redundancy maximum relevance feature selection. Bioinforma Comput Biol. 2005;3(2):185–205.
Saeys Y, et al. A review of feature selection techniques in bioinformatics. Bioinformatics. 2007;23(19):2507–17.
Blum A, Langley P. Selection of relevant features and examples in machine learning. Artif Intell. 1997;97:245–71.
Jacobs IJ, Skates SJ, Macdonald N. Screening for ovarian cancer: a pilot randomised controlled trial. Lancet. 1999;353(9160):1207–10.
Xiong M, et al. Biomarker identification by feature wrappers. Genome Res. 2001;11(11):1878–87.
Guyon I, et al. Gene selection for cancer classification using support vector machines. Mach Learn. 2002;46:389–422.
Kim D, Lee K, Lee D. Detecting clusters of different geometrical shapes in microarray gene expression data. Bioinformatics. 2005;7(1):3.
Duval B, Hao JK. Advances in metaheuristics for gene selection and classification of microarray data. Brief Bioinform. 2010;11(1):127–41.
Brenner S, Johnson M, Bridgham J. Gene expression analysis by massively parallel signature sequencing (MPSS) on microbead arrays. Nat Biotechnol. 2000;18(6):630–4.
Schölkopf B, Smola AJ. Learning with kernels. Cambridge: MIT Press; 2002.
Audic S, Claverie JM. The significance of digital gene expression profiles. Genome Res. 1997;7(10):986.
Hanczar B, Dougherty ER. Classification with reject option in gene expression data. Bioinformatics. 2008;24(17):1889–95.
Chen H, Zhang Y, Gutman I. A kernel-based clustering method for gene selection with gene expression data. J Biomed Inform. 2016;62:12–20.
Smits GF, Jordan EM. Improved SVM regression using mixtures of kernels. Int Joint Conf Neural Netw. 2002;3:2785–90.
Scholkopf B, Mika S, Burges C, Knirsch P, Muller K, Ratsch G, Smola A. Input space versus feature space in kernel-based methods. IEEE Trans Neural Netw. 1999;10:1000–17.
Phienthrakul T, Kijsirikul B. Evolutionary strategies for multi-scale radial basis function kernels in support vector machines. Conf Genet Evol Comput. 2005;14(7):905–11.
Shipp MA, Ross KN, Tamayo P, Weng AP, Kutok JL, Aguiar RC, et al. Diffuse large B-cell lymphoma outcome prediction by gene-expression profiling and supervised machine learning. Nat Med. 2002;8(1):68–74.
Rajkumar T, Sinha BN. Studies on activity of various extracts of Albizia amara against drug-induced gastric ulcers. Pharmacognosy J. 2011;3(25):73–7.
Yuan J, Yue H, Zhang M, Luo J. Transcriptional profiling analysis and functional prediction of long noncoding RNAs in cancer. Oncotarget. 2016;7(7):8131–42.
Evgeniou TK. Learning with kernel machine architectures. Ph.D. dissertation. Cambridge: Massachusetts Institute of Technology; 2000. AAI0801924.
Frigui H, Nasraoui O. Simultaneous clustering and attribute discrimination. Proc FUZZ-IEEE. 2000;1:158–63.
Merico D, Isserlin R, Bader GD. Visualizing gene-set enrichment results using the Cytoscape plug-in Enrichment Map. Methods Mol Biol. 2011;781:257–77.
Acknowledgments
This work was partly supported by the Shandong Natural Science Foundation (ZR2015AM017) and the National Natural Science Foundation of China (Nos. 61877064). Matthias Dehmer thanks the Austrian Science Funds for supporting this work (project P 30031).
Funding
This work was supported by The National Natural Science Foundation of China (Nos. 61877064), Shandong Natural Science Foundation (ZR2015AM017) and Austrian Science Funds (project P26142).
Availability of data and materials
The datasets used and/or analyzed during the current study are available from the corresponding author on reasonable request.
Author information
Authors and Affiliations
Contributions
SL and YZ proposed RBFkernels. SL was a major contributor in programming. CX worked for the biological significance part of the manuscript. JL, BY, XL and MD jointly improved methods and applications. All authors contributed to analyzing gene expression data and writing manuscript, and approved the final manuscript.
Corresponding authors
Ethics declarations
Ethics approval and consent to participate
Not applicable.
Consent for publication
Not applicable.
Competing interests
The authors declare that they have no competing interests.
Publisher's Note
Springer Nature remains neutral with regard to jurisdictional claims in published maps and institutional affiliations.
Additional files
Additional file 1:
Existing gene selection methods: a brief introduction. (DOCX 18 kb)
Additional file 2:
SVM and KNN classifiers. (DOCX 18 kb)
Additional file 3:
Dataset descriptions. (DOCX 16 kb)
Additional file 4:
Further comparison for other datasets. Table S1. Average performance in KNN and SVM classifiers of DKBCGS and KBCGS (two-class classification). Table S2. Average performance in KNN and SVM classifiers of our method and KBCGS (multi-class classification). Table S3. Performance of gene feature selection methods with the KNN classifier (top) and SVM classifier (bottom) on two-class datasets. (DOCX 27 kb)
Additional file 5:
Top-50 gene expression. Figure S1. First fifty top-ranked gene expression levels by different methods. The horizontal axis is the number of characteristic genes, the vertical axis is the gene expression level, and the black line represents the mean gene expression difference between the normal sample and the cancer sample. (PNG 596 kb)
Rights and permissions
Open Access This article is distributed under the terms of the Creative Commons Attribution 4.0 International License (http://creativecommons.org/licenses/by/4.0/), which permits unrestricted use, distribution, and reproduction in any medium, provided you give appropriate credit to the original author(s) and the source, provide a link to the Creative Commons license, and indicate if changes were made. The Creative Commons Public Domain Dedication waiver (http://creativecommons.org/publicdomain/zero/1.0/) applies to the data made available in this article, unless otherwise stated.
About this article
Cite this article
Liu, S., Xu, C., Zhang, Y. et al. Feature selection of gene expression data for Cancer classification using double RBF-kernels. BMC Bioinformatics 19, 396 (2018). https://doi.org/10.1186/s12859-018-2400-2
DOI: https://doi.org/10.1186/s12859-018-2400-2
Keywords
 Clustering
 Gene expression
 Cancer classification
 Feature selection
 Data mining