
Adaptive diffusion kernel learning from biological networks for protein function prediction

Abstract

Background

Machine-learning tools have gained considerable attention during the last few years for analyzing biological networks for protein function prediction. Kernel methods are suitable for learning from graph-based data such as biological networks, as they only require the abstraction of the similarities between objects into the kernel matrix. One key issue in kernel methods is the selection of a good kernel function. Diffusion kernels, the discretization of the familiar Gaussian kernel of Euclidean space, are commonly used for graph-based data.

Results

In this paper, we address the issue of learning an optimal diffusion kernel, in the form of a convex combination of a set of pre-specified kernels constructed from biological networks, for protein function prediction. Most prior work on this kernel learning task focuses on variants of the loss function based on Support Vector Machines (SVM). Their extensions to other loss functions, such as the one based on Kullback-Leibler (KL) divergence, which is more suitable for mining biological networks, lead to expensive optimization problems. By exploiting the special structure of the diffusion kernel, we show that this KL divergence based kernel learning problem can be formulated as a simple optimization problem, which can then be solved efficiently. It is further extended to the multi-task case, where we predict multiple functions of a protein simultaneously. We evaluate the efficiency and effectiveness of the proposed algorithms using two benchmark data sets.

Conclusion

Results show that the performance of the linearly combined diffusion kernel is better than that of every single candidate diffusion kernel. When the number of tasks is large, the algorithms based on multiple tasks are favored because of their competitive recognition performance and low computational cost.

Background

Many types of genomic data can be represented as a graph (network), where the nodes represent genes or proteins, and edges may represent similarities between protein sequences, edges in a metabolic pathway, and physical interactions between proteins [1]. Machine learning tools have been commonly used to analyze biological networks for knowledge discovery and pattern analysis [2]. In this paper, we focus on learning from biological networks for protein function prediction. This problem has recently been studied extensively using computational approaches [1]. Neighborhood-based methods [3, 4] assign functions to proteins based on the most frequent functions within a neighborhood of the protein, and they differ mainly in how the "neighborhood" of a protein is defined. More sophisticated prediction functions have been exploited in [5, 6]. Methods based on network diffusion [7, 8] view the protein network as a flow network, and functions of proteins are diffused from annotated proteins to their neighbors in various ways. Other approaches for protein function annotation from biological networks include the graph-cut-based approaches [9, 10] and those derived from kernel methods [11–13].

Kernel methods are versatile tools for learning from graph-based data, as they only require the characterization of similarities between objects via the kernel trick [2, 14]. Diffusion kernels [15], which can be considered the discretization of the well-known Gaussian kernel of Euclidean space, are commonly used for graph-based data. In kernel methods, the information on the data is conveyed only through the kernel function, which uniquely determines the mapping of the original inputs onto a feature space. Thus, one of the central issues in kernel methods is the selection of a good kernel function for the specific problem at hand. A recent trend in kernel learning (selection) is to formulate it as a convex program, which leads to a globally optimal solution [16]. The idea of learning a linear combination of pre-specified kernels for Support Vector Machines (SVM) was originally proposed in [17], where this problem was formulated as semidefinite programs (SDP) and Quadratically Constrained Quadratic Programs (QCQP). In general, approaches based on learning a convex combination of kernels offer the additional advantage of facilitating the integration of heterogeneous data from different sources [18].

The objective functions for kernel learning used in [17] are performance measures for the hard margin SVM, the 1-norm soft margin SVM, and the 2-norm soft margin SVM. An alternative criterion for kernel matrix learning is the Kullback-Leibler (KL) divergence [19] between the two zero-mean Gaussian distributions defined by the input and output kernel matrices [20]. One particularly appealing feature of the KL divergence criterion is that unlabeled (test) data can be integrated naturally into the training process, thereby improving generalization. The formulations in [17] also use unlabeled data, but only in a weak form, by constraining the trace of the kernel matrix over both training and test data. Directly incorporating unlabeled data through the KL divergence criterion, however, introduces a matrix determinant term. The resulting formulation is a so-called maximum-determinant problem [21], a general framework that contains semidefinite programming (SDP) [16] as a special case. Despite its theoretical soundness, experience with semidefinite programming indicates that it is computationally expensive and thus cannot be scaled to large problems. The maximum-determinant problem is an even more general framework than SDP, and the path-following algorithms used to solve it are more expensive.

Diffusion kernels [15] capture the long-range relationships between vertices of graphs and are the state of the art for building kernels on graphs. In this paper, we focus on learning diffusion kernels constructed from biological networks, using the KL divergence criterion. In particular, we show that when the KL divergence criterion is used to optimize a convex combination of diffusion kernels with different parameters, the resulting optimization problem does not involve the matrix determinant term and thus can be solved by gradient descent methods. Previous studies [22, 23] have shown that the removal of the matrix-determinant term in the KL divergence criterion has limited effect on its performance. When this modified criterion is used to learn a linear combination of diffusion kernels, the resulting optimization problem is convex, and thus solutions obtained by gradient descent methods are guaranteed to be globally optimal. A protein typically performs multiple functions. Most existing approaches formulate a separate task for each function and learn the tasks independently. This decouples the functions of a protein and potentially compromises performance, as these functions are usually related. We show that our single-task kernel learning formulation based on the KL divergence criterion can be extended to the multi-task case by enforcing all tasks to share a common kernel. The resulting formulation leads to a single optimization problem, which learns multiple functions of proteins simultaneously. Experimental results show that multi-task kernel learning in this joint optimization framework maintains competitive prediction performance, while its computational cost is similar to that of a single task, thus dramatically reducing the overall time complexity.

Methods

We study the problem of protein function prediction from biological networks, which are represented as graphs. For a graph $\mathcal{G}$, the vertices represent proteins and edges characterize the relationship between proteins. In the following discussion, the vertex and edge sets are denoted as V and E, respectively. The total number of proteins in the network is n = |V|. The adjacency matrix A is used to denote the similarity between vertices, where $A_{i,j}$ describes the similarity between vertices $v_i$ and $v_j$. The functions of some proteins in the network are already known, and the goal of protein function prediction is to infer the functions of unannotated proteins based on the functions of annotated proteins and the network topology. In particular, for a graph $\mathcal{G} = (V, E)$, the vertices in V can be partitioned into a training set and a test set. The functions of proteins in the training set are already known, while those of proteins in the test set are unknown. Each edge in E reflects the local similarities between its ending vertices. The learning problem is to predict the functions of proteins in the test set based on the label information of the training set and the topology of the graph.

Background and Related Work

Kernel methods are particularly suitable for learning from graph-based data, as they only require the similarities between proteins to be encoded in the kernel matrix. In kernel methods, a symmetric function $\kappa: \mathcal{X} \times \mathcal{X} \rightarrow \mathbb{R}$, where $\mathcal{X}$ denotes the input space, is called a kernel function if it satisfies Mercer's condition [14]. When used for a finite number of samples in practice, this condition can be stated as follows: for any $x_1, \ldots, x_n \in \mathcal{X}$, the Gram matrix $K \in \mathbb{R}^{n \times n}$, defined by $K_{ij} = \kappa(x_i, x_j)$, is positive semidefinite. Any kernel function $\kappa$ implicitly maps the input set $\mathcal{X}$ to a high-dimensional (possibly infinite-dimensional) Hilbert space $\mathcal{H}_\kappa$ equipped with the inner product $(\cdot, \cdot)_{\mathcal{H}_\kappa}$ through a mapping $\varphi_\kappa: \mathcal{X} \rightarrow \mathcal{H}_\kappa$:

$\kappa(x, z) = \left(\varphi_\kappa(x), \varphi_\kappa(z)\right)_{\mathcal{H}_\kappa}.$   (1)

The adjacency matrix A cannot be directly used as a kernel matrix. First, the adjacency matrix contains only local similarity information, which may not be effective for function prediction. Second, the adjacency matrix may not even be positive semidefinite. To derive a kernel matrix from the adjacency matrix, the idea of random walks and network diffusion has been used. The basic idea is to compute the global similarity between vertices $v_i$ and $v_j$ as the probability of reaching $v_j$ at some time point T when a random walker starts from $v_i$. This idea is justified, at least to some extent, by the observation that the random walker tends to meander around its origin, as there is a larger number of paths of length |T| to its neighbors than to remote vertices [2].

To avoid potential problems such as choosing the value of T and ensuring positive semidefiniteness of the kernel matrix, a random walk with an infinite number of infinitesimally small steps is used instead. It can be formally described as

$K = \lim_{s \to \infty} \left(I + \frac{\beta L}{s}\right)^{s} = e^{\beta L},$   (2)

where β is a parameter for controlling the extent of diffusion and $L \in \mathbb{R}^{n \times n}$ is the graph Laplacian matrix defined as

$L = \operatorname{diag}(Ae) - A,$   (3)

where A is the adjacency matrix, e is the vector of all ones, and diag(Ae) is a diagonal matrix whose diagonal entries are the corresponding row sums of the matrix A. It turns out that for any symmetric matrix L, $e^{\beta L}$ is always positive definite and thus can be used as a kernel matrix. The diffusion effect of such a kernel can be seen explicitly when it is expanded as [2]:

$e^{\beta L} = I + \beta L + \frac{\beta^2}{2} L^2 + \frac{\beta^3}{6} L^3 + \cdots,$   (4)

where the local information encoded in L is continuously diffused by repeated multiplications. The parameter β in the diffusion kernel controls the extent of diffusion and has an effect similar to that of the scaling parameter in Gaussian kernels. If β is too small, the local information cannot be diffused effectively, resulting in a kernel matrix that only captures local similarities. On the other hand, if it is too large, the neighborhood information will be lost. Furthermore, the optimal value of β is problem- and data-dependent. Thus it is highly desirable to tune the β value adaptively from the data.
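To make the construction concrete, the following short sketch (ours, not the authors' code) builds a diffusion kernel from a dense adjacency matrix along the lines of Eqs. (2)-(4); the function and variable names are illustrative assumptions.

```python
# Minimal sketch: diffusion kernel K = exp(beta * L) for an undirected
# graph given by a dense adjacency matrix A, following Eqs. (2)-(4).
import numpy as np
from scipy.linalg import expm

def diffusion_kernel(A, beta):
    L = np.diag(A.sum(axis=1)) - A   # graph Laplacian, Eq. (3)
    return expm(beta * L)            # matrix exponential, Eq. (2)

# Example on a 3-vertex path graph; with L = diag(Ae) - A, negative beta
# (as used in the experiments below) spreads similarity beyond the
# directly connected neighbors while keeping the kernel positive definite.
A = np.array([[0., 1., 0.], [1., 0., 1.], [0., 1., 0.]])
K = diffusion_kernel(A, beta=-0.5)
```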

We approach the kernel tuning problem by learning an optimal kernel as a linear combination of pre-specified diffusion kernels constructed with different values of β. This is motivated by the work in [17], where the optimal kernel for SVM, in the form of a linear combination of pre-specified kernels, is learned based on the large margin criterion. In particular, the generalized performance measure based on the 1-norm soft margin SVM used in [17] is

$\omega_{S1}(K) = \max_{\alpha:\, C \geq \alpha \geq 0,\ \alpha^T y = 0} \left\{ 2\alpha^T e - \alpha^T G(K)\alpha \right\},$   (5)

where C > 0 is the regularization parameter in SVM, e is the vector of all ones, G(K) is defined by $G_{ij}(K) = \kappa(x_i, x_j) y_i y_j$, and the i-th entry of y, denoted $y_i$, is the class label (1 or -1) of the i-th data point $x_i$. Lanckriet et al. [17] showed that when the optimal kernel is restricted to a linear combination of the given p kernels $K_1, \ldots, K_p$, the kernel learning problem can be formulated as a semidefinite program. Furthermore, when the coefficients of the linear combination are constrained to be non-negative, the kernel learning problem can be formulated as a Quadratically Constrained Quadratic Program [16]. As was shown in [20], an alternative performance measure is the KL divergence between the two zero-mean Gaussian distributions associated with the input and output kernel matrices. We show that when this KL divergence criterion is used to learn a linear combination of diffusion kernels constructed with different values of β, the resulting optimization problem can be solved efficiently. We further show that it can be extended to the multiple-task case. Such integration of multiple tasks into one optimization problem can potentially exploit the complementary information among different tasks.

Diffusion Kernel Learning: The Single-Task Case

We focus on learning an optimal kernel for a single task, which will then be extended to the multi-task case. The underlying idea is that the Laplacian matrix L, defined in Eq. (3), contains the connectivity information of all vertices in the graph. By adaptively tuning the kernel constructed from L on the training vertices, the entries corresponding to test vertices are expected to be tuned in some optimal way as well. To restrict the search space and improve the generalization ability, we focus on learning an optimal kernel as a linear combination of a set of diffusion kernels constructed with different values of β, indicating different extents of diffusion. In particular, we choose a sequence of values for β as $\beta_1, \ldots, \beta_p$, and the corresponding diffusion kernels can be constructed as

$K_i = e^{\beta_i L}, \quad i = 1, \ldots, p.$   (6)

We may assume that the kernels defined in Eq. (6) reflect our (weak) prior knowledge about the problem. The goal is to integrate the tuning of the coefficients into the learning process so that the algorithm can adaptively select an optimal linear combination of the given kernels. Note that it is numerically favorable to normalize the kernels, though this does not affect the results theoretically [14]. We normalize the kernels as follows:

$\tilde{K}_i = \frac{e^{\beta_i L}}{\operatorname{trace}(e^{\beta_i L})},$   (7)

and the optimal kernel can be represented as

$K_{opt} = \sum_{i=1}^{p} \alpha_i \tilde{K}_i = \sum_{i=1}^{p} \alpha_i \frac{e^{\beta_i L}}{\operatorname{trace}(e^{\beta_i L})},$   (8)

for a set of non-negative coefficients $\{\alpha_i\}_{i=1}^{p}$.
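As a concrete illustration (a sketch under our notation, not the authors' implementation), all p trace-normalized candidate kernels in Eqs. (7)-(8) can be built from a single eigendecomposition of L, which is also what makes the formulations below cheap to evaluate:

```python
# Sketch: p normalized diffusion kernels from one eigendecomposition
# L = P diag(d) P^T, so each candidate is P diag(exp(beta_i*d)) P^T / trace(.).
import numpy as np

def candidate_kernels(A, betas):
    L = np.diag(A.sum(axis=1)) - A
    d, P = np.linalg.eigh(L)              # eigenvalues d, eigenvectors P
    kernels = []
    for beta in betas:
        Di = np.exp(beta * d)             # eigenvalues of exp(beta * L)
        K = (P * Di) @ P.T                # P diag(Di) P^T
        kernels.append(K / Di.sum())      # trace normalization, Eq. (7)
    return kernels
```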

Kullback-Leibler Divergence Formulation

Kernel matrices are positive semidefinite and thus can be used as the covariance matrices of Gaussian distributions. It was shown in [20] that the kernel matrix can be learned by minimizing the Kullback-Leibler (KL) divergence between the zero-mean Gaussian distributions associated with the input and output kernel matrices. In this paper, we focus on learning the optimal coefficients $\alpha_i$ from the data automatically by minimizing this KL divergence criterion. As described in [20], the KL divergence between the zero-mean Gaussian distributions defined by the input kernel $K_x$ and output kernel $K_y$ can be expressed as

$\mathrm{KL}(N_y | N_x) = \frac{1}{2} \operatorname{trace}\left(K_y K_x^{-1}\right) + \frac{1}{2} \log|K_x| - \frac{1}{2} \log|K_y| - \frac{n}{2},$   (9)

where |·| denotes the matrix determinant, $N_x$ and $N_y$ denote the zero-mean Gaussian distributions associated with $K_x$ and $K_y$, respectively, and n is the number of samples. When the output kernel is defined as $K_y = yy^T$, the KL divergence in Eq. (9) can be expressed as

$\mathrm{KL}(N_y | N_x) = \frac{1}{2} y^T K_x^{-1} y + \frac{1}{2} \log|K_x| + \mathrm{const},$   (10)

where "const" denotes terms that are independent of K x , and K x is the input kernel matrix, which is defined as a linear combination of the given p kernels as

$K_x = \sum_{i=1}^{p} \alpha_i \tilde{K}_i + \lambda I = \sum_{i=1}^{p} \alpha_i \frac{e^{\beta_i L}}{\operatorname{trace}(e^{\beta_i L})} + \lambda I.$   (11)

Note that a regularization term, with λ as the regularization parameter, is added to Eq. (11) to deal with the singularity problem of kernel matrices as in [20], and we require $\sum_{i=1}^{p} \alpha_i = 1$ as in multiple kernel learning (MKL) [17]. The optimal coefficients $\alpha = [\alpha_1, \ldots, \alpha_p]^T$ are computed by minimizing $\mathrm{KL}(N_y | N_x)$. By substituting Eq. (11) into Eq. (10) and removing the constant term, we obtain the following optimization problem:

$\min_{\alpha} \left\{ a^T \left( \sum_{i=1}^{p} \alpha_i \tilde{K}_i + \lambda I \right)^{-1} a + \log \left| \sum_{i=1}^{p} \alpha_i \tilde{K}_i + \lambda I \right| \right\}$
$\text{s.t.} \quad \sum_{i=1}^{p} \alpha_i = 1, \quad \alpha \geq 0,$   (12)

where $\alpha = (\alpha_1, \ldots, \alpha_p)^T$, $\alpha \geq 0$ denotes that all components of α are non-negative, and the vector $a \in \mathbb{R}^n$ is the problem-specific target vector, corresponding to the general target in Eq. (9), defined as follows:

$a_i = \begin{cases} 1 & \text{if } v_i \text{ is in the positive class}, \\ -1 & \text{if } v_i \text{ is in the negative class}, \\ 0 & \text{if } v_i \text{ is in the test set}. \end{cases}$   (13)

Note that we assign the label 0 to vertices in the test set so that they do not bias the solution towards either class. A similar idea has been used in [24] for semi-supervised learning. In multiple kernel learning [17], the sum-to-one constraint on the weights is enforced as in Eq. (12). We present results on both constrained and unconstrained formulations in the experiments. Results show that the constrained formulations achieve better performance than the unconstrained ones.
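For illustration, the objective of Eq. (12) can be evaluated directly for a given weight vector α as below (a naive sketch, not the authors' code); it requires solving with an n × n matrix per evaluation, which is exactly what the reformulation in Theorem 1 below avoids. The index arrays pos and neg used to build the target vector of Eq. (13) are hypothetical.

```python
# Sketch: target vector of Eq. (13) and a direct evaluation of Eq. (12).
import numpy as np

def target_vector(n, pos, neg):
    a = np.zeros(n)                       # test vertices keep label 0
    a[pos], a[neg] = 1.0, -1.0            # annotated training vertices
    return a

def kl_objective(alpha, kernels, a, lam=1e-6):
    Kx = sum(w * K for w, K in zip(alpha, kernels)) + lam * np.eye(len(a))
    fit = a @ np.linalg.solve(Kx, a)      # a^T Kx^{-1} a
    _, logdet = np.linalg.slogdet(Kx)     # log |Kx|
    return fit + logdet
```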

Recall that the graph Laplacian matrix L is symmetric, so its eigendecomposition can be expressed as

$L = PDP^T,$   (14)

where $D = \operatorname{diag}(d_1, \ldots, d_n)$

is the diagonal matrix of eigenvalues and $P \in \mathbb{R}^{n \times n}$ is the orthogonal matrix of corresponding eigenvectors. According to the definition of functions of matrices [25], we have

$e^{\beta_i L} = P D_i P^T,$   (15)

where

$D_i = \operatorname{diag}\left(e^{\beta_i d_1}, \ldots, e^{\beta_i d_n}\right).$   (16)

The main result is summarized in the following theorem:

Theorem 1. Given a set of p diffusion kernels, as defined in Eq. (7), the problem of learning the optimal kernel matrix, in the form of a convex combination of the given p kernel matrices as in Eq. (12), can be formulated as the following optimization problem:

$\min_{\alpha} \quad \sum_{j=1}^{n} \left( \frac{b_j^2}{g_j} + \log(g_j) \right)$   (17)

subject to $\sum_{i=1}^{p} \alpha_i = 1,$   (18)

$\alpha \geq 0,$   (19)

where $b = (b_1, \ldots, b_n)^T = P^T a$, and $g_j$ is the j-th diagonal entry of the diagonal matrix G, defined as

$G = \sum_{i=1}^{p} \alpha_i \frac{D_i}{\operatorname{trace}(D_i)} + \lambda I,$   (20)

and $D_i$ is the diagonal matrix defined in Eq. (16).

Proof. The first term in Eq. (12) can be written as:

$a^T \left( \sum_{i=1}^{p} \alpha_i \tilde{K}_i + \lambda I \right)^{-1} a = a^T \left( \sum_{i=1}^{p} \alpha_i \frac{e^{\beta_i L}}{\operatorname{trace}(e^{\beta_i L})} + \lambda I \right)^{-1} a$
$= a^T P \left( \sum_{i=1}^{p} \alpha_i \frac{D_i}{\operatorname{trace}(e^{\beta_i L})} + \lambda I \right)^{-1} P^T a$
$= b^T \left( \sum_{i=1}^{p} \alpha_i \frac{D_i}{\operatorname{trace}(D_i)} + \lambda I \right)^{-1} b = b^T G^{-1} b = \sum_{j=1}^{n} \frac{b_j^2}{g_j},$   (21)

where the third equality follows from the property of the trace, that is,

$\operatorname{trace}(e^{\beta_i L}) = \operatorname{trace}(P D_i P^T) = \operatorname{trace}(D_i).$

Similarly, the second term in Eq. (12) can be written as:

$\log \left| \sum_{i=1}^{p} \alpha_i \tilde{K}_i + \lambda I \right| = \log \left| \sum_{i=1}^{p} \alpha_i \frac{e^{\beta_i L}}{\operatorname{trace}(e^{\beta_i L})} + \lambda I \right| = \log \left| \sum_{i=1}^{p} \alpha_i \frac{e^{\beta_i D}}{\operatorname{trace}(e^{\beta_i D})} + \lambda I \right|$
$= \log |G| = \log \left( \prod_{j=1}^{n} g_j \right) = \sum_{j=1}^{n} \log(g_j).$   (22)

By combining the first term in Eq. (21) and the second term in Eq. (22), we prove the theorem.

The formulation in Theorem 1 is a nonlinear optimization problem. It involves a nonlinear objective function with p variables and linear equality and inequality constraints. Due to the presence of the log term in the objective, the problem is non-convex, and a gradient-based method is not guaranteed to find a globally optimal solution. However, our experimental results show that this formulation consistently produces superior performance.
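The paper solves these problems with MATLAB's fmincon; the sketch below is an equivalent setup using SciPy's SLSQP solver as a stand-in, based on the diagonalized form of Theorem 1 (Eqs. 17-20). It assumes the eigendecomposition (d, P) of L, the target vector a of Eq. (13), and the list of β values; function and variable names are ours.

```python
# Sketch: learn alpha by minimizing Eq. (17) under the constraints (18)-(19).
import numpy as np
from scipy.optimize import minimize

def learn_weights_kl(d, P, a, betas, lam=1e-6):
    # Column i of E holds the normalized eigenvalues D_i / trace(D_i).
    E = np.stack([np.exp(b * d) / np.exp(b * d).sum() for b in betas], axis=1)
    b = P.T @ a                                  # b = P^T a (Theorem 1)

    def objective(alpha):
        g = E @ alpha + lam                      # diagonal of G, Eq. (20)
        return np.sum(b ** 2 / g + np.log(g))    # objective of Eq. (17)

    p = len(betas)
    res = minimize(objective, np.full(p, 1.0 / p), method='SLSQP',
                   bounds=[(0.0, None)] * p,
                   constraints=({'type': 'eq', 'fun': lambda x: x.sum() - 1.0},))
    return res.x                                 # learned kernel weights
```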

Convex Formulation

The optimization problem in Theorem 1 is not convex. Previous studies [22, 23] indicate that the removal of the log determinant term in the KL divergence criterion in Eq. (12) has a limited effect on the performance. This leads to the following optimization problem:

$\min_{\alpha} \quad a^T \left( \sum_{i=1}^{p} \alpha_i \tilde{K}_i + \lambda I \right)^{-1} a$   (23)

subject to $\sum_{i=1}^{p} \alpha_i = 1,$   (24)

$\alpha \geq 0.$   (25)

Following Theorem 1, we can show that the optimization problem above can be simplified as

$\min_{\alpha} \quad \sum_{j=1}^{n} \frac{b_j^2}{g_j}$   (26)

subject to $\sum_{i=1}^{p} \alpha_i = 1, \quad \alpha \geq 0,$

where $g_j$ and b are defined as in Theorem 1.

The optimization problem in Eq. (26) is convex and thus a globally optimal solution exists. Numerical experiments indicate that the simple gradient descent algorithm converges very quickly to the optimal solution. Furthermore, the prediction performance of this convex formulation is comparable to that of the formulation proposed in Theorem 1. This convex formulation shares some similarities with the one in [26], where a set of Laplacian matrices derived from multiple networks is combined.
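As one way to realize the gradient descent mentioned above (a sketch under our assumptions, not the authors' implementation), the convex objective of Eq. (26) can be minimized by projected gradient descent over the simplex; the projection routine is the standard Euclidean simplex projection, and the step size and iteration count are illustrative.

```python
# Sketch: projected gradient descent for Eq. (26); E and b as in Theorem 1.
import numpy as np

def project_to_simplex(v):
    # Standard Euclidean projection onto {x : x >= 0, sum(x) = 1}.
    u = np.sort(v)[::-1]
    css = np.cumsum(u)
    rho = np.nonzero(u * np.arange(1, len(v) + 1) > css - 1)[0][-1]
    theta = (css[rho] - 1.0) / (rho + 1.0)
    return np.maximum(v - theta, 0.0)

def learn_weights_convex(E, b, lam=1e-6, lr=1e-3, iters=5000):
    alpha = np.full(E.shape[1], 1.0 / E.shape[1])
    for _ in range(iters):
        g = E @ alpha + lam
        grad = -E.T @ (b ** 2 / g ** 2)   # gradient of sum_j b_j^2 / g_j
        alpha = project_to_simplex(alpha - lr * grad)
    return alpha
```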

Diffusion Kernel Learning: The Multi-Task Case

It is known that proteins often perform multiple functions, which are typically related. Many existing function prediction approaches decouple multiple functions and formulate each function prediction problem as a separate binary-class classification problem. Such methods do not consider the relationship among the multiple functions of a protein and potentially compromise the overall performance.

We propose to extend our formulation for the single-task case to deal with multiple tasks simultaneously. In particular, we formulate a single optimization problem for the simultaneous prediction of multiple functions for a protein. The joint learning of multiple functions can potentially exploit the relationship among functions and improve the performance. In terms of computational complexity, the proposed joint optimization problem is shown to be comparable to that of the single-task formulation.

A key observation is that when the pre-specified diffusion kernels are computed from the same biological network with different values of β, the graph Laplacian matrices are the same for all tasks. By enforcing all tasks to share a common linear combination of kernels, we obtain the following joint optimization problem:

$\min_{\alpha} \quad \sum_{k=1}^{t} \left(a^{(k)}\right)^T \left( \sum_{i=1}^{p} \alpha_i \tilde{K}_i + \lambda I \right)^{-1} a^{(k)} + t \log \left| \sum_{i=1}^{p} \alpha_i \tilde{K}_i + \lambda I \right|$   (27)

subject to $\sum_{i=1}^{p} \alpha_i = 1,$   (28)

$\alpha \geq 0,$   (29)

where $a^{(k)} \in \mathbb{R}^n$, for k = 1, ..., t, is the vector of class labels for the k-th task as in Eq. (13), and t is the number of tasks. Note that all t tasks are related in this joint formulation by enforcing a common kernel matrix for all tasks. The objective function in Eq. (27) uses an equal weight for all tasks. If some tasks are known to be more important than others, a more general objective function with varying weights for different tasks may be used instead. Following Theorem 1, we can simplify the optimization problem in Eq. (27), as summarized in the following theorem:

Theorem 2. Given a set of p diffusion kernels, as defined in Eq. (7), the problem of optimal multi-task kernel learning, in the form of a convex combination of the given p kernels, can be formulated as the following optimization problem:

$\min_{\alpha} \quad \sum_{k=1}^{t} \sum_{j=1}^{n} \frac{b_k^2(j)}{g_j} + t \sum_{j=1}^{n} \log(g_j)$   (30)

subject to $\sum_{i=1}^{p} \alpha_i = 1,$   (31)

$\alpha \geq 0,$   (32)

where $g_j$ is defined as in Theorem 1, $b_k = P^T a^{(k)}$, $a^{(k)}$ is defined as in Eq. (13) for the k-th task, and t is the total number of tasks.

Proof. The first term in Eq. (27) can be rewritten as

$\sum_{k=1}^{t} \left(a^{(k)}\right)^T \left( \sum_{i=1}^{p} \alpha_i \tilde{K}_i + \lambda I \right)^{-1} a^{(k)} = \sum_{k=1}^{t} b_k^T \left( \sum_{i=1}^{p} \alpha_i \frac{D_i}{\operatorname{trace}(D_i)} + \lambda I \right)^{-1} b_k$
$= \sum_{k=1}^{t} b_k^T G^{-1} b_k = \sum_{k=1}^{t} \sum_{j=1}^{n} \frac{b_k^2(j)}{g_j}.$

Similarly, the second term can be rewritten as

$t \log \left| \sum_{i=1}^{p} \alpha_i \tilde{K}_i + \lambda I \right| = t \sum_{j=1}^{n} \log(g_j).$   (33)

The detailed intermediate steps of derivation are the same as those in the proof of Theorem 1 and thus are omitted. By combining these two terms together, we prove the theorem.

The optimization problem in Theorem 2 is not convex. Similar to the single-task case, the log determinant term in Eq. (27) may be removed, which leads to the following convex optimization problem:

$\min_{\alpha} \quad \sum_{k=1}^{t} \sum_{j=1}^{n} \frac{b_k^2(j)}{g_j}$   (34)

subject to $\sum_{i=1}^{p} \alpha_i = 1,$   (35)

$\alpha \geq 0.$   (36)

Experimental evidence shows that this convex optimization problem is comparable in prediction performance to the formulation in Theorem 2.
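To illustrate why the multi-task cost is essentially independent of the number of tasks t, the objective of Eq. (34) only adds a sum over the per-task vectors $b_k$. A sketch in our notation follows, where B collects the columns $P^T a^{(k)}$:

```python
# Sketch: multi-task convex objective of Eq. (34); B[:, k] = P^T a^{(k)}.
import numpy as np

def multitask_objective(alpha, E, B, lam=1e-6):
    g = E @ alpha + lam                    # shared diagonal of G for all tasks
    return np.sum(B ** 2 / g[:, None])     # sum_k sum_j b_k(j)^2 / g_j
```

The same solver used for the single-task case applies unchanged, since only the objective (and its gradient) differs.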

Results and Discussion

We evaluate the performance of the proposed formulations on two benchmark data sets and compare them with relevant methods, including the Neighbor Counting approach [4] and the FS-Weighted Averaging approach [5, 6]. We construct 60 diffusion kernels from each data set using different values of β, and the proposed formulations are applied to compute a linear combination of the pre-computed kernels. The performance of the obtained kernel is compared with that of the individual kernels. To assess the relative performance of the objective functions, we also use the 1-norm soft margin SVM criterion proposed in [17] to compute the linear combination of kernels and report the results. All of the formulations proposed in this paper are solved using the MATLAB [27] function fmincon, which employs a sequential quadratic programming method [28]. The QCQP formulation for optimizing the 1-norm soft margin SVM criterion is solved using the MOSEK [29] software package. After the kernels are computed, they are fed into an SVM for classification, using the LIBSVM [30] software package. All of the experiments are performed on a PC with an Intel Pentium D 820 2.8 GHz CPU and 2 GB RAM.

In the following experiments, a total of 60 diffusion kernels are pre-computed, using the values $\beta_i = -0.1 \times i$, for i = 1, ..., 60. In order to investigate the performance of each individual kernel, we use each kernel for classification and compute the average Receiver Operating Characteristic (ROC) value over all of the tasks. The highest average ROC value achieved by a single individual kernel is used as a baseline. It is called rBaseline, as all tasks are restricted to use the same kernel. We further relax the requirement that all tasks use the same kernel and compute the sequence of ROC values achieved by the best individual kernel for each of the tasks. This is considered another baseline, called uBaseline, as the kernel used by each task is unrestricted. Note that the kernel matrices for both rBaseline and uBaseline represent the single best candidate kernel in the ideal case where the labels of the test data are known, and their performance is not guaranteed in practice. In contrast, the kernel matrices computed by the proposed formulations are the optimal kernel matrices in the form of a linear combination of the given candidate kernel matrices. In order to evaluate the effectiveness of the weights obtained by the proposed formulations, we also assign each kernel the same weight and compute the performance of the combined kernel. This is called eBaseline, as all kernel matrices have an equal weight.
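The evaluation pipeline can be sketched as below (the paper uses LIBSVM; scikit-learn's SVC with a precomputed kernel and roc_auc_score are our stand-ins, and the index arrays train and test are assumptions for illustration):

```python
# Sketch: combine the candidate kernels with learned weights, train an SVM
# on the training vertices, and report the ROC (AUC) on the test vertices.
import numpy as np
from sklearn.svm import SVC
from sklearn.metrics import roc_auc_score

def evaluate(kernels, alpha, y, train, test):
    K = sum(w * Ki for w, Ki in zip(alpha, kernels))    # learned combination
    clf = SVC(kernel='precomputed')
    clf.fit(K[np.ix_(train, train)], y[train])
    scores = clf.decision_function(K[np.ix_(test, train)])
    return roc_auc_score(y[test], scores)
```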

For convenience of presentation, the formulations proposed in Theorem 1, Eq. (26), Theorem 2, and Eq. (34) are denoted as DKLKL, DKL, mDKLKL, and mDKL, respectively. For DKLKL and mDKLKL, we also propose removing the constraints in their optimization problems; the resulting unconstrained formulations are denoted as DKLKL^u and mDKLKL^u, respectively. (See the caption of Table 1 for a detailed description.) The method based on optimizing the 1-norm soft margin SVM criterion by solving a QCQP, proposed in [17], is denoted as SM1. The six proposed formulations are summarized in Table 1.

Table 1 Summary of the proposed formulations.

Experiments on the Ligand Data Set

The Ligand data set was derived by Vert and Kanehisa [31] from the Ligand database of chemical reactions in biological pathways [32]. It contains a graph reflecting the interactions between proteins, together with functional annotations for the proteins. The graph is a yeast biological network in which a path between vertices implies a possible series of reactions catalyzed by the proteins along it. The numbers of vertices and edges in this graph are 753 and 7860, respectively. For the functions of proteins, the functional categories of the MIPS Comprehensive Yeast Genome Database (CYGD) [33] are considered as the gold standard. These categories are not mutually exclusive, and each protein may have multiple functions. There are 36 different functions considered for this data set.

Comparison of ROC Values

We use the ROC value as the performance measure, and the λ value is fixed to 10^-6 in the experiments. Our experimental results show that the algorithms are not sensitive to the value of λ, as long as it is neither too large nor too small. Figure 1 plots the number of tasks with ROC value above a threshold for all methods. The average ROC values achieved by all methods are also summarized in Table 2. In order to test statistical significance, we also compute the p-values of the Wilcoxon signed rank test, and the results are reported in Table 3. We can observe that mDKL achieves the best performance among all methods. All the proposed formulations except mDKLKL^u outperform the three baseline methods. This implies that the computed linear combination of kernels can exploit the complementary information in different kernels and thus improve performance. The ROC value achieved by SM1 is lower than those of the three baseline methods, implying that the SVM criterion is less effective for such tasks. Note that the SM1 criterion also uses information from unlabeled data, but only in a weak form. The mDKLKL^u formulation achieves a ROC value lower than the three baseline methods. This shows that the constraints have an important normalizing effect and cannot be removed. By comparing the relative performance of the formulations with and without the log term, we can conclude that removing this term usually does not affect the performance. Another important observation is that mDKL and mDKLKL outperform DKL and DKLKL, implying that constraining the multiple tasks to share a common kernel does not degrade the performance if the kernel used is a linear combination of kernels obtained by the proposed formulations. In contrast, if the kernel used is a single kernel, this restriction does degrade the performance, as illustrated by the relative performance of rBaseline and uBaseline. For the eBaseline method, it can be observed that all of the proposed formulations except mDKLKL^u outperform it. This illustrates that our formulations compute an improved kernel matrix by assigning different weights to the candidate kernel matrices. We can observe from Table 3 that the differences between the performance of the two baseline methods (rBaseline and eBaseline) and that of DKL and mDKL are statistically significant. All diffusion kernel based approaches are competitive with the Neighbor Counting approach [4] and the FS-Weighted Averaging approach [5, 6].
Neighbor Counting and FS-Weighted Averaging use local information for prediction, specifically the level-1 neighborhood (Neighbor Counting) and both the level-1 and level-2 neighborhoods (FS-Weighted Averaging). The experimental results demonstrate the effectiveness of diffusion kernels in capturing long-range relationships (global information) between proteins in the network [15].
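The candidate diffusion kernels themselves can be built directly from the network. The sketch below (ours, following the standard construction of Kondor and Lafferty [15]; the grid of β values is arbitrary and only stands in for the pre-specified candidates) also makes explicit that all candidates share the same eigenvectors, which is the structure the proposed formulations exploit.

```python
# Hedged sketch: a family of candidate diffusion kernels K_beta = expm(-beta * L)
# built from the combinatorial graph Laplacian L = D - A.
import numpy as np
from scipy.linalg import expm

def laplacian(A):
    """Combinatorial graph Laplacian L = D - A for a dense adjacency matrix A."""
    return np.diag(A.sum(axis=1)) - A

def diffusion_kernels(A, betas):
    """One diffusion kernel per beta; larger beta diffuses label information
    farther along the network (long-range / global similarity)."""
    L = laplacian(A)
    return [expm(-beta * L) for beta in betas]

def diffusion_kernels_eig(A, betas):
    """Equivalent construction via one eigendecomposition of L: every candidate
    shares the eigenvectors U and differs only in exp(-beta * eigenvalues)."""
    L = laplacian(A)
    evals, U = np.linalg.eigh(L)
    return [(U * np.exp(-beta * evals)) @ U.T for beta in betas]

# Example usage with the adjacency matrix from the earlier sketch; the beta grid
# is an assumption, chosen only to mimic a set of 60 candidate kernels.
# kernels = diffusion_kernels_eig(A.toarray(), betas=np.linspace(0.1, 6.0, 60))
```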

Table 2 Mean ROC values and execution time (in seconds) of various methods on the Ligand Data Set.
Table 3 p-values obtained from the Wilcoxon signed-rank test comparing DKL and mDKL with the other formulations on the Ligand data set.
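To make the evaluation protocol behind Tables 2 and 3 concrete, here is a minimal sketch (ours, not the authors' code) of a per-task ROC computation and the paired Wilcoxon signed-rank comparison; the score matrix, label matrix, and test index set are assumed inputs.

```python
# Hedged sketch of the evaluation protocol: per-task ROC plus a paired
# Wilcoxon signed-rank test. `scores`, `Y`, and `test_idx` are assumptions.
import numpy as np
from scipy.stats import wilcoxon
from sklearn.metrics import roc_auc_score

def per_task_roc(scores, Y, test_idx):
    """ROC AUC on the test proteins for each function (task).

    scores: (n_proteins, n_tasks) predicted scores.
    Y:      (n_proteins, n_tasks) labels, +1 annotated / -1 otherwise.
    """
    aucs = []
    for t in range(Y.shape[1]):
        y_true = (Y[test_idx, t] > 0).astype(int)
        if y_true.min() == y_true.max():          # skip tasks with one class only
            continue
        aucs.append(roc_auc_score(y_true, scores[test_idx, t]))
    return np.array(aucs)

# Paired per-task comparison of two methods (same tasks, same test split):
# roc_a = per_task_roc(scores_method_a, Y, test_idx)
# roc_b = per_task_roc(scores_method_b, Y, test_idx)
# statistic, p_value = wilcoxon(roc_a, roc_b)
```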
Figure 1

Comparison of ROC values for various algorithms on the Ligand Data Set. The horizontal axis represents the ROC values and the vertical axis is the number of tasks with ROC values above the corresponding horizontal axis value.

Figure 2 plots the average ROC value for each of the 60 candidate kernels (the kernel with the maximum mean ROC value is used in rBaseline), and Figure 3 plots the best ROC value achieved for each of the 36 tasks. For tasks 29 and 33, the best ROC values are small, implying that all kernels perform poorly on these two tasks. To compare the proposed formulations with the baseline method graphically, Figure 4 shows scatter plots of the ROC values obtained by the proposed formulations against those of uBaseline. In each plot there are two points below the 45-degree line; they correspond to tasks 29 and 33, which are difficult for all methods. Since most points lie above the 45-degree line, we conclude that the proposed formulations outperform uBaseline on most tasks.

Figure 2

Mean ROC values over 36 tasks for each kernel on the Ligand Data Set (the kernel with the maximum mean ROC value is used in rBaseline). The horizontal axis denotes the -β values used to build the corresponding kernel and the vertical axis is the mean ROC value.

Figure 3

Best ROC values for tasks achieved by the best kernel (uBaseline) on the Ligand Data Set. The horizontal axis represents the tasks and the vertical axis is the corresponding best ROC value.

Figure 4

Comparison of the relative performance of the proposed formulations with that of uBaseline on the Ligand Data Set. The horizontal axis represents uBaseline and the vertical axis corresponds to DKL, DKLKL, mDKL, mDKLKL. Each point in the scatter plots corresponds to ROC values produced by the compared methods on the same task.

Comparison of Execution Time

To compare the efficiency of the various kernel learning methods, we list their execution times in Table 2. All methods based on multiple tasks are more efficient than their single-task counterparts. In particular, the execution time of mDKL is roughly 1/36 of that of DKL, which is consistent with our theoretical analysis. In general, the convex formulations are more efficient than their original non-convex formulations, and the optimization problems with the constraints removed take longer to converge. Taking performance into account as well, DKL and mDKL may be the best choices in practice.
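To make the roughly 1/36 factor tangible, the toy timing sketch below (ours; an eigendecomposition merely stands in for the per-task kernel-learning subproblem, it is not the authors' solver) contrasts repeating a shared computation once per task with performing it once for all tasks.

```python
# Toy illustration of the single-task vs. multi-task cost gap: the expensive
# shared work is done T times in the single-task style, once in the multi-task
# style. The eigendecomposition is only a stand-in for the real subproblem.
import time
import numpy as np

rng = np.random.default_rng(0)
n, T = 753, 36
M = rng.standard_normal((n, n))
M = M + M.T                                # symmetric stand-in for a kernel matrix

t0 = time.perf_counter()
for _ in range(T):                         # single-task style: repeat per task
    np.linalg.eigh(M)
single_task = time.perf_counter() - t0

t0 = time.perf_counter()
np.linalg.eigh(M)                          # multi-task style: share across tasks
multi_task = time.perf_counter() - t0

print(f"per-task / shared time ratio ~ {single_task / multi_task:.1f} (about T = {T})")
```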

Stability Test

To obtain a robust performance estimate for the various methods, we randomly partition the data set into a training set and a test set ten times; the average ROC values and standard deviations across splittings are reported in Table 4. Compared with the results in Table 2, the relative performance of each method is very similar in the two tables. In particular, mDKL and mDKLKL achieve the best overall performance. Except for the two unconstrained formulations DKL_KL^u and mDKL_KL^u, all other proposed formulations achieve higher ROC values than the three baseline methods. It is worth noting that the performance of uBaseline and rBaseline is obtained by using the labels of both the training and the test data; such performance is not guaranteed in practice, where only the labels of the training data are available.
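A minimal sketch of this splitting protocol follows (ours; the train/test ratio and the `evaluate` callable are assumptions, since the text does not specify the ratio or expose the methods' code).

```python
# Hedged sketch of the stability test: repeated random train/test splits with
# the mean and standard deviation of the per-split average ROC reported.
import numpy as np

def stability_test(evaluate, K, Y, n_splits=10, train_frac=0.5, seed=0):
    """evaluate(K, Y, train_idx, test_idx) -> mean ROC over tasks for one split
    (hypothetical callable standing in for any of the compared methods)."""
    rng = np.random.default_rng(seed)
    n = Y.shape[0]
    mean_rocs = []
    for _ in range(n_splits):
        perm = rng.permutation(n)
        n_train = int(train_frac * n)        # split ratio assumed, not given in the text
        train_idx, test_idx = perm[:n_train], perm[n_train:]
        mean_rocs.append(evaluate(K, Y, train_idx, test_idx))
    return float(np.mean(mean_rocs)), float(np.std(mean_rocs))
```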

Table 4 Average ROC values and the corresponding standard deviations over 11 splittings on the Ligand Data Set. One of the splittings was specified by the contributor of the data and the remaining ten splittings are randomly generated.

Experiments on the von Mering Data Set

The von Mering data set was created by von Mering et al. [34] from protein-protein interactions identified by six different methods. It contains a graph with 2617 vertices (proteins) and 11855 edges, and there are 76 different functions (tasks) associated with the proteins. The performance of the different methods is reported in Figure 5. Two baseline methods, rBaseline and uBaseline, constructed in exactly the same way as for the Ligand data set, are used; their performance is summarized in Figure 6 and Figure 7, respectively. The value of λ is again set to 10^-6 in the experiments. Figure 8 compares the performance of the proposed formulations with that of uBaseline graphically.

Figure 5

Comparison of ROC values for various algorithms on the von Mering Data Set. The horizontal axis represents the ROC values and the vertical axis is the number of tasks with ROC values above the corresponding horizontal axis value.

Figure 6

Mean ROC values over 76 tasks for each kernel on the von Mering Data Set. The horizontal axis denotes the -β values used to build the corresponding kernel and the vertical axis is the mean ROC value.

Figure 7

Best ROC values for different tasks achieved by different kernels on the von Mering Data Set. The horizontal axis represents the tasks and the vertical axis is the corresponding best ROC value.

Figure 8

Comparison of the relative performance of the proposed formulations with that of uBaseline on the von Mering Data Set. The horizontal axis represents uBaseline and the vertical axis corresponds to DKL, DKLKL, mDKL, mDKLKL. Each point in the scatter plots corresponds to ROC values produced by the compared methods on the same task.

Comparison of ROC Values

We use the ROC values of each method to compare their relative performance. As in Figure 1 for the Ligand data set, Figure 5 plots, for each compared method, the number of tasks whose ROC value exceeds a threshold, as the threshold varies. For ease of comparison, Table 5 lists the average ROC values achieved by the compared methods, and the p-values of the Wilcoxon signed-rank test for this data set are reported in Table 6. As the SM1 formulation requires excessive storage and computation time on this relatively large data set, we were not able to obtain its results in this experiment. From these results we observe that mDKL and mDKLKL achieve the best performance, and that the performance of DKL, DKLKL, mDKL, and mDKLKL is in general very close. All of the proposed formulations except DKL_KL^u perform better than the three baseline methods. The differences between DKL and DKLKL and between mDKL and mDKLKL are very small, which further confirms that removing the log term does not affect the performance of the algorithm much. The formulations with the constraints removed, i.e., DKL_KL^u and mDKL_KL^u, perform the worst among the proposed formulations. As for the Ligand data set, we conclude that constraining the multiple tasks to share a common kernel does not degrade performance when the kernel is a linear combination of kernels learned by the proposed formulations, whereas with a single kernel this restriction does degrade performance, as illustrated by the relative performance of rBaseline and uBaseline. With respect to eBaseline, Table 5 shows that all of our proposed formulations achieve higher ROC values than eBaseline, in which all kernel matrices are assigned the same weight. Table 6 shows that the differences between the performance of all three baselines and that of DKL and mDKL are statistically significant. We again observe that all diffusion kernel based approaches are competitive with the Neighbor Counting approach and the FS-Weighted Averaging approach.

Table 5 Mean ROC values and execution time (in seconds) of various methods on the von Mering Data Set.
Table 6 p-values obtained from the Wilcoxon signed-rank test comparing DKL and mDKL with the other formulations on the von Mering data set.

Figure 8 presents the scatter plots of the four proposed formulations against uBaseline. Most points lie above the 45-degree line, which implies that the linear combination of kernels is better than the ideally best individual kernel. In general, the performance of DKL, DKLKL, mDKL, and mDKLKL is better than that of uBaseline, which is also confirmed by the mean ROC values listed in Table 5.

Comparison of Execution Time

Table 5 also lists the execution times of the various kernel learning methods, and the conclusions drawn for the Ligand data set hold here as well: all methods based on multiple tasks are more efficient than their single-task counterparts. Comparing Table 2 and Table 5, we also observe that as the number of tasks increases, the time difference between the multi-task and single-task methods increases too. The formulations based on multiple tasks are therefore preferred when the number of tasks is large.

Stability Test

As for the Ligand data set, we generate ten random splittings of the data into training and test sets and report the average ROC values and standard deviations in Table 7. Comparing with the results in Table 5, the relative performance of each method is similar in both tables, and all of the proposed formulations outperform eBaseline.

Table 7 Average ROC values and the corresponding standard deviations over 11 splittings on the von Mering Data Set. One of the splittings was specified by the contributor of the data and the remaining ten splittings are randomly generated.

Conclusion

In this paper, we address the problem of learning an optimal diffusion kernel based on the KL divergence criterion for protein function prediction. By exploiting the special structure of the diffusion kernel, we show that this KL divergence based kernel learning problem can be formulated as a simple optimization problem that can be solved efficiently. We also extend the formulation to the multi-task setting, in which multiple functions of a protein are predicted simultaneously.

We have conducted experiments on two benchmark data sets. Our results show that the linearly combined diffusion kernel performs better than every single candidate diffusion kernel. The results also show that removing the log term in the KL divergence criterion does not degrade recognition performance, while it reduces the computational cost. When the number of tasks is large, the algorithms based on multiple tasks are favored due to their competitive recognition performance and low computational cost. One possible extension is to incorporate the learning of the regularization parameter into the proposed formulations, as in [17]. A difference between the proposed learning framework and that of [17] is that our formulations require the eigenvectors of the candidate kernel matrices to be the same; the proposed formulations therefore cannot be applied directly to heterogeneous data integration. We plan to apply the proposed algorithms to the analysis of other graph-based biological data.
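The shared-eigenvector requirement follows directly from the construction of the candidate diffusion kernels; a short derivation in our notation, with L = UΛU^T the eigendecomposition of the graph Laplacian, is:

```latex
\begin{align*}
K_{\beta_i} &= \exp(-\beta_i L) \;=\; U \exp(-\beta_i \Lambda)\, U^{T},\\
\sum_i \alpha_i K_{\beta_i} &= U \Big(\sum_i \alpha_i \exp(-\beta_i \Lambda)\Big) U^{T},
\qquad \alpha_i \ge 0 .
\end{align*}
```

Every candidate kernel, and any convex combination of them, is therefore diagonalized by the same matrix U, whereas kernels derived from different networks have different Laplacians and, in general, different eigenvectors.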

References

  1. Pandey G, Kumar V, Steinbach M: Computational Approaches for Protein Function Prediction: A Survey. Tech Rep TR 06–028, Department of Computer Science and Engineering, University of Minnesota, Twin Cities, MN; 2006.
  2. Schölkopf B, Tsuda K, Vert JP (Eds): Kernel Methods in Computational Biology. Cambridge, MA: MIT Press; 2004.
  3. Hishigaki H, Nakai K, Ono T, Tanigami A, Takagi T: Assessment of prediction accuracy of protein function from protein-protein interaction data. Yeast 2001, 18: 523–531. doi:10.1002/yea.706
  4. Schwikowski B, Uetz P, Fields S: A network of protein-protein interactions in yeast. Nature Biotechnology 2000, 18: 1257–1261. doi:10.1038/82360
  5. Chua HN, Sung WK, Wong L: Exploiting Indirect Neighbours and Topological Weight to Predict Protein Function from Protein-Protein Interactions. Bioinformatics 2006, 22: 1623–1630. doi:10.1093/bioinformatics/btl145
  6. Chua HN, Sung WK, Wong L: Using Indirect Protein Interactions for the Prediction of Gene Ontology Functions. BMC Bioinformatics 2007, 8: S8. doi:10.1186/1471-2105-8-S4-S8
  7. Nabieva E, Jim K, Agarwal A, Chazelle B, Singh M: Whole-proteome prediction of protein function via graph-theoretic analysis of interaction maps. Bioinformatics 2005, 21: 302–310. doi:10.1093/bioinformatics/bti1054
  8. Weston J, Elisseeff A, Zhou D, Leslie CS, Noble WS: Protein ranking: From local to global structure in the protein similarity network. Proc Natl Acad Sci 2004, 101: 6559–6563. doi:10.1073/pnas.0308067101
  9. Vazquez A, Flammini A, Maritan A: Global protein function prediction from protein-protein interaction networks. Nature Biotechnology 2003, 21: 697–700. doi:10.1038/nbt825
  10. Karaoz U, Murali TM, Letovsky S, Zheng Y, Ding C, Cantor CR, Kasif S: Whole-genome annotation by using evidence integration in functional-linkage networks. Proc Natl Acad Sci 2004, 101: 2888–2893. doi:10.1073/pnas.0307326101
  11. Ben-Hur A, Noble WS: Kernel methods for predicting protein-protein interactions. Bioinformatics 2005, 21(Suppl 1): i38–i46. doi:10.1093/bioinformatics/bti1016
  12. Roth V, Fischer B: Improved functional prediction of proteins by learning kernel combinations in multilabel settings. BMC Bioinformatics 2007, 8: S12. doi:10.1186/1471-2105-8-S2-S12
  13. Tsuda K, Noble WS: Learning kernels from biological networks by maximizing entropy. Bioinformatics 2004, 20: 326–333. doi:10.1093/bioinformatics/bth906
  14. Schölkopf B, Smola AJ: Learning with Kernels: Support Vector Machines, Regularization, Optimization, and Beyond. Cambridge, MA: MIT Press; 2002.
  15. Kondor RI, Lafferty JD: Diffusion Kernels on Graphs and Other Discrete Structures. ICML 2002, 315–322.
  16. Boyd S, Vandenberghe L: Convex Optimization. Cambridge: Cambridge University Press; 2004.
  17. Lanckriet G, Cristianini N, Bartlett P, Ghaoui LE, Jordan MI: Learning the Kernel Matrix with Semidefinite Programming. Journal of Machine Learning Research 2004, 5: 27–72.
  18. Lanckriet G, Bie TD, Cristianini N, Jordan M, Noble W: A statistical framework for genomic data fusion. Bioinformatics 2004, 20: 2626–2635. doi:10.1093/bioinformatics/bth294
  19. Kullback S, Leibler RA: On Information and Sufficiency. Annals of Mathematical Statistics 1951, 22: 79–86. doi:10.1214/aoms/1177729694
  20. Lawrence ND, Sanguinetti G: Matching kernels through Kullback-Leibler divergence minimisation. Technical Report CS-04–12, Department of Computer Science, The University of Sheffield; 2004.
  21. Vandenberghe L, Boyd S, Wu S: Determinant Maximization with Linear Matrix Inequality Constraints. SIAM Journal on Matrix Analysis and Applications 1998, 19: 499–533. doi:10.1137/S0895479896303430
  22. Smola AJ, Bartlett PL: Sparse greedy Gaussian process regression. NIPS 2001, 619–625.
  23. Smola AJ, Schölkopf B: Sparse greedy matrix approximation for machine learning. ICML 2000, 911–918.
  24. Zhou D, Bousquet O, Lal TN, Weston J, Schölkopf B: Learning with Local and Global Consistency. NIPS 2004, 321–328.
  25. Golub GH, Van Loan CF: Matrix Computations. 3rd edition. Baltimore, MD: The Johns Hopkins University Press; 1996.
  26. Tsuda K, Shin H, Schölkopf B: Fast protein classification with multiple networks. Bioinformatics 2005, 21: 59–65. doi:10.1093/bioinformatics/bti1110
  27. The Matlab Package. http://www.mathworks.com
  28. Nocedal J, Wright S: Numerical Optimization. 2nd edition. New York: Springer; 2006.
  29. The MOSEK Package. http://www.mosek.com
  30. Chang CC, Lin CJ: LIBSVM: a library for support vector machines. 2001. http://www.csie.ntu.edu.tw/~cjlin/libsvm
  31. Vert JP, Kanehisa M: Graph-Driven Feature Extraction From Microarray Data Using Diffusion Kernels and Kernel CCA. NIPS 2003, 1425–1432.
  32. The Ligand data set. http://www.genome.ad.jp/ligand/
  33. The MIPS Comprehensive Yeast Genome Database. http://mips.gsf.de/genre/proj/yeast/
  34. von Mering C, Krause R, Snel B, Cornell M, Oliver S, Fields S, Bork P: Comparative assessment of large-scale data sets of protein-protein interactions. Nature 2002, 417(6887): 399–403. doi:10.1038/nature750

Acknowledgements

This research is sponsored in part by the Arizona State University and by the National Science Foundation under Grant No. IIS-0612069.

Author information

Corresponding author

Correspondence to Jieping Ye.

Authors' contributions

LS designed the methodology, implemented programs, and participated in manuscript preparation. SJ derived the KL divergence formulation, and drafted the manuscript. JY originally conceived the project, guided the implementation, and drafted the manuscript. All authors have read and approved the final manuscript.

Rights and permissions

Open Access This article is published under license to BioMed Central Ltd. This is an Open Access article distributed under the terms of the Creative Commons Attribution License (https://creativecommons.org/licenses/by/2.0), which permits unrestricted use, distribution, and reproduction in any medium, provided the original work is properly cited.

Cite this article

Sun, L., Ji, S. & Ye, J. Adaptive diffusion kernel learning from biological networks for protein function prediction. BMC Bioinformatics 9, 162 (2008). https://doi.org/10.1186/1471-2105-9-162
