Drug-target interactions prediction using marginalized denoising model on heterogeneous networks

Background Drugs achieve their pharmacological functions by acting on target proteins. Identifying interactions between drugs and target proteins is an essential task in both old drug repositioning and new drug discovery. Computational approaches are commonly adopted to recommend new drug candidates and reposition existing drugs. Compared with wet-lab experiments, computational approaches cost less and provide effective guidance for subsequent experimental verification. However, integrating different types of biological data and handling the sparsity of drug-target interaction data remain great challenges. Results In this paper, we propose a novel drug-target interactions (DTIs) prediction method incorporating a marginalized denoising model on heterogeneous networks with an association index kernel matrix and latent global associations. The experimental results on benchmark datasets and newly compiled datasets indicate that, compared with existing methods, our method achieves higher scores of AUC (area under the receiver operating characteristic curve) and larger values of AUPR (area under the precision-recall curve). Conclusions The performance improvement of our method comes from the association index kernel matrix and the latent global associations. The association index kernel matrix captures the sharing relationships between drugs and between targets. The latent global associations address the false positive issue caused by network link sparsity. Our method provides a useful approach to recommend new drug candidates and reposition existing drugs.


Background
Identifying drug-target interactions (DTIs) is a critical task in drug discovery and drug repositioning. Although high-throughput screening and other biological assays are becoming available, experimental methods for DTIs identification remain extremely costly. Various computational methods for predicting potential DTIs have been proposed in the past decade [1][2][3][4][5].
In 2008, Yamanishi et al. [6] proposed a bipartite network model integrating the chemical and genomic spaces to predict DTIs for four classes of target proteins: enzymes, ion channels (IC), G protein-coupled receptors (GPCR), and nuclear receptors (NR). Based on these four datasets, several methods to improve the accuracy of DTIs prediction were proposed. In early studies, the DTIs prediction problem was treated as a binary classification problem, and classical classifiers such as support vector machines (SVM) and regularized least squares (RLS) were used to predict drug-target interactions. A supervised bipartite local model (BLM) using an SVM classifier was proposed to predict drug and target sets respectively [7]. To address the problem of selecting negative samples, a semi-supervised learning method called Laplacian Regularized Least Squares (LapRLS) was proposed [8]. To analyze the relevance between network topological information and DTIs prediction, a Gaussian interaction profile (GIP) kernel was defined to capture the topological information in the DTIs network, and an RLS classifier was employed with the GIP kernel to predict DTIs [9]. The methods mentioned above focus on existing drug-target interaction pairs and mainly deal with the old drug repositioning problem.
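The GIP kernel mentioned above can be illustrated with a short sketch. The following Python snippet (the function name is ours, and the normalization of the bandwidth γ follows the common convention in the GIP literature) computes the kernel from the rows of a binary interaction matrix:

```python
import numpy as np

def gip_kernel(Y, gamma_tilde=1.0):
    """Gaussian interaction profile (GIP) kernel over the rows of a binary
    interaction matrix Y (items x partners):
    K[i, j] = exp(-gamma * ||y_i - y_j||^2), where gamma is gamma_tilde
    divided by the mean squared norm of the interaction profiles."""
    gamma = gamma_tilde / np.mean(np.sum(Y ** 2, axis=1))
    sq_norms = np.sum(Y ** 2, axis=1)
    # Squared Euclidean distances between all pairs of interaction profiles.
    dists = sq_norms[:, None] + sq_norms[None, :] - 2 * Y @ Y.T
    return np.exp(-gamma * dists)

# Toy interaction matrix: rows 0 and 1 share an identical profile.
Y = np.array([[1, 0, 1, 0],
              [1, 0, 1, 0],
              [0, 1, 0, 0]], dtype=float)
K = gip_kernel(Y)
```

Identical interaction profiles yield a kernel value of 1, and the value decays with the Euclidean distance between profiles, which is how the kernel encodes network topology.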
To predict new drugs or targets, a bipartite local model with neighbor-based interaction profile inferring (BLM-NII) was proposed [10]. A weighted nearest neighbor (WNN) profile and the GIP kernel were incorporated to handle new drug compounds [11]. A model robust against the overfitting problem of traditional statistical methods was proposed based on the Random Forest (RF) method [12]. Matrix factorization (MF) is a feature extraction method widely used in recommender systems [13]. MF methods were used to identify latent features of drugs and targets to handle the new drug discovery problem [14][15][16][17][18][19]. Zheng et al. [15] used collaborative matrix factorization (CMF) to predict potential DTIs. Liu et al. [17] used logistic matrix factorization and the neighborhood information of drugs and targets to predict DTIs. There are also methods that form the final kernel matrix as a linear combination of two or more kernel matrices [7-9, 15, 17]. Hao et al. [18] combined different kernel matrices with nonlinear kernel diffusion and employed the diffused kernel matrix with an RLS classifier to predict DTIs. Hao et al. [19] integrated logistic matrix factorization and kernel diffusion to improve the accuracy of DTIs prediction. The model based on the diffused kernel matrix outperforms the model based on the linearly weighted kernel matrix for DTIs prediction.
To predict more realistic drug-target interactions, some researchers used drug-target binding affinity. Binding affinity indicates the strength of the interaction between a drug-target pair and is usually measured by the dissociation constant (K_d), the inhibition constant (K_i), or the half maximal inhibitory concentration (IC50). Kronecker_rls [9,20] is a method that was used to predict drug-target binding affinity [21]. For more accurate prediction on continuous drug-target binding affinity data, a non-linear method called SimBoost was proposed, using gradient boosting regression trees as the learning model [22].
With the rapid development of deep learning, several deep learning frameworks have been applied in the field of drug discovery [23][24][25][26][27]. A stacked auto-encoder was used to construct deep representations of drug-target pairs [24]. Hu et al. [25] used a convolutional neural network (CNN) to predict DTIs. A new compound-protein interaction (CPIs) prediction approach was developed by combining a graph neural network (GNN) for compounds with a CNN for proteins [26]. Based on deep neural networks (DNN), Tian et al. [27] proposed a method called DL_CPI to predict large-scale compound-protein interactions. Deep learning methods have advantages in dealing with the growing volume of compound data, but analyzing deep learning models is difficult due to their black-box nature, and more effective models are needed to improve the accuracy of DTIs prediction.
From the perspective of networks, the DTIs prediction problem can be treated as a network link prediction problem [28]. Chen et al. [29] developed a model of network-based random walk with restart on heterogeneous networks (NRWRH) to predict potential DTIs. Lan et al. [30] used random walk with restart, k nearest neighbors (kNN), and heat kernel diffusion models to label unknown DTIs and predict potential DTIs. Recently, Chen et al. used a marginalized denoising model (MDM) to predict hidden or missing links in a given relational matrix by transforming the network link prediction problem into a matrix denoising problem [31]. The MDM-based method predicts new protein-protein interactions in the PPI network better than MF-based methods, but it has not previously been applied to heterogeneous networks such as drug-target interaction networks.
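As background for the denoising view adopted below, a minimal sketch of the closed-form marginalized denoising mapping (in the style of Chen et al.'s marginalized denoising autoencoder) is given here; the omission of the bias term and the small ridge term are simplifying choices of ours, not details confirmed by the source:

```python
import numpy as np

def mda_mapping(X, noise):
    """Closed-form mapping W of a marginalized denoising autoencoder:
    W minimizes the expected reconstruction error E||X - W X_corrupted||^2
    under feature dropout with probability `noise`, marginalized in closed
    form instead of sampling corrupted copies. X is (features x samples)."""
    d = X.shape[0]
    q = np.full(d, 1.0 - noise)          # per-feature survival probability
    S = X @ X.T                          # scatter matrix
    Q = S * np.outer(q, q)
    np.fill_diagonal(Q, np.diag(S) * q)  # a feature always co-occurs with itself
    P = S * q[None, :]                   # E[X X_corrupted^T]
    # Small ridge for numerical stability (an implementation choice here).
    return P @ np.linalg.inv(Q + 1e-6 * np.eye(d))

rng = np.random.default_rng(0)
X = rng.standard_normal((5, 200))
W = mda_mapping(X, noise=0.65)
```

With noise = 0, the mapping reduces to (approximately) the identity, since no corruption needs to be undone; larger noise values force the mapping to reconstruct features from their correlated neighbors, which is the mechanism the MDM exploits for link prediction.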
To further improve the accuracy of DTIs prediction, this paper proposes an integrated method using a marginalized denoising model on heterogeneous networks, an association index, and kernel fusion. We transform the DTIs prediction problem into a noise reduction problem on heterogeneous networks. The heterogeneous network is constructed by combining the drug and target kernel matrices with the existing DTIs network. To construct the kernel matrices, we introduce the association index kernel matrix to measure the sharing interaction relationship between drugs and between targets, derived from the common targets shared by drugs and the common drugs shared by targets. Furthermore, we not only use the association information of the nearest neighbors to perform DTIs prediction, but also incorporate the global associations between drugs and targets to reduce the sparsity of the DTIs network and improve prediction accuracy.
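The sharing interaction relationship described above can be captured by the Jaccard index over interaction profiles. The sketch below is our illustrative implementation, not the authors' code: it builds an association index kernel for drugs from the rows of Y and for targets from its columns.

```python
import numpy as np

def jaccard_kernel(Y):
    """Association index kernel over the rows of a binary interaction
    matrix Y: entry (i, j) is the Jaccard index of the interaction profiles
    of items i and j, i.e. shared partners / total distinct partners."""
    inter = Y @ Y.T                          # |profile_i AND profile_j|
    row_sums = Y.sum(axis=1)
    union = row_sums[:, None] + row_sums[None, :] - inter
    with np.errstate(divide="ignore", invalid="ignore"):
        return np.where(union > 0, inter / union, 0.0)

# Toy DTIs matrix: 3 drugs (rows) x 3 targets (columns).
Y = np.array([[1, 1, 0],
              [1, 0, 0],
              [0, 0, 1]], dtype=float)
J_drugs = jaccard_kernel(Y)        # drug-drug sharing relationship
J_targets = jaccard_kernel(Y.T)    # target-target sharing relationship
```

Here drugs 0 and 1 share one of two distinct targets, giving a Jaccard index of 0.5, while drugs with no common targets get 0; the same computation on the transpose gives the target-side kernel.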
The rest of this paper is organized as follows. The experimental results are reported in section 2. The discussion of the experimental results is given in section 3. The conclusion is given in section 4. The sources of the selected benchmark datasets and the newly compiled datasets, the construction of the drug similarity and target similarity matrices, the MDM model, and our proposed prediction method are described in section 5.

Results
Similar to previous studies [11,15,17,19], we conducted the experiments with five trials of 10-fold cross-validation (CV). We employed the area under the receiver operating characteristic curve (AUC) and the area under the precision-recall curve (AUPR) as evaluation metrics. To validate our prediction method in drug repositioning, completely new drug discovery, and completely new target discovery, we conducted cross-validation under the following three settings: (1) CVP (cross-validation based on drug-target interaction pairs): 90% of the drug-target interaction pairs in the drug-target interaction network Y were randomly selected as training data, and the remaining 10% were used as testing data. The CVP setting verifies the performance of the prediction method in drug repositioning. (2) CVD (cross-validation based on drugs): 90% of the rows (drugs) in Y were randomly selected as training data, and the remaining 10% were used as testing data. The CVD setting verifies the performance of the prediction method in new drug discovery. (3) CVT (cross-validation based on targets): 90% of the columns (targets) in Y were randomly selected as training data, and the remaining 10% were used as testing data. The CVT setting verifies the performance of the prediction method in new target discovery.
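The three settings can be sketched as index masks over Y; the code below is a schematic illustration (the function name and fold mechanics are ours, not the authors' experimental harness):

```python
import numpy as np

def cv_masks(Y, setting, fold, n_folds=10, seed=0):
    """Boolean test mask over Y for one fold of the three CV settings:
    CVP hides 1/n_folds of the entries (pairs), CVD hides 1/n_folds of the
    rows (drugs), CVT hides 1/n_folds of the columns (targets)."""
    rng = np.random.default_rng(seed)
    mask = np.zeros(Y.shape, dtype=bool)
    if setting == "CVP":
        idx = rng.permutation(Y.size)
        mask.flat[np.array_split(idx, n_folds)[fold]] = True
    elif setting == "CVD":
        rows = rng.permutation(Y.shape[0])
        mask[np.array_split(rows, n_folds)[fold], :] = True
    elif setting == "CVT":
        cols = rng.permutation(Y.shape[1])
        mask[:, np.array_split(cols, n_folds)[fold]] = True
    return mask

Y = np.zeros((50, 40))                 # toy interaction matrix
m = cv_masks(Y, "CVD", fold=0)         # one CVD fold: 5 whole rows held out
```

Entries where the mask is True are hidden during training and scored at test time; over all folds the masks partition Y (CVP), its rows (CVD), or its columns (CVT) exactly once.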
We evaluated our method DTIP_MDHN and four existing DTIs prediction methods, BLM-NII [10], RLS-WNN [11], NRLMF [17], and DNILMF [19], on the benchmark datasets and the new dataset 1. BLM-NII integrates the BLM model with a neighbor-based interaction profile to handle the new drug/target problem. RLS-WNN is a GIP-based prediction method with a weighted nearest neighbor profile for predicting new drug compounds. NRLMF and DNILMF are two MF-based prediction methods. We implemented the DTIP_MDHN algorithm in MATLAB. The experiments were conducted at the high-performance computing center of Guangxi University. We first evaluated our method DTIP_MDHN and the other four methods BLM-NII, RLS-WNN, NRLMF, and DNILMF in terms of AUC and AUPR on the benchmark data. To verify the performance of the prediction methods in drug repositioning, we conducted the experiment under the CVP setting. The experimental results are shown in Table 1. In addition, to verify the performance of the prediction methods in new drug/target discovery, we conducted the experiments under the CVD and CVT settings. The experimental results are shown in Tables 2 and 3, respectively. Tables 1, 2 and 3 show that, compared with the other four methods, our method DTIP_MDHN achieves better AUC results on the benchmark datasets under the CVP and CVD settings, and obtains higher scores on the IC, GPCR, and NR datasets under the CVT setting. For the AUPR metric, DTIP_MDHN outperforms the other four methods on all datasets under the CVP and CVD settings, and achieves higher scores on the IC, GPCR, and NR datasets under the CVT setting. The GPCR and NR datasets are sparser than the Enzyme and IC datasets, so the prediction accuracy on GPCR and NR has always been lower in previous studies. In our method DTIP_MDHN, the Jaccard index is introduced to measure the sharing interaction relationships between drugs and between targets, and indirect interactions are introduced through global associations to solve the data sparsity problem.
The experimental results indicate that our method improves prediction accuracy and is particularly suitable for predicting DTIs on sparser datasets such as GPCR and NR under the CVT setting. Constructing the final kernel matrices of drugs and targets is a key step in predicting latent DTIs. To compare the effects of different final kernel matrices on the DTIs prediction results, we evaluated our constructed final kernel matrices KFJD and KFJT against two other final kernel matrices, from GIP [9] and DNILMF [19], in terms of AUC and AUPR on the benchmark data. In Table 4, we denote the final kernel matrices of drugs and targets constructed from GIP [9] as KGD and KGT, respectively, and those constructed from DNILMF [19] as KFD and KFT, respectively. Table 4 shows the AUC and AUPR scores of DTIP_MDHN using these kernel matrices under the CVP setting.
The experimental results in Table 4 indicate that our constructed final kernel matrices of drugs and targets, KFJD and KFJT, indeed lead to more accurate predictions in our method DTIP_MDHN than the final kernel matrices from GIP [9] and DNILMF [19].
Next, we evaluated our proposed prediction model against other machine learning models, including the supervised learning models SVM and RF and a matrix factorization (MF) model. We used our constructed final kernel matrices KFJD/KFJT as the features of drug-target pairs and the drug-target interaction matrix Y as the classification labels for the supervised learning models. We used BLM [7] as the SVM-based method, DDR [12] as the RF-based method, and DNILMF [19] as the MF-based method. The AUPR and AUC scores were calculated under the CVP setting. Table 5 shows the AUC and AUPR scores for the four prediction models. The experimental results in Table 5 indicate that our proposed prediction model achieves higher AUC and AUPR scores than the SVM, RF, and MF models in DTIs prediction.
In addition to the final kernel matrices of drugs and targets, there are two key parameters in DTIP_MDHN: the noise value (noise) and the dimension of the latent layer (k). We evaluated how the values of noise and k affect the AUC and AUPR scores of DTIP_MDHN on the benchmark datasets. The noise was set to 0.65, 0.75, 0.85, and 0.95, and k was set to values in the range [10,150]. From Figs. 1 and 2 we can see that DTIP_MDHN obtains the highest AUC and AUPR scores on the four datasets when noise = 0.65. The dimension of the latent layer (k) indicates the degree of dimensionality reduction in the auto-encoder (AE). Key information is lost from the original data if k is too small, while non-critical and redundant information remains if k is too large. In general, the choice of k depends on the dimensionality of the dataset. By analyzing the results in Figs. 1 and 2, we set the value of k according to the number of drugs in each dataset. Table 6 shows the values of k and noise on the benchmark datasets.
To verify the validity of the DTIP_MDHN method, we sorted the new drug-target interaction pairs predicted by DTIP_MDHN in descending order of prediction score and took the top 5 for Enzyme, IC, GPCR, and NR, respectively. If a new drug-target interaction is validated in the current version of KEGG [32], SuperTarget [33], DrugBank [34], or ChEMBL [35], its "Validated" item is labeled "Yes"; otherwise it is labeled "No". Table 7 shows the top 5 new drug-target interactions predicted by DTIP_MDHN on the benchmark datasets.
As shown in Table 7, all of the top 5 new drug-target interactions for the Enzyme dataset are validated in current databases. For the IC and GPCR datasets, 3 of the top 5 are validated, respectively, and for the NR dataset, 2 of the top 5 are validated. The statistics of the "Validated" item in Table 7 show that the hit rate of prediction across the four datasets is about 75%. In fact, the NR dataset is the most challenging dataset for DTIs prediction because it is the sparsest among the benchmark datasets [6,8,18].
We further analyzed the non-validated DTI pairs in the NR dataset. From Table 7 we can see that the top predicted item in the NR dataset is a DTI pair between D00316 (Etretinate) and hsa6096 (RORβ). The study in [36] indicated that several retinoids bind to RORβ (hsa6096), providing a novel pathway for retinoid action. As Etretinate is an aromatic, second-generation retinoid, there is a high probability of interaction between Etretinate and RORβ. For the fifth item in the NR dataset, D01115 (Eplerenone) is predicted to interact with a glucocorticoid receptor (hsa2908). Although the interaction between D01115 and hsa2908 has not been recorded in the current versions of KEGG, DrugBank, ChEMBL, and SuperTarget, an antagonist activity assay confirms this interaction in PubChem BioAssay ID AID 761383 from ChEMBL [37].
The benchmark datasets were generated in 2008, and many new interactions have since been added to the current versions of KEGG [32], SuperTarget [33], DrugBank [34], and BRENDA [38]. To enhance the diversity of the experimental datasets and inspect the performance of our proposed method on newer data, we used the new dataset 1 from KEGG to perform DTIs prediction. Following the KEGG categorization, the target proteins can be divided into 8 datasets. In addition to the Enzyme, IC, GPCR, and NR datasets, the 4 new datasets are protein kinases (PK), transporters (TR), cell surface molecules and ligands (CSM), and cytokines and cytokine receptors (CR). After deleting redundant and invalid data, we compiled the new datasets with 11,912 known interactions linking 4495 unique drugs and 959 unique targets. We conducted experiments to evaluate our method DTIP_MDHN and the newest MF-based method DNILMF. Some drugs may act on two or more different types of targets; for example, Cocaine (D00110) can act on SCN9A (hsa6335), which belongs to ion channels, and on SLC6A2 (hsa6530), which belongs to transporters. We therefore added a dataset containing all 8 classes of target proteins on KEGG as input in the experiment, denoted "ALL". Table 8 shows the AUC and AUPR scores for the two prediction methods on new dataset 1 under the CVP setting, in which DNILMF used the optimized parameters (numLatent = 90, c = 20, thisAlpha = 0.7, λu = 10, λv = 10, K = 2) for the Enzyme and "ALL" datasets and the parameters (numLatent = 90, c = 6, thisAlpha = 0.4, λu = 2, λv = 2, K = 2) for the other datasets, and DTIP_MDHN used noise = 0.65 and the values of k in Table 6.
From Table 8, we can see that for the new dataset 1 versions of Enzyme, IC, GPCR, and NR, the AUC and AUPR scores computed by DNILMF and DTIP_MDHN are basically the same as those for the benchmark datasets. For the protein kinase, transporter, cell surface molecule and ligand, and cytokine and cytokine receptor datasets, the AUPR and AUC scores are mostly about 0.9 and 0.99, respectively. For the "ALL" dataset, the AUPR score is about 0.97 for DTIP_MDHN but only about 0.63 for DNILMF. The "ALL" dataset is much sparser than any single dataset because it treats the interaction information of all 8 datasets as a whole. DNILMF uses only local neighborhood information to measure the similarity between drugs and between targets. In DTIP_MDHN, the global association is exploited to represent the indirect association relationships between drugs and targets, which reduces the influence of link sparsity and improves prediction accuracy.
To verify the applicability of our proposed method to binding affinity prediction, we evaluated our method DTIP_MDHN and two existing binding affinity prediction methods, Kronecker_rls [9,20] and SimBoost [22], on the Davis and Metz datasets in terms of AUPR, AUC, and concordance index (CI) [21,39]. Kronecker_rls [9,20] is a DTIs prediction method that was first used to predict binding affinity [21]. SimBoost is a supervised learning model that uses gradient boosting regression trees to predict continuous binding affinity.
To measure AUPR and AUC, the quantitative datasets were binarized using relatively stringent cut-off thresholds (K_d < 30 nM and K_i < 28.18 nM) [21]: if K_d < 30 nM or K_i < 28.18 nM, the affinity value is set to 1; otherwise, it is set to 0. To evaluate on the continuous values of K_d and K_i, the concordance index (CI) was used as the evaluation metric [21,39].
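The binarization rule and the concordance index can be sketched as follows; this is an illustrative implementation of the standard definitions, with an O(n²) pairwise loop kept for clarity:

```python
import numpy as np

def binarize(affinity, cutoff):
    """Binarize continuous binding constants with a cutoff threshold,
    e.g. K_d < 30 nM: stronger binding (smaller constant) maps to 1."""
    return (np.asarray(affinity) < cutoff).astype(int)

def concordance_index(y_true, y_pred):
    """Concordance index: over all pairs with different true affinities,
    the fraction whose predictions are ordered consistently with the
    truth; ties in the prediction count as 0.5."""
    concordant, total = 0.0, 0
    n = len(y_true)
    for i in range(n):
        for j in range(i + 1, n):
            if y_true[i] == y_true[j]:
                continue                 # tied truths are not comparable
            total += 1
            diff = (y_pred[i] - y_pred[j]) * (y_true[i] - y_true[j])
            if diff > 0:
                concordant += 1.0
            elif diff == 0:
                concordant += 0.5
    return concordant / total

labels = binarize([10.0, 50.0], 30.0)              # -> [1, 0]
y_true = np.array([5.0, 30.0, 100.0])              # measured K_d values
y_pred = np.array([0.1, 0.5, 0.9])                 # predicted scores
ci = concordance_index(y_true, y_pred)             # perfectly concordant here
```

A CI of 1 means every comparable pair is ranked in the same order by the prediction and the measurement; a CI of 0.5 corresponds to random ranking.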
For the continuous values of K_d and K_i, we use the Pearson correlation coefficient (PCC) instead of the Jaccard index to calculate the association index kernel, because the Jaccard kernel matrix works well on a binary interaction matrix, while PCC was originally developed to measure the relationship between two continuous variables [40] and can be calculated with the function corrcoef() in MATLAB. The values of CI calculated by DTIP_MDHN with the PCC kernel are labeled CI_PCC in the following Tables 9, 10, 11, 12, 13 and 14. Table 9 shows the AUPR, AUC, and CI scores for the three prediction methods Kronecker_rls [21], SimBoost [22], and our method DTIP_MDHN. Tables 10 and 11 show the AUPR, AUC, and CI scores for Kronecker_rls and our method DTIP_MDHN under the CVD and CVT settings, respectively. Because supervised learning methods do not distinguish between the CVP, CVD, and CVT settings, the comparison with SimBoost was not included in Tables 10 and 11. The results in Tables 9, 10 and 11 were based on the normalized Smith-Waterman sequence similarity for protein targets and the 2-dimensional structural similarity for compound drugs. We used the default parameters noise = 0.65 and k = 60 for AUPR, AUC, and CI, and noise = 0.95 and k = 60 for CI_PCC in our method DTIP_MDHN.
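A PCC association kernel in the spirit described above can be sketched with NumPy's corrcoef, which mirrors MATLAB's corrcoef() over row profiles (the NaN handling for constant profiles is an assumption of ours):

```python
import numpy as np

def pcc_kernel(A):
    """Pearson-correlation association kernel over the rows of a
    continuous affinity matrix A: entry (i, j) is the PCC between the
    affinity profiles of items i and j."""
    K = np.corrcoef(A)            # row-wise correlations, like corrcoef()
    return np.nan_to_num(K)       # constant profiles give NaN; zero them

# Toy affinity profiles: rows 0 and 1 are perfectly linearly related,
# row 2 is perfectly anti-correlated with row 0.
A = np.array([[1.0, 2.0, 3.0],
              [2.0, 4.0, 6.0],
              [3.0, 2.0, 1.0]])
K = pcc_kernel(A)
```

Unlike the Jaccard index, PCC uses the magnitudes of the continuous affinities rather than only the presence or absence of an interaction, which is why it suits the binding affinity datasets.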
From Tables 9, 10 and 11, we can see that our method DTIP_MDHN has higher AUPR and AUC scores than Kronecker_rls and SimBoost. We can also see in Table 9 that under the CVP setting, SimBoost has a higher CI score than DTIP_MDHN and Kronecker_rls on the Davis dataset, while Kronecker_rls has a higher CI score than DTIP_MDHN and SimBoost on the Metz dataset. Under the CVD and CVT settings in Tables 10 and 11, respectively, Kronecker_rls has a higher CI score than DTIP_MDHN on the Davis dataset, but DTIP_MDHN has a higher CI score than Kronecker_rls on the Metz dataset. For DTIP_MDHN, the CI_PCC scores are higher than the CI scores on the Davis and Metz datasets under the CVP and CVT settings and on the Davis dataset under the CVD setting. This illustrates that the PCC kernel matrix achieves higher accuracy than the Jaccard kernel matrix in predicting drug-target binding affinity.
Next, we evaluated the prediction accuracy of DTIP_MDHN and Kronecker_rls with different chemical structure and sequence similarity kernels on the Davis dataset in terms of AUPR, AUC, and CI under the different CV settings. In Tables 12, 13 and 14, we denote the two-dimensional Tanimoto coefficient similarity kernel matrix as 2D, the three-dimensional Tanimoto coefficient similarity kernel matrix as 3D, and the extended-connectivity fingerprint ECFP4 similarity kernel matrix as ECFP4. The value of CI calculated by DTIP_MDHN with the PCC kernel is labeled CI_PCC. Kronecker_rls uses the default parameters; DTIP_MDHN uses noise = 0.65 and k = 60 for AUPR, AUC, and CI, and noise = 0.95 and k = 60 for CI_PCC.
From the experimental results shown in Tables 12, 13 and 14, we can see that in terms of AUPR and AUC, our method DTIP_MDHN outperforms Kronecker_rls for all three similarity kernels under all three CV settings. In terms of CI, DTIP_MDHN with the PCC kernel achieves better prediction accuracy than DTIP_MDHN with the Jaccard kernel. To inspect our method for large-scale compound-protein interaction (CPIs) prediction, we extracted new dataset 2 from the STITCH database [41]. The STITCH database contains a comprehensive resource of both known and predicted interactions between compounds and proteins. To ensure the accuracy of the CPIs data, we extracted the interactions with combined scores greater than 900 from the Homo sapiens CPIs, meaning that the interactions used in our experiment have an interaction probability greater than 90%. The experimental data contain 13,286 drugs, 5313 targets, and 116,199 interactions. The detailed compound-protein interaction information is provided in Additional file 1. Table 15 shows the AUC and AUPR values for DTIP_MDHN, BLM, and GNN&CNN on the STITCH dataset under the CVP setting.
From Table 15, we can see that DTIP_MDHN obtains a higher AUC score than BLM and GNN&CNN in large-scale CPIs prediction, which indicates that our method DTIP_MDHN can identify true negatives from the testing data more accurately than the BLM and GNN&CNN methods. On the other hand, GNN&CNN achieves a higher AUPR score than BLM and DTIP_MDHN, because GNN&CNN has high sensitivity when reliable negative samples are available.

Discussion
In this paper, we propose a novel drug-target interactions (DTIs) prediction method incorporating a marginalized denoising model on heterogeneous networks with an association index kernel matrix and latent global associations. We combine the chemical structure similarity matrix of drugs and the sequence similarity matrix of targets with the GIP kernel matrix and the association index kernel matrix to construct the final kernel matrices. The association index kernel matrix enhances the relevance between drugs and targets by calculating the sharing associations between drugs and between targets. In the model-building step, we build a heterogeneous network from the drug kernel matrix, the target kernel matrix, and the existing drug-target interaction network to construct global links for drugs, targets, and known drug-target interactions, and further extract latent global associations from the heterogeneous network. The latent global associations between drugs and targets are important for reducing the effect of data sparsity.
The experimental results on the benchmark datasets show that our proposed prediction method outperforms the existing binary classification and MF-based prediction methods in terms of AUC and AUPR. Specifically, for the sparser datasets such as GPCR and NR, the prediction accuracy of our method is 10%-20% higher than that of the other comparative methods. To compare the effects of different final kernel matrices on the DTIs prediction results, we evaluated our constructed final kernel matrices against the final kernel matrices from GIP [9] and DNILMF [19]. The experimental results indicate that our constructed final kernel matrices of drugs and targets indeed lead to more accurate predictions than those from GIP [9] and DNILMF [19]. To compare our proposed prediction model with the supervised learning models SVM and RF and the matrix factorization (MF) model DNILMF, we used our constructed final kernel matrices KFJD/KFJT as the features of drug-target pairs and the drug-target interaction matrix Y as the classification labels. The experimental results show that our proposed prediction model achieves higher prediction accuracy than SVM, RF, and DNILMF in DTIs prediction. We also evaluated the key parameters noise and k within a certain value range to optimize the prediction accuracy. The results show that DTIP_MDHN obtains higher prediction accuracy on the four datasets when noise = 0.65, and that the optimal value of k varies with the number of drugs in each dataset. To enhance the diversity of the experimental data and inspect the performance of our proposed method on newer data, we evaluated our method DTIP_MDHN and the method DNILMF on the 8 classes of target proteins extracted from the current KEGG BRITE database. The experimental results also show that the AUC and AUPR scores of DTIP_MDHN are higher than those of DNILMF on the newly compiled DTIs datasets.
The experimental results on the Davis and Metz datasets show that our method can also improve the accuracy of drug-target binding affinity prediction. For the continuous values of K_d and K_i, we evaluated our method with two association index methods, the Pearson correlation coefficient (PCC) and the Jaccard index. The experimental results show that PCC is more suitable for measuring the relationship between two continuous variables, while the Jaccard kernel matrix works well on a binary interaction matrix.
To inspect our method DTIP_MDHN for large-scale compound-protein interaction (CPIs) prediction, we evaluated DTIP_MDHN with BLM [7] and a deep learning model combining GNN and CNN [26] on our new dataset 2. The experimental dataset contains 13,286 drugs, 5313 targets, and 116,199 interactions. This dataset is much sparser than the benchmark dataset and new dataset 1. The experimental results indicate that our method DTIP_MDHN can identify true negatives from the sparse dataset more accurately than other comparative methods.

Conclusion
The performance improvement in our method depends on the association index kernel matrix and latent global association. The association index kernel matrix calculates the sharing relationship between drugs and targets. The latent global association addresses the false positive issue caused by network link sparsity. Our method can provide a useful approach to recommend new drug candidates and reposition existing drugs.
The features of a drug-target pair can be characterized more accurately by biological physicochemical properties. One future research direction is to use key biological physicochemical properties with feature selection methods to improve similarity measurement in pharmacology, and to extend our method to predict potential interaction relationships in other biological interaction networks relevant to pharmacology. Meanwhile, with the application of deep learning in the field of drug discovery [23][24][25][26][27], another future research direction is to predict drug-target interactions using deep learning frameworks on multiple sources of information, including biological physicochemical properties.

Problem description
Given a set of drugs D = {d_1, d_2, …, d_n} and a set of target proteins T = {t_1, t_2, …, t_m}, a drug similarity matrix SD ∈ ℝ^(n×n), a target similarity matrix ST ∈ ℝ^(m×m), and a matrix of known interactions Y ∈ ℝ^(n×m) between drugs and targets are defined, where n is the number of drugs, m is the number of target proteins, and each entry Y_ij ∈ {0, 1} for i = 1, 2, …, n and j = 1, 2, …, m. If drug d_i has a known interaction with target t_j, then Y_ij = 1; otherwise Y_ij = 0. The goal of drug-target interactions (DTIs) prediction is to recommend new drug-target pairs using the above three matrices and other sources of information.
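A toy instance of this setup can be written down directly; the drugs, targets, and interaction pairs below are hypothetical placeholders, and the identity-matrix similarities stand in for the real SIMCOMP and Smith-Waterman scores:

```python
import numpy as np

# Hypothetical toy instance: n = 3 drugs, m = 2 targets.
drugs = ["d1", "d2", "d3"]
targets = ["t1", "t2"]
known_pairs = [("d1", "t1"), ("d3", "t2")]   # illustrative known interactions

n, m = len(drugs), len(targets)
Y = np.zeros((n, m), dtype=int)              # Y_ij = 1 iff d_i interacts with t_j
for d, t in known_pairs:
    Y[drugs.index(d), targets.index(t)] = 1

SD = np.eye(n)   # drug similarity matrix (placeholder for chemical similarity)
ST = np.eye(m)   # target similarity matrix (placeholder for sequence similarity)
```

Prediction then amounts to scoring the zero entries of Y, using SD, ST, and the known interactions, and recommending the highest-scoring unknown pairs.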
Old drug repositioning predicts the interaction probability of a drug and a target when both are known but no interaction between them is yet recorded. New drug/target discovery predicts the interaction probability when the drug is newly developed and the target is a known protein, or when the target protein is newly identified and the drug is a known compound.
We illustrate the prediction scenarios of old drug repositioning and new drug/target discovery in Fig. 3, which contains 5 drugs (D1-D5) and 4 targets (T1-T4). For the circled D1-T1 interaction pair: in Fig. 3a, D1 is a known drug and T1 is a known target, so the prediction on the D1-T1 pair is old drug repositioning; in Fig. 3b, D1 is a new drug and T1 is a known target, so the prediction is new drug discovery; in Fig. 3c, D1 is a known drug and T1 is a new target, so the prediction is new target discovery [19].

Datasets
The benchmark datasets were originally provided by Yamanishi et al. [6] and are publicly available at http://web.kuicr.kyoto-u.ac.jp/supp/yoshi/drugtarget/. Protein sequences of targets were obtained from the KEGG GENES database [32]. The target similarity matrix consists of the sequence similarity scores between proteins, computed with a normalized version of the Smith-Waterman score [42]. Chemical compounds were obtained from the KEGG DRUG and COMPOUND databases [32]. The drug similarity matrix consists of the chemical structure similarity scores between drugs, computed with the SIMCOMP tool [43]. The drug-target interaction matrix consists of the known drug-target interaction pairs retrieved from KEGG BRITE [32], SuperTarget [33], DrugBank [34], and BRENDA [38]. The benchmark datasets comprise four datasets: enzymes (445 drugs and 664 targets), ion channels (IC; 210 drugs and 204 targets), G protein-coupled receptors (GPCR; 223 drugs and 95 targets), and nuclear receptors (NR; 54 drugs and 26 targets). Table 16 lists the statistics for the benchmark datasets [6].
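The normalized Smith-Waterman similarity can be sketched as follows; normalizing each raw alignment score by the geometric mean of the two self-alignment scores is the convention commonly associated with these datasets, and the function name is ours:

```python
import numpy as np

def normalize_sw(SW):
    """Normalize a raw Smith-Waterman score matrix into a similarity
    matrix in [0, 1]: ST[i, j] = SW[i, j] / sqrt(SW[i, i] * SW[j, j]),
    so every protein is maximally similar to itself."""
    self_scores = np.sqrt(np.diag(SW))
    return SW / np.outer(self_scores, self_scores)

# Toy raw scores for two proteins (self-alignments on the diagonal).
SW = np.array([[100.0, 40.0],
               [40.0, 64.0]])
ST = normalize_sw(SW)   # diagonal becomes exactly 1.0
```

This normalization removes the dependence of raw alignment scores on sequence length, making scores comparable across protein pairs.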
In the past decade, the chemical biology data available in public databases, such as KEGG [32], SuperTarget [33], DrugBank [34], ChEMBL [35], and STITCH [41], have grown exponentially. To enhance the diversity of the experimental datasets and to evaluate our proposed prediction method on the latest databases, we extracted two new DTIs datasets from KEGG and STITCH, respectively.
For new dataset 1, we obtained the classification information of drugs from the "target-based classification of drugs" in the KEGG BRITE database, yielding 8 datasets: enzymes, ion channels (IC), G protein-coupled receptors (GPCR), nuclear receptors (NR), cytokines and receptors (CR), cell surface molecules and ligands (CSM), protein kinases (PK), and transporters (TR). The chemical structure similarity matrix of drugs is computed by the SIMCOMP2 tool. The protein sequence similarity matrix of targets is composed of scores derived from the KEGG SSDB Paralog database. After deleting redundant and invalid drugs, targets, and drug-target interaction pairs, we obtained 8 new datasets containing 11,912 known interactions, 4495 unique drugs, and 959 unique targets. The statistics for new dataset 1 are listed in Table 17. The detailed drug-target interaction information can be found in Additional file 2.
As shown in Table 17, the numbers of drugs and targets in enzymes, ion channels (IC), G protein-coupled receptors (GPCR), and nuclear receptors (NR) differ significantly from those of the corresponding benchmark datasets. These datasets are therefore an important supplement to the benchmark datasets in the experimental verification.
To evaluate our proposed method on predicting large-scale compound-protein interactions (CPIs), we retrieved CPIs of Homo sapiens from the STITCH database (version 5.0) [41] as new dataset 2. The compound similarity matrix is derived from the scores of chemical-chemical links in the STITCH database. The protein sequence similarity matrix is obtained in the same way as for new dataset 1. After deleting redundant and invalid drugs, targets, and drug-target interaction pairs, we obtained 5,979,099 interactions between 15,324 unique proteins in Homo sapiens and 224,203 unique compounds.
To validate our proposed method for predicting drug-target binding affinity, we selected two kinase datasets from the studies by Davis et al. [44] and Metz et al. [45], respectively. These two datasets are available at http://staff.cs.utu.fi/~aatapa/data/DrugTarget/. In the Davis dataset [44], the target similarity matrix is computed by a normalized version of the Smith-Waterman score [42]. There are 3 drug similarity matrices in the Davis dataset: two-dimensional and three-dimensional Tanimoto coefficient similarity matrices, and an extended-connectivity fingerprint (ECFP4) [46] similarity matrix. The drug-target interaction affinity matrix uses the kinase dissociation constant (K_d). There are 68 drugs, 442 targets, and 1527 interactions in the Davis dataset.
In the Metz dataset [45], the target similarity matrix is computed by a normalized version of the Smith-Waterman score [42]. The drug similarity matrix is a two-dimensional Tanimoto coefficient similarity matrix. The drug-target interaction affinity matrix uses the kinase inhibition constant (K_i). There are 1421 drugs, 156 targets, and 3200 interactions in the Metz dataset.
The statistics for these two kinase datasets are listed in Table 18.

Method
We propose a new method to learn the drug kernel matrix and the target kernel matrix. We integrate the drug kernel matrix, the target kernel matrix, and the drug-target interaction network to build a heterogeneous network, and apply the marginalized denoising model on the heterogeneous network to improve the accuracy of drug-target interaction prediction. Our proposed prediction method consists of the following four steps:
Step 1: Calculate the drug kernel matrix KFJD by combining the drug similarity matrix SD, the Gaussian interaction profile (GIP) kernel matrix for drugs KGD, and the association index kernel matrix for drugs KJD, where KGD and KJD are constructed from the drug-target interaction network Y.
Step 2: Calculate the target kernel matrix KFJT by combining the target similarity matrix ST, the GIP kernel matrix for targets KGT, and the association index kernel matrix for targets KJT, where KGT and KJT are constructed from Y′, the transpose of Y.
Step 3: Construct a heterogeneous network M from the drug kernel matrix KFJD, the target kernel matrix KFJT, and the drug-target interaction network Y.
Step 4: Create a marginalized denoising model (MDM) on the constructed heterogeneous network M with local and global associations between nodes (targets and drugs) to predict latent drug-target interaction pairs.
The procedure of our proposed prediction method is shown in Fig. 4.

Constructing final kernel matrix
The final kernel matrix combines different kernels with the drug similarity matrix and the target similarity matrix for potential DTIs prediction. Based on kernel fusion [18,19], we calculate the drug kernel matrix by combining the drug similarity matrix with the GIP kernel matrix and the Jaccard kernel matrix, and the target kernel matrix by combining the target protein similarity matrix with the GIP kernel matrix and the Jaccard kernel matrix. The final drug kernel matrix KFJD and the final target kernel matrix KFJT are calculated in the following steps.
First, the GIP kernel matrix for drugs KGD and the GIP kernel matrix for targets KGT are calculated respectively [9]:

KGD(d_i, d_j) = exp(−γ_d ||y_{d_i} − y_{d_j}||²),
KGT(t_i, t_j) = exp(−γ_t ||y_{t_i} − y_{t_j}||²),

where y_{d_i} and y_{d_j} are the interaction profiles of drugs d_i and d_j respectively, represented by binary vectors encoding the presence or absence of an interaction with every target in the interaction matrix Y. Similarly, y_{t_i} and y_{t_j} are the interaction profiles of targets t_i and t_j respectively, represented by binary vectors encoding the presence or absence of an interaction with every drug in Y. The parameters γ_d and γ_t control the kernel bandwidth and are defined as follows [9]:

γ_d = γ′_d / ((1/n) Σ_{i=1}^{n} ||y_{d_i}||²),
γ_t = γ′_t / ((1/m) Σ_{i=1}^{m} ||y_{t_i}||²).

Second, the Jaccard profile kernel matrix for drugs and the Jaccard profile kernel matrix for targets are calculated respectively.
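The GIP kernels above can be sketched in NumPy; the toy interaction matrix below is illustrative, and the bandwidth normalization follows the definition of γ_d and γ_t given in the text:

```python
import numpy as np

def gip_kernel(profiles, gamma_prime=1.0):
    """Gaussian interaction profile (GIP) kernel over the rows of `profiles`.

    profiles: binary matrix whose i-th row is the interaction profile y_i.
    gamma_prime: raw bandwidth, normalized by the mean squared profile norm.
    """
    # gamma = gamma' / ((1/n) * sum_i ||y_i||^2)
    gamma = gamma_prime / np.mean(np.sum(profiles ** 2, axis=1))
    # squared Euclidean distances between all pairs of profiles
    sq_norms = np.sum(profiles ** 2, axis=1)
    sq_dists = sq_norms[:, None] + sq_norms[None, :] - 2 * profiles @ profiles.T
    return np.exp(-gamma * np.maximum(sq_dists, 0.0))

# Y: drug-target interaction matrix (rows = drugs, columns = targets)
Y = np.array([[1, 0, 1],
              [1, 0, 0],
              [0, 1, 0]], dtype=float)
KGD = gip_kernel(Y)      # drug GIP kernel, built from the rows of Y
KGT = gip_kernel(Y.T)    # target GIP kernel, built from the rows of Y'
```

The same function serves both kernels because KGT is simply the GIP kernel over the rows of the transposed interaction matrix.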
The Jaccard index [47] is a commonly used association index. Compared with cosine similarity, the Pearson correlation coefficient, and other association indices, the Jaccard index is more suitable for highly sparse binary data, and it has been used to measure the degree of sharing association between two nodes in a biological interaction network [40]. Hence, we use the Jaccard index to construct an association index kernel matrix between drugs and an association index kernel matrix between targets in the DTIs network. Next, we describe how to calculate the Jaccard kernel matrix for drugs KJD and the Jaccard kernel matrix for targets KJT.
The Jaccard index kernel value for drugs d_i and d_j in the DTIs network, KJD(d_i, d_j), is calculated as follows [40]:

KJD(d_i, d_j) = D_11 / (D_01 + D_10 + D_11),

where D_01, D_10, and D_11 are three parameters measuring the sharing relationship between d_i and d_j: D_01 is the number of targets t_k for which Y(d_i, t_k) = 0 and Y(d_j, t_k) = 1; D_10 is the number of targets for which Y(d_i, t_k) = 1 and Y(d_j, t_k) = 0; and D_11 is the number of targets for which Y(d_i, t_k) = 1 and Y(d_j, t_k) = 1. Here Y is the drug-target interaction matrix, t_k is a target contained in Y, i, j = 1,2,…,n, and k = 1,2,…,m.

Similarly, the Jaccard index kernel value for targets t_i and t_j in the DTIs network, KJT(t_i, t_j), is computed as follows [40]:

KJT(t_i, t_j) = T_11 / (T_01 + T_10 + T_11),

where T_01, T_10, and T_11 are three parameters measuring the sharing relationship between t_i and t_j: T_01 is the number of drugs d_k for which Y(d_k, t_i) = 0 and Y(d_k, t_j) = 1; T_10 is the number of drugs for which Y(d_k, t_i) = 1 and Y(d_k, t_j) = 0; and T_11 is the number of drugs for which Y(d_k, t_i) = 1 and Y(d_k, t_j) = 1. Here d_k is a drug contained in Y, i, j = 1,2,…,m, and k = 1,2,…,n.

Third, based on the nonlinear kernel fusion technique [17,18], the final drug kernel matrix KFJD is calculated from the three matrices SD, KGD, and KJD, and the final target kernel matrix KFJT is calculated from the three matrices ST, KGT, and KJT.
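The Jaccard kernels for drugs and targets can be sketched in NumPy (the small interaction matrix is illustrative; the diagonal convention for all-zero profiles is our assumption):

```python
import numpy as np

def jaccard_kernel(Y):
    """Jaccard index kernel over the rows of the binary interaction matrix Y.

    For rows i and j: K(i, j) = D11 / (D01 + D10 + D11), where D11 counts
    columns shared by both rows, and D01/D10 count columns present in
    exactly one of the two rows.
    """
    Y = Y.astype(float)
    inter = Y @ Y.T                                        # D11 for every pair
    row_sums = Y.sum(axis=1)
    union = row_sums[:, None] + row_sums[None, :] - inter  # D01 + D10 + D11
    with np.errstate(invalid="ignore", divide="ignore"):
        K = np.where(union > 0, inter / union, 0.0)
    np.fill_diagonal(K, 1.0)   # assumed convention: full self-sharing
    return K

Y = np.array([[1, 0, 1, 1],
              [1, 0, 0, 1],
              [0, 1, 0, 0]])
KJD = jaccard_kernel(Y)      # drug Jaccard kernel (rows of Y)
KJT = jaccard_kernel(Y.T)    # target Jaccard kernel (rows of Y')
```

For the first two drugs above, D_11 = 2 and D_01 + D_10 + D_11 = 3, so KJD(d_1, d_2) = 2/3.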
The calculation for KFJD is described as follows.
The three kernel matrices SD, KGD, and KJD are first normalized according to Hao's method [18]. The normalized matrices are denoted by PD1, PD2, and PD3 respectively [18]:

PDv(i, j) = Wv(i, j) / (2 Σ_{k≠i} Wv(i, k)) for j ≠ i, and PDv(i, i) = 1/2,

where Wv denotes SD, KGD, or KJD for v = 1, 2, 3. Then, we apply the k nearest neighbors (kNN) algorithm to compute local similarity matrices LD1, LD2, and LD3 for PD1, PD2, and PD3 respectively [18]:

LDv(i, j) = PDv(i, j) / Σ_{k∈N_i} PDv(i, k) if j ∈ N_i, and LDv(i, j) = 0 otherwise,

where N_i denotes the k nearest neighbors of drug d_i, i = 1,2,…,n. In formula (6), the similarity between any two non-nearest neighbors is set to zero to reduce the influence of non-nearest drug-target interaction pairs on the prediction results. The key step of the fusion operation is an iterative calculation [18]:

PD1^{t+1} = LD1 × ((PD2^t + PD3^t)/2) × LD1′,
PD2^{t+1} = LD2 × ((PD1^t + PD3^t)/2) × LD2′,
PD3^{t+1} = LD3 × ((PD1^t + PD2^t)/2) × LD3′,

where PD1^{t+1}, PD2^{t+1}, and PD3^{t+1} are the results of PD1, PD2, and PD3 after t iterations respectively, and LD1′, LD2′, and LD3′ are the transposes of LD1, LD2, and LD3 respectively.
During each iteration, the values of PD1^{t+1}, PD2^{t+1}, and PD3^{t+1} are further updated by

PDv^{t+1} = (PDv^{t+1} + (PDv^{t+1})′)/2 + I, v = 1, 2, 3,

where I is an identity matrix.
After t iterations, the final drug kernel matrix KFJD is obtained by [17]:

KFJD = (PD1^t + PD2^t + PD3^t)/3.

Similarly, the final target kernel matrix KFJT is obtained by:

KFJT = (PT1^t + PT2^t + PT3^t)/3,

where PT1, PT2, and PT3 are the normalized matrices of ST, KGT, and KJT. A more detailed description of the kernel fusion can be found in [18,19,48].
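The fusion procedure above is a similarity-network-fusion scheme; a minimal sketch under our reading of [18], with three random symmetric kernels standing in for SD, KGD, and KJD:

```python
import numpy as np

def normalize(W):
    # Full normalization: off-diagonal row mass scaled to 1/2, diagonal 1/2.
    P = W / (2.0 * (W.sum(axis=1, keepdims=True) - np.diag(W)[:, None]))
    np.fill_diagonal(P, 0.5)
    return P

def knn_local(W, k):
    # Keep only each row's k largest entries, renormalized to sum to 1.
    L = np.zeros_like(W)
    for i in range(W.shape[0]):
        nn = np.argsort(W[i])[::-1][:k]
        L[i, nn] = W[i, nn] / W[i, nn].sum()
    return L

def fuse_kernels(kernels, k=3, iters=5):
    P = [normalize(W) for W in kernels]
    L = [knn_local(W, k) for W in kernels]
    for _ in range(iters):
        P_new = []
        for v in range(len(P)):
            # Each view is updated by the average of the OTHER views,
            # propagated through its own local kNN graph.
            others = [P[u] for u in range(len(P)) if u != v]
            avg = sum(others) / len(others)
            Pv = L[v] @ avg @ L[v].T
            Pv = (Pv + Pv.T) / 2 + np.eye(Pv.shape[0])  # symmetrize + stabilize
            P_new.append(Pv)
        P = P_new
    return sum(P) / len(P)   # final fused kernel (average of the views)

rng = np.random.default_rng(0)
A = rng.random((5, 5)); SD = (A + A.T) / 2 + np.eye(5)
B = rng.random((5, 5)); KGD = (B + B.T) / 2 + np.eye(5)
C = rng.random((5, 5)); KJD = (C + C.T) / 2 + np.eye(5)
KFJD = fuse_kernels([SD, KGD, KJD], k=3, iters=5)
```

The kNN truncation is what confines information diffusion to each kernel's local neighborhood, which is the stated motivation for formula (6).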

Marginalized denoising model
Our method treats the DTIs prediction problem as a network link prediction problem. We use the marginalized denoising model (MDM) [31] on a heterogeneous network composed of the final drug and target kernel matrices and the known drug-target interaction matrix to predict potential DTIs. The marginalized denoising model [31] is inspired by the idea of marginalized denoising auto-encoders [49]. An auto-encoder (AE) is a type of artificial neural network used to learn efficient data codings in an unsupervised manner [50,51]. The AE encodes the original input x with weight w into a latent representation h and decodes h into an output y, where h = f(x) and y = g(h). The AE is trained to minimize the reconstruction error L(x, g(f(x))) so that the output y closely matches the original data x. The AE is widely used to extract features and reduce dimensionality, and it can also be used to learn new features.
A denoising auto-encoder (DAE) [52] transforms the original input x into a partially corrupted input x̃ and is trained to recover the undistorted original input x from x̃. To train an auto-encoder to denoise the data, a preliminary stochastic mapping x → x̃ is performed to corrupt the data, and x̃ with weight w is used as the input of a normal auto-encoder. The loss function of the DAE is L(x, g(f(x̃))) instead of L(x, g(f(x))). The corrupted input x̃ can be constructed by randomly setting entries of the original input x to zero with a given probability p, where 0 < p < 1. The original noises in the input x are removed during the corrupting process. To a certain extent, the training data become close to the testing data after denoising, and the robustness of the weight w is enforced after training.
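The masking corruption x → x̃ described above is simple to implement; a minimal NumPy sketch:

```python
import numpy as np

def corrupt(x, p, rng):
    """Randomly zero out entries of x with probability p (masking noise)."""
    mask = rng.random(x.shape) >= p   # keep an entry with probability 1 - p
    return x * mask

rng = np.random.default_rng(42)
x = np.ones((1000, 50))
x_tilde = corrupt(x, p=0.3, rng=rng)
# On average a fraction p of the entries is zeroed, so for this all-ones
# input the mean of x_tilde is close to 1 - p = 0.7.
```

Each corrupted copy x̃ drawn this way serves as one training input for the denoising objective L(x, g(f(x̃))).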
The marginalized denoising auto-encoder (mDA) [49] is a variant of the DAE designed to overcome the high computational cost of the DAE. "Marginalized" means that the loss function L(x, g(f(x̃))) is approximated by its expected value E_{p(x̃|x)}[L(x, g(f(x̃)))] under the conditional corruption distribution p(x̃|x), based on the weak law of large numbers [53].

Our prediction method
The latent drug-target interactions are influenced by the existing drug-target interaction pairs in the drug-target interaction network. The predicted interaction probabilities may also be influenced by the matrix of similarities between drugs and the matrix of similarities between targets [19].
We treat the drug-target interactions (DTIs) prediction problem as a network link prediction problem. To improve the prediction accuracy, we propose a DTIs prediction method using a marginalized denoising model on a heterogeneous network. The heterogeneous network can be represented by the matrix

M = [ KFJT  Y′
      Y     KFJD ] ∈ ℝ^{(m+n)×(m+n)},

where KFJT ∈ ℝ^{m×m} is the target kernel matrix, KFJD ∈ ℝ^{n×n} is the drug kernel matrix, Y ∈ ℝ^{n×m} is the drug-target interaction network, Y′ is the transpose of Y, m is the number of targets, and n is the number of drugs.
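Assembling the heterogeneous network is a block operation; a NumPy sketch with random stand-ins for the two kernels (we place the target kernel block first, matching the order in which the matrices are introduced; either ordering yields a symmetric matrix):

```python
import numpy as np

n, m = 4, 3                          # n drugs, m targets
rng = np.random.default_rng(1)
KFJD = rng.random((n, n)); KFJD = (KFJD + KFJD.T) / 2   # drug kernel, n x n
KFJT = rng.random((m, m)); KFJT = (KFJT + KFJT.T) / 2   # target kernel, m x m
Y = (rng.random((n, m)) > 0.5).astype(float)            # interactions, n x m

# Heterogeneous network: kernel blocks on the diagonal,
# interaction blocks (Y and its transpose) off the diagonal.
M = np.block([[KFJT, Y.T],
              [Y,    KFJD]])
```

Because the kernels are symmetric and the off-diagonal blocks are transposes of each other, M itself is a symmetric (m + n) × (m + n) matrix.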
To generate the training data, we inject random noise into the original input matrix M to construct corrupted matrices M̃. The set of corrupted matrices M̃ = {M̃_1, M̃_2, …, M̃_c} is the training data. Then, we train the mapping function h(M̃) such that the final output M* closely matches the original matrix M, that is, we minimize the loss function L(h(M̃)):

L(h(M̃)) = (1/c) Σ_{s=1}^{c} ||M − h(M̃_s)||_F²,

where the mapping function h(M̃) consists of the latent local and global associations between any two drug or target nodes in M, ||·||_F² denotes the squared Frobenius norm of a matrix, the corrupted matrices M̃_s are constructed by randomly setting elements of M to zero with a given probability p (0 < p < 1), b_i is a bias value, L is the local association weight matrix, Σ_{l=1}^{m+n} L_il M̃_lj is the latent local association between nodes i and j via node l, G is the global association weight matrix, and Σ_{l=1}^{m+n} Σ_{k=1}^{m+n} M̃_il G_lk M̃_jk is the latent global association between nodes i and j via nodes l and k, 1 ≤ i, j ≤ m + n.
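Under our reading of the mapping, the local term is L M̃, the global term is M̃ G M̃′, and b is a per-node bias; the reconstruction loss over a set of corrupted copies can then be sketched as:

```python
import numpy as np

def h(M_tilde, L, G, b):
    """Local + global association mapping:
    h(M~)_ij = sum_l L_il M~_lj + sum_{l,k} M~_il G_lk M~_jk + b_i."""
    return L @ M_tilde + M_tilde @ G @ M_tilde.T + b[:, None]

def loss(M, corrupted, L, G, b):
    """Average squared Frobenius reconstruction error over corrupted copies."""
    return np.mean([np.linalg.norm(M - h(Mt, L, G, b), "fro") ** 2
                    for Mt in corrupted])

rng = np.random.default_rng(0)
d = 6                                     # d = m + n nodes in the network
M = rng.random((d, d)); M = (M + M.T) / 2
L = 0.1 * rng.random((d, d))              # local association weights
G = 0.1 * rng.random((d, d))              # global association weights
b = np.zeros(d)                           # bias vector
p = 0.2                                   # corruption probability
corrupted = [M * (rng.random(M.shape) >= p) for _ in range(5)]
J = loss(M, corrupted, L, G, b)
```

Note that the double sum over l and k is exactly the matrix product M̃ G M̃′, which is why the global term is quadratic in the corrupted input.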
We illustrate an example of latent global association in Fig. 5. Solid lines show existing associations, and dashed lines show latent global associations.
As shown in Fig. 5a, if drug d_i is highly similar to drug d_l, d_l has an interaction with target t_k, and t_k is highly similar to target t_j, then d_i has an interaction with t_j with high probability. We can also see from Fig. 5b that, if both drugs d_i and d_k have an interaction with target t_l, and d_k has an interaction with target t_j, then d_i has an interaction with t_j with high probability. The latent global association represents the weighted value of such indirect drug-target interactions. Iterative training with latent local and global associations yields a more precise drug-target interaction prediction result M*.
To prevent the loss function L(h) from overfitting and to enhance the learning performance, we construct a new objective function L(L, G, b) by adding Tikhonov regularization terms, where L and G represent the latent local and global association weight matrices respectively, b is a bias vector, 1_n denotes an all-one column vector of size n, and λ_1 and λ_2 are the regularization coefficients. Tikhonov regularization is used to ensure the smoothness of the fitting curves of L and G [54].
In the denoising auto-encoder, the more training data are used, the more accurate the prediction results become. Ideally, we would use infinite training data to compute the weight matrices L and G. However, as the size of the set M̃ increases, the computational cost grows. According to the weak law of large numbers [53], when the size of the set M̃ becomes very large, the sum part of formula (12) can be rewritten in the expectation form

E_{p(M̃|M)} [ ||M − h(M̃)||_F² ],

where p(M̃|M) is the conditional corruption distribution and the expectation is taken with respect to the random variable M̃.
To apply formula (13) to a large data matrix, low-rank approximation is used [31]. Formula (13) is rewritten with respect to L = UU^T and G = VV^T, where U, V ∈ ℝ^{(m+n)×k}, k is the dimension of the latent variables U and V, tr(·) denotes the trace of a matrix, and I is the identity matrix.
To minimize the objective function L(U, V, b), the partial gradients of formula (14) are calculated with respect to U, V, and b.
Let q = 1 − p be the residual probability of M̃, and let C denote a constant matrix containing no M̃. We then calculate the gradients of terms with different numbers of occurrences of M̃. For a term containing only one M̃, E[C M̃] = C E[M̃] = q C M. For a term containing two M̃s, we need to distinguish whether the two M̃s are the same or not; the expectation of a term containing two M̃s, E[M̃^T C M̃], is given in formula (18) [31]. For a term containing three or more M̃s, we need to consider the cases that all the M̃s are the same, any two of them are the same, or all of them are different; the expectation of a term containing three M̃s, E[M̃ C M̃^T M̃^T], is given in [31], where the function diag(·) outputs the diagonal elements of a matrix, the operator ∘ denotes the Hadamard (element-wise) product, and the function sum(·, 2) outputs the row sums of a matrix.
We use L-BFGS (limited-memory BFGS) [55] to optimize the objective function with respect to the latent variables U, V, and b. L-BFGS [55] is an optimization algorithm in the family of quasi-Newton methods. Newton's method is an iterative optimization based on a second-order Taylor expansion; it finds extrema of the loss function by computing the Hessian matrix, which is too expensive to recompute at every iteration. The L-BFGS algorithm approximates Newton's method and simplifies the Hessian computation; it converges quickly and does not need to store the Hessian matrix.
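The single-occurrence marginalization rule, E[C M̃] = q C M, can be checked empirically for masking noise with a Monte Carlo average:

```python
import numpy as np

rng = np.random.default_rng(7)
p = 0.3                         # corruption probability
q = 1.0 - p                     # residual probability
M = rng.random((8, 8))
C = rng.random((8, 8))          # constant matrix containing no M~

# Monte Carlo average of C @ M~ over many independent corruptions
n_samples = 5000
acc = np.zeros_like(M)
for _ in range(n_samples):
    M_tilde = M * (rng.random(M.shape) >= p)   # mask entries with prob. p
    acc += C @ M_tilde
empirical = acc / n_samples

closed_form = q * (C @ M)       # E[C M~] = C E[M~] = q C M
```

The empirical average converges to the closed form as the number of samples grows; this is exactly what lets the model replace expensive explicit corruption with closed-form expectations.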
Finally, we calculate the final matrix M* = UU′M + MVV′M′ + b, and compute the evaluation metrics AUC (area under the receiver operating characteristic curve) and AUPR (area under the precision-recall curve) by comparing M with M*.
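Both metrics can be computed by scoring every (drug, target) pair in M* against the ground-truth links in M; a pure-NumPy sketch with a toy flattened example (not the actual benchmark data):

```python
import numpy as np

def auc_score(y_true, y_score):
    """AUC = probability that a random positive outranks a random negative."""
    pos = y_score[y_true == 1]
    neg = y_score[y_true == 0]
    # count wins, with half credit for ties, over all positive/negative pairs
    wins = (pos[:, None] > neg[None, :]).sum() \
        + 0.5 * (pos[:, None] == neg[None, :]).sum()
    return wins / (len(pos) * len(neg))

def aupr_score(y_true, y_score):
    """Area under the precision-recall curve (average-precision form)."""
    order = np.argsort(-y_score)        # rank pairs by descending score
    y = y_true[order]
    tp = np.cumsum(y)                   # true positives at each cutoff
    precision = tp / np.arange(1, len(y) + 1)
    # sum precision at each rank where a true positive occurs
    return float(np.sum(precision * y) / y.sum())

# toy ground truth (flattened interaction matrix) and predicted scores
y_true = np.array([1, 0, 1, 0, 0, 1])
y_score = np.array([0.9, 0.2, 0.8, 0.4, 0.1, 0.7])
auc = auc_score(y_true, y_score)      # 1.0: every positive outranks every negative
aupr = aupr_score(y_true, y_score)    # 1.0 for the same reason
```

In practice, only the entries of M* corresponding to the Y block are scored against the known interaction labels.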
Based on the above steps, we propose a drug-target interaction prediction algorithm using the marginalized denoising model on a heterogeneous network, called DTIP_MDHN. Its input files SDFile, STFile, and YFile are derived from http://web.kuicr.kyoto-u.ac.jp/supp/yoshi/drugtarget/, noise is the noise value, and k is the dimension of the latent layer. Algorithm DTIP_MDHN is described in Algorithm 1.