LPI-deepGBDT: a multiple-layer deep framework based on gradient boosting decision trees for lncRNA–protein interaction identification

Background: Long noncoding RNAs (lncRNAs) play important roles in various biological and pathological processes. Discovery of lncRNA–protein interactions (LPIs) contributes to understanding the biological functions and mechanisms of lncRNAs. Although wet experiments have identified a number of interactions between lncRNAs and proteins, experimental techniques are costly and time-consuming. Therefore, computational methods are increasingly exploited to uncover possible associations. However, existing computational methods have several limitations. First, the majority of them were evaluated on a single dataset, which may result in prediction bias. Second, few of them can be applied to identify relevant data for new lncRNAs (or proteins). Finally, they fail to utilize diverse biological information of lncRNAs and proteins. Results: Using a feed-forward deep architecture based on gradient boosting decision trees (LPI-deepGBDT), this work focuses on classifying unobserved LPIs. First, three human LPI datasets and two plant LPI datasets are assembled. Second, the biological features of lncRNAs and proteins are extracted by Pyfeat and BioProt, respectively. Third, the features are dimensionally reduced and concatenated into a vector representing an lncRNA–protein pair. Finally, a deep architecture composed of forward mappings and inverse mappings is developed to predict underlying linkages between lncRNAs and proteins. LPI-deepGBDT is compared with five classical LPI prediction models (LPI-BLS, LPI-CatBoost, PLIPCOM, LPI-SKF, and LPI-HNM) under three cross validations on lncRNAs, proteins, and lncRNA–protein pairs, respectively. It obtains the best average AUC and AUPR values in the majority of situations, significantly outperforming the other five LPI identification methods. Specifically, the AUCs computed by LPI-deepGBDT under the three cross validations are 0.8321, 0.6815, and 0.9073, respectively, and the AUPRs are 0.8095, 0.6771, and 0.8849, respectively. The results demonstrate the powerful classification ability of LPI-deepGBDT. Case study analyses show that there may be interactions between GAS5 and Q15717, RAB30-AS1 and O00425, and LINC-01572 and P35637. Conclusions: By integrating ensemble learning and hierarchical distributed representations and building a multiple-layered deep architecture, this work improves LPI prediction performance and effectively probes interaction data for new lncRNAs/proteins.


Introduction
Long noncoding RNAs (lncRNAs) are a class of important noncoding RNAs with lengths of more than 200 nucleotides. This class of RNAs has been reported to be closely associated with multiple biological processes including RNA splicing, transcriptional regulation, and the cell cycle [1][2][3]. More importantly, the mutations and dysregulation of lncRNAs have important effects on multiple cancers [4,5], for instance, lung cancer [6], colon cancer [7], and prostate cancer [8]. For example, the lncRNAs UCA1, PCA3, and HOTAIR have been used as possible biomarkers of bladder cancer detection, prostate cancer aggressiveness, and hepatocellular carcinoma recurrence, respectively [9][10][11]. Although lncRNAs have been intensively investigated, the functions and molecular mechanisms of lncRNAs still largely remain elusive [2,12]. Recent research has revealed that lncRNAs are densely linked to their corresponding binding proteins. Therefore, identifying the binding proteins of lncRNAs is urgent for better understanding the biological functions and molecular mechanisms of lncRNAs [1].
Although wet experiments for lncRNA-protein interaction (LPI) discovery have been designed, computational methods are appealing for inferring the relevances between lncRNAs and proteins [13]. The computational methods can be roughly divided into two categories: network-based methods and machine learning-based methods. Network-based LPI inference methods integrated various biological data and designed network propagation methods to find potential LPIs in a heterogeneous lncRNA-protein network. For example, Li et al. [14] proposed a random walk with restart-based LPI prediction model. Zhou et al. [15] took miRNAs as mediators to predict LPIs in a heterogeneous network (LPI-HNM). Yang et al. [16] used the HeteSim algorithm to compute association scores between lncRNAs and proteins. Zhao et al. [17], Ge et al. [18], and Xie et al. [19] explored several bipartite network projection-based recommendation techniques to compute the interaction probabilities between lncRNAs and proteins. Zhang et al. [20] explored a novel LPI prediction framework based on a linear neighborhood propagation algorithm. Zhou et al. [21] combined similarity kernel fusion and Laplacian regularized least squares to find unobserved LPIs (LPI-SKF).
Machine learning-based LPI inference methods characterized the biological features of lncRNAs and proteins and exploited machine learning algorithms to probe LPI candidates [22]. Machine learning-based LPI prediction methods include matrix factorization techniques and ensemble learning techniques [23]. Matrix factorization-based LPI prediction approaches used various matrix factorization techniques. Liu et al. [24] identified new LPIs by combining neighborhood regularized logistic matrix factorization. Zhao et al. [25] inferred LPI candidates by combining the neighborhood regularized logistic matrix factorization model and random walk. Zhang et al. [26] proposed a graph regularized nonnegative matrix factorization method to uncover unobserved LPIs.
Ensemble learning-based LPI inference methods utilized diverse ensemble techniques. Zhang et al. [27] exploited an ensemble learning model to discover the interactions between lncRNAs and proteins. Liu et al. [24] designed three ensemble strategies to predict LPIs based on support vector machine, random forest, and extreme gradient boosting, respectively. Deng et al. [1] extracted HeteSim features and diffusion features of lncRNAs and proteins and constructed a gradient tree boosting-based LPI prediction algorithm (PLIPCOM). Fan and Zhang [28] explored a stacked ensemble-based LPI classification model via logistic regression (LPI-BLS). Deng et al. [29] proposed a gradient boosted regression tree model to find possible LPIs. Wekesa et al. [30] designed a categorical boosting-based LPI discovery framework (LPI-CatBoost). In addition, deep learning (such as deep graph neural networks [31]) is increasingly developed to identify LPI candidates.
Computational methods have effectively identified potential LPIs. However, a few problems remain to be solved. First, the majority of computational models were evaluated on a single dataset, which may result in predictive bias. Second, they were not used to infer potential proteins (or lncRNAs) associated with a new lncRNA (or protein). Finally, their prediction performance needs further improvement.
To solve the above problems, in this study, inspired by the multi-layered Gradient Boosting Decision Tree (GBDT) framework of Feng et al. [32], we exploit a multiple-layer deep structure with GBDT to predict unobserved LPIs (LPI-deepGBDT). First, five LPI datasets are constructed. Second, lncRNA and protein features are extracted by Pyfeat and BioProt, respectively. Third, a feature vector is built to represent each lncRNA-protein pair. Finally, a multiple-layer deep architecture integrating tree ensembles and hierarchical distributed representations is developed to classify lncRNA-protein pairs.
The remainder of this manuscript is organized as follows. The "Materials and methods" section describes the data resources and the LPI-deepGBDT framework. The "Results" section presents the results of a series of experiments. The "Discussion and further research" section discusses the LPI-deepGBDT method and provides directions for further research.

Data preparation
In this manuscript, we collect three human LPI datasets and two plant LPI datasets. Dataset 1, provided by Li et al. [14], contains 3,487 LPIs from 938 lncRNAs and 59 proteins; 3,479 LPIs between 935 lncRNAs and 59 proteins are finally obtained after removing the lncRNAs without sequence information in the NONCODE [33], NPInter [34], and UniProt [35] databases. For the two plant datasets, sequences are taken from PlncRNADB [37] and LPIs are extracted at http://bis.zju.edu.cn/PlncRNADB/. The details are described in Table 1.
We denote an LPI network via a matrix $Y$ whose entries are defined by Eq. (1):

$$y_{ij} = \begin{cases} 1, & \text{if lncRNA } l_i \text{ interacts with protein } p_j,\\ 0, & \text{otherwise.} \end{cases} \tag{1}$$
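As a concrete illustration, the minimal sketch below (with hypothetical identifiers and toy data, not the authors' code) builds such a matrix from a list of known interactions.

```python
import numpy as np

def build_interaction_matrix(lncrna_ids, protein_ids, known_pairs):
    """Return Y with y_ij = 1 if lncRNA i interacts with protein j, 0 otherwise (Eq. 1)."""
    l_index = {l: i for i, l in enumerate(lncrna_ids)}
    p_index = {p: j for j, p in enumerate(protein_ids)}
    Y = np.zeros((len(lncrna_ids), len(protein_ids)), dtype=np.int8)
    for l, p in known_pairs:
        Y[l_index[l], p_index[p]] = 1
    return Y

# toy example with 3 lncRNAs and 2 proteins (identifiers are illustrative)
Y = build_interaction_matrix(
    ["GAS5", "RN7SL1", "HOTAIR"],
    ["Q15717", "O00425"],
    [("GAS5", "Q15717"), ("RN7SL1", "O00425")],
)
```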
Overview of LPI-deepGBDT
In this study, we develop a feed-forward deep framework to infer new LPIs. Figure 1 describes the flowchart of LPI-deepGBDT. As shown in Fig. 1, the LPI-deepGBDT framework consists of three main processes after the LPI datasets are built. (1) Feature extraction. Pyfeat [38] and BioProt [39] are used to extract the original features of lncRNAs and proteins. (2) Feature selection. The lncRNA and protein features are reduced to two d-dimensional vectors by dimension reduction with Principal Component Analysis (PCA). The two vectors are then concatenated to represent lncRNA-protein pairs. (3) Classification. A multiple-layer deep structure, composed of forward mappings and inverse mappings, is developed to classify lncRNA-protein pairs.

Feature extraction of lncRNAs
Pyfeat [38] is widely applied to generate numerical features from sequence information.
In this study, we use Pyfeat to obtain lncRNA features and represent each lncRNA as a 3,051-dimensional vector. The details are shown in Table 2.
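Pyfeat covers many sequence-derived descriptors (k-mer composition, gapped k-mers, and so on). The snippet below is not Pyfeat's API; it is a minimal illustration of one such descriptor, normalized k-mer composition, computed directly from an lncRNA sequence.

```python
from itertools import product
import numpy as np

def kmer_composition(seq, k=3, alphabet="ACGU"):
    """Normalized k-mer frequency vector of an RNA sequence (4^k dimensions)."""
    kmers = ["".join(p) for p in product(alphabet, repeat=k)]
    index = {km: i for i, km in enumerate(kmers)}
    counts = np.zeros(len(kmers))
    for i in range(len(seq) - k + 1):
        km = seq[i:i + k]
        if km in index:          # skip k-mers containing ambiguous bases
            counts[index[km]] += 1
    total = counts.sum()
    return counts / total if total > 0 else counts

features = kmer_composition("AUGGCUACGUAGCUAGCUA", k=3)  # 64-dimensional vector
```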

Feature extraction of proteins
BioProt [39] utilizes various information to represent a protein.
In this study, we use BioProt to obtain protein features and represent each protein as a 9,890-dimensional vector. The details are shown in Table 3.

Dimension reduction
The feature dimensions of lncRNAs and proteins are reduced by PCA, respectively. Two d-dimensional feature vectors are obtained and concatenated into a 2d-dimensional vector x that represents an lncRNA-protein pair.
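A minimal sketch of this step with scikit-learn PCA; the target dimension d, the random placeholder matrices, and the variable names are illustrative assumptions rather than the paper's implementation details.

```python
import numpy as np
from sklearn.decomposition import PCA

d = 32  # illustrative target dimension

# placeholder feature matrices standing in for the Pyfeat / BioProt outputs
L = np.random.rand(200, 3051)   # lncRNA features
P = np.random.rand(80, 9890)    # protein features

L_red = PCA(n_components=d).fit_transform(L)   # (n_lncRNAs, d)
P_red = PCA(n_components=d).fit_transform(P)   # (n_proteins, d)

def pair_vector(i, j):
    """2d-dimensional representation of the pair (lncRNA i, protein j)."""
    return np.concatenate([L_red[i], P_red[j]])

x = pair_vector(0, 0)   # shape (2 * d,)
```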

Problem description
For a given LPI dataset D = (X, Y), where (x, y) represents an lncRNA-protein pair (a training example), x ∈ X denotes a 2d-dimensional LPI feature vector and y ∈ Y denotes its label, we aim to classify unknown lncRNA-protein pairs. For a feed-forward deep architecture with one original input layer, one output layer, and (m−1) intermediate layers, suppose that $o_i$ ($i \in \{0, 1, 2, \ldots, m\}$) denotes the output of the i-th layer. For an lncRNA-protein pair (x, y), we want to learn mappings $F_i$ based on GBDT that minimize the empirical loss L between the desired output y and the final real output $o_m$ on the training data.
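In symbols (notation hedged to match the description above), with $o_0 = x$ and $o_i = F_i(o_{i-1})$ for $i = 1, \ldots, m$, the learning objective can be written as

$$\min_{F_1,\ldots,F_m}\; \sum_{(x,y)\in D} L\bigl(y,\; o_m\bigr).$$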

Gradient boosting decision trees
GBDT can generate highly robust, interpretable, and competitive classification procedures, especially when exploiting less-than-clean data [29,40,41]. For an lncRNA-protein pair (x, y), let an estimator f(x) denote an approximate functional response to the label y. The GBDT model iteratively builds K individual decision trees $\{g(x; \alpha_1), \ldots, g(x; \alpha_K)\}$ using the training data D = (X, Y), and f(x) can be denoted as an expansion of the individual decision trees $g(x; \alpha_k)$ by Eq. (2),
where each tree splits the input space into N disjoint regions $\{R_{1k}, \ldots, R_{Nk}\}$ and assigns a constant value $\gamma_{jk}$ to each region $R_{jk}$, with $I = 1$ if $x \in R_{jk}$ and $I = 0$ otherwise. $f_k(x)$ denotes the additive function accumulated from the first decision tree to the k-th decision tree. The parameters $\alpha_k$ denote the partition locations and the terminal leaf nodes for each partitioning variable in the k-th decision tree. The parameters $\beta_k$ denote the weights that determine how to effectively integrate the predictions of the individual decision trees once the leaf nodes of each tree are known. The two parameters $\alpha_k$ and $\beta_k$ can be estimated by minimizing a loss function L(y, f(x)) by Eq. (3).
The parameters $\beta_k$ can be determined by Eq. (7). The estimator $f_k(x)$ for the k-th regression tree can be updated by Eq. (8), and the final estimator f(x) can be obtained by Eq. (9). The gradient boosting approach calculates the optimal values of the parameters $\alpha_k$ by minimizing the least-squares function defined by Eq. (5), and the parameters $\beta_k$ can then be solved from Eqs. (5) and (7). The GBDT algorithm is described as Algorithm 1.
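The referenced equations are not reproduced in this extract. A standard gradient-boosting formulation consistent with the description above is the following (the paper's exact equation numbering and notation may differ):

$$f(x)=\sum_{k=1}^{K}\beta_k\, g(x;\alpha_k), \qquad g(x;\alpha_k)=\sum_{j=1}^{N}\gamma_{jk}\, I\bigl(x\in R_{jk}\bigr),$$

$$(\beta_k,\alpha_k)=\arg\min_{\beta,\alpha}\sum_{i=1}^{n} L\bigl(y_i,\; f_{k-1}(x_i)+\beta\, g(x_i;\alpha)\bigr), \qquad f_k(x)=f_{k-1}(x)+\beta_k\, g(x;\alpha_k).$$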

The multi-layered deep architecture with GBDT
We exploit a multi-layered deep architecture with GBDT to classify unknown lncRNA-protein pairs. First, m gradient boosting decision trees are initialized, and the initial forward mappings, inverse mappings, and output are computed. Second, the pseudo-label in the m-th layer is obtained from the initialized output and the real label. Third, the forward mapping for each regression tree is iteratively updated based on the pseudo-label computed at the last iteration. Fourth, the inverse mapping is iteratively learned based on the forward mapping achieved at the last iteration. Finally, the final label is output after m iterations.
Phase I: Initialize GBDT. It is very difficult to draw a random tree structure from the distribution over all potential tree configurations. Therefore, Gaussian noise is injected into the outputs of all intermediate layers. Given a deep structure with m layers, the initial forward mappings $F^0_i$ ($i \in \{1, 2, \ldots, m\}$) and inverse mappings $G^0_i$ ($i \in \{2, 3, \ldots, m\}$) are obtained with a few very small trees, where the index 0 indicates tree structures obtained in the initialization procedure. In addition, the initial output $o^0$ is set to X. The mappings are then updated iteratively; at each iteration t, we conduct Phases II-IV.

Phase II: Compute the pseudo-label in the m-th layer
The pseudo-label in the m-th layer is computed from the final output $o_m$ and the real label y by Eq. (10), where $\alpha$ denotes the learning rate.
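A hedged form of this update, following the multi-layered GBDT formulation of Feng et al. [32] (the exact expression of Eq. (10) may differ), is

$$p^{t}_{m} \;=\; o_m \;-\; \alpha\, \frac{\partial L\bigl(y,\, o_m\bigr)}{\partial o_m}.$$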

Phase III: Forward mapping
At the t-th iteration, during the forward mapping, $F^t_i$ for each regression tree in a GBDT is first initialized as $F^t_i = F^{t-1}_i$ and then updated based on the pseudo-label $p^t_{i-1}$, where $p^t_{i-1} = G_i(p^t_i)$. The details are described as follows. For each regression tree in a GBDT, we define a reconstruction loss function as Eq. (11).
The pseudo-residuals for each tree can be computed by Eq. (12).
Once the pseudo-label in each layer is calculated, each $F^{t-1}_i$ can perform a gradient ascent step towards its pseudo-residuals by Eq. (12).
Each regression tree $g_k$ is fitted to $r^{forw}_k$ on the training set $(o_{i-1}, r^{forw}_k)$, and the forward mapping $F^t_i$ for each tree is updated by Eq. (13).
Finally, we obtain the output of each layer through the forward mapping by Eq. (14).
The forward mapping procedures are described as Algorithm 2.
In this phase, we use a bottom-up update scheme; that is, $F_i$ is updated before $F_j$ when i < j. In addition, each $F_i$ can run multiple rounds of additive boosting towards its current pseudo-label.
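The sketch below illustrates one such forward-mapping boosting step for a single layer, using shallow scikit-learn regression trees as weak learners. It assumes a squared-error reconstruction loss and multi-output targets handled one dimension at a time; names and defaults are illustrative, not the authors' implementation.

```python
import numpy as np
from sklearn.tree import DecisionTreeRegressor

def boost_forward_layer(o_prev, pseudo_label, forward, lr=0.1, n_rounds=5, max_depth=3):
    """Additively boost F_i towards its pseudo-label (squared-error loss assumed).

    o_prev       : output of layer i-1, shape (n_samples, dim_in)
    pseudo_label : target p_i for this layer, shape (n_samples, dim_out)
    forward      : callable implementing the current F_i
    Returns the updated callable F_i.
    """
    for _ in range(n_rounds):
        residual = pseudo_label - forward(o_prev)          # pseudo-residuals (Eq. (12)-style)
        trees = [DecisionTreeRegressor(max_depth=max_depth).fit(o_prev, residual[:, j])
                 for j in range(residual.shape[1])]        # one shallow tree per output dim
        prev = forward
        # F_i <- F_i + lr * g  (Eq. (13)-style additive update)
        forward = (lambda o, _prev=prev, _trees=trees:
                   _prev(o) + lr * np.column_stack([t.predict(o) for t in _trees]))
    return forward
```

Repeating this step for i = 1, ..., m in bottom-up order realizes the update scheme described above.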
Phase IV: Inverse mapping. At the t-th iteration, for each decision tree, given the forward mapping $F^{t-1}_i$ learned at the previous iteration, we intend to obtain a "pseudo-inverse" mapping $G^t_i$ paired with it such that the reconstruction loss $L^{inv}_i$ in the i-th layer is minimized. To build a more robust and generative model, random noise $\sigma$ is injected into the outputs of all intermediate layers. For each regression tree $g_k$ in a GBDT, the reconstruction error can be computed by Eq. (17). Based on the noise injection, each $G^{t-1}_i$ follows a gradient ascent towards the pseudo-residuals by Eq. (18), where $r^{inv}_k$ denotes the pseudo-residuals of the k-th regression tree during the inverse mapping. Each regression tree $g_k$ in the GBDT is fitted to $r^{inv}_k$ on the corresponding training set, and $G^t_i$ is then updated by Eq. (19).
Finally, the pseudo-label in each intermediate layer can be propagated from the final layer to the first layer by Eq. (20). For all intermediate layers and the final output layer ($i \in \{m, m-1, \ldots, 2\}$), the inverse mapping procedures are described as Algorithm 3.
We thus obtain the inverse mappings $G^t_i$ for the final output layer and all intermediate layers, and the pseudo-labels $p^t_i$ for the first layer and all intermediate layers. After finishing the t-th iteration, we continue with the (t+1)-th iteration to update $F_i$ and $G_i$.
During LPI prediction, a linear classifier $Y = XW^T + b$ is applied to the output of the forward mapping in the m-th layer. There are two main advantages. First, the first (m−1) layers can re-represent the LPI features so that they are as linearly separable as possible. Second, the inverse mapping in the m-th layer does not have to be computed, because the pseudo-label in the (m−1)-th layer can be obtained from the gradient of the global loss with respect to the output of the (m−2)-th layer.
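Putting the phases together, the sketch below outlines the whole training loop under simplifying assumptions: squared-error losses for the mappings, a logistic-regression output layer, per-dimension regression trees as weak learners, and Phases III/IV collapsed into a single top-down sweep. Function and variable names are illustrative and not taken from the authors' code.

```python
import numpy as np
from sklearn.tree import DecisionTreeRegressor
from sklearn.linear_model import LogisticRegression

def fit_multioutput_trees(X, R, max_depth=3):
    """Fit one shallow regression tree per output dimension of the target R."""
    models = [DecisionTreeRegressor(max_depth=max_depth).fit(X, R[:, j])
              for j in range(R.shape[1])]
    return lambda Z: np.column_stack([m.predict(Z) for m in models])

def train_deep_gbdt(X, y, layer_dims, n_iters=5, lr=0.1, noise=0.01, seed=0):
    """Simplified multi-layered GBDT trainer (Phases I-IV in one loop)."""
    rng = np.random.default_rng(seed)
    m = len(layer_dims)
    # Phase I: initialize each forward mapping with tiny trees fitted to Gaussian noise
    F, o = [], X
    for dim in layer_dims:
        target = rng.normal(scale=noise, size=(X.shape[0], dim))
        F.append(fit_multioutput_trees(o, target, max_depth=1))
        o = F[-1](o)
    clf = LogisticRegression(max_iter=1000).fit(o, y)        # linear output layer

    for _ in range(n_iters):
        outs = [X]                                            # forward pass, collect o_0..o_m
        for f in F:
            outs.append(f(outs[-1]))
        # Phase II: pseudo-label at the top, gradient step on the (logistic) global loss
        grad = (clf.predict_proba(outs[-1])[:, 1] - y)[:, None] * clf.coef_
        p = outs[-1] - lr * grad
        # Phases III/IV (simplified): boost F_i towards its pseudo-label, then propagate
        # the pseudo-label downwards through a freshly fitted, noise-injected inverse G_i
        for i in range(m - 1, -1, -1):
            residual = p - F[i](outs[i])
            delta = fit_multioutput_trees(outs[i], residual)
            prev = F[i]
            F[i] = (lambda Z, _prev=prev, _delta=delta: _prev(Z) + lr * _delta(Z))
            if i > 0:
                G = fit_multioutput_trees(
                    outs[i + 1] + rng.normal(scale=noise, size=outs[i + 1].shape),
                    outs[i])
                p = G(p)
        top = X                                               # refresh the linear classifier
        for f in F:
            top = f(top)
        clf = LogisticRegression(max_iter=1000).fit(top, y)
    return F, clf

def predict_deep_gbdt(F, clf, X_new):
    """Interaction probability for new lncRNA-protein pair vectors."""
    o = X_new
    for f in F:
        o = f(o)
    return clf.predict_proba(o)[:, 1]
```

At prediction time, a pair vector x is simply passed through all forward mappings and the linear classifier, matching the description above.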

Results
The experiments are mainly designed to empirically examine whether the proposed LPI-deepGBDT method can effectively predict new LPIs.

Evaluation metrics
Six measurements are utilized to evaluate the performance of LPI-deepGBDT: precision, recall, accuracy, F1-score, AUC, and AUPR. For all six evaluation criteria, higher values indicate better performance [43]. The experiments are repeated 20 times, and the average performance over the 20 rounds is taken as the final performance. The first four measurements are defined by Eqs. (21)-(24).
where TP, TN, FP, and FN represent the numbers of true positives, true negatives, false positives, and false negatives, respectively. Precision denotes the ratio of correctly predicted positive samples among all predicted positive samples. Recall represents the ratio of correctly predicted positive samples among all real positive samples. Accuracy denotes the ratio of correctly predicted positive and negative samples among all samples. F1-score is the harmonic mean of precision and recall. The Area Under the receiver operating Characteristic curve (AUC) measures the trade-off between the true positive rate and the false positive rate. The Area Under the Precision-Recall curve (AUPR) evaluates the trade-off between precision and recall.
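For completeness, the standard definitions of the four threshold-based measurements (the paper's Eqs. (21)-(24) are not reproduced in this extract) are

$$\mathrm{Precision}=\frac{TP}{TP+FP},\qquad \mathrm{Recall}=\frac{TP}{TP+FN},$$

$$\mathrm{Accuracy}=\frac{TP+TN}{TP+TN+FP+FN},\qquad F1=\frac{2\cdot \mathrm{Precision}\cdot \mathrm{Recall}}{\mathrm{Precision}+\mathrm{Recall}}.$$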
Three 5-fold cross validation (CV) settings are adopted.
5-fold CV on lncRNAs (CV1): 80% of lncRNAs are used as the training set and the remainder as the test set in each round.
5-fold CV on proteins (CV2): 80% of proteins are used as the training set and the remainder as the test set in each round.
5-fold CV on lncRNA-protein pairs (CV3): 80% of lncRNA-protein pairs are used as the training set and the remainder as the test set in each round.
The three CVs refer to potential LPI identification for (1) a new (unknown) lncRNA without interaction information, (2) a new protein without interaction information, and (3) lncRNA-protein pairs, respectively.
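The three settings can be reproduced with group-aware splitting: CV1 groups pairs by lncRNA, CV2 by protein, and CV3 splits pairs directly. Below is a minimal sketch with scikit-learn on toy data; variable names are illustrative.

```python
import numpy as np
from sklearn.model_selection import GroupKFold, KFold

# pairs[k] = (lncRNA index, protein index) of the k-th lncRNA-protein pair (toy data)
pairs = np.array([(i, j) for i in range(20) for j in range(5)])
X_pairs = np.random.rand(len(pairs), 64)        # 2d-dimensional pair vectors

cv1 = GroupKFold(n_splits=5).split(X_pairs, groups=pairs[:, 0])        # CV1: hold out lncRNAs
cv2 = GroupKFold(n_splits=5).split(X_pairs, groups=pairs[:, 1])        # CV2: hold out proteins
cv3 = KFold(n_splits=5, shuffle=True, random_state=0).split(X_pairs)   # CV3: hold out pairs

for train_idx, test_idx in cv1:
    # every pair involving a held-out lncRNA falls into the test fold,
    # mimicking prediction for a new lncRNA without any interaction information
    pass
```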

Comparison with five state-of-the-art LPI prediction methods
We compare the proposed LPI-deepGBDT framework with five classical LPI identification models, that is, LPI-BLS, LPI-CatBoost, PLIPCOM, LPI-SKF, and LPI-HNM, to measure the classification performance and robustness of LPI-deepGBDT. The number of negative samples is set equal to the number of positive samples. The best performance in each row is shown in boldface in Tables 5, 6 and 7. Table 5 gives the comparative results of the six LPI identification models in terms of the six measurements under CV1. It can be observed that LPI-deepGBDT achieves better average recall, accuracy, F1-score, AUC, and AUPR than LPI-BLS, LPI-CatBoost, PLIPCOM, and LPI-HNM on the five LPI datasets. For example, LPI-deepGBDT obtains the best average F1-score of 0.7586, which is 8.99%, 9.83%, 1.61%, 22.70%, and 8.37% higher than LPI-BLS, LPI-CatBoost, PLIPCOM, LPI-SKF, and LPI-HNM, respectively. More importantly, it achieves the best average AUC of 0.8321, which is 1.63%, 8.32%, 2.37%, 0.02%, and 6.26% better than the above five models, respectively. It also achieves the best average AUPR of 0.8095, which is 1.85%, 5.53%, 0.77%, 0.02%, and 0.24% higher than the five methods, respectively.
LPI-BLS, LPI-CatBoost, PLIPCOM, and LPI-HNM are four state-of-the-art supervised learning-based LPI prediction methods, and LPI-deepGBDT obtains better performance than all of them. The results suggest the powerful classification ability of LPI-deepGBDT under CV1. More importantly, although LPI-deepGBDT computes slightly lower precision than LPI-SKF, the other five measurements are better than those of LPI-SKF. LPI-SKF is a network-based LPI inference algorithm. This type of method has one limitation: it cannot be applied to predict possible interaction information for an orphan lncRNA. Therefore, LPI-deepGBDT is appropriate for prioritizing underlying proteins associated with a new lncRNA. Table 6 depicts the performance of LPI-BLS, LPI-CatBoost, PLIPCOM, LPI-SKF, LPI-HNM, and LPI-deepGBDT under CV2. The results show that the performance of LPI-deepGBDT is slightly lower than that of LPI-HNM. Under CV2, 80% of proteins are used as the training set and the remainder as the test set in each round. That is, a relatively larger proportion of proteins have their interaction information masked, thereby reducing the number of training samples and affecting the performance of LPI-deepGBDT. Compared with the other five methods, LPI-HNM may be relatively robust to the level of data abundance when predicting possible lncRNAs for a new protein.
In particular, LPI-BLS is an ensemble learning-based model. LPI-deepGBDT significantly outperforms LPI-BLS in terms of AUC and AUPR. The results illustrate that LPI-deepGBDT may obtain better ensemble performance. In addition, LPI-CatBoost and PLIPCOM are two boosting-based techniques. LPI-deepGBDT, integrating the idea of a deep architecture, obtains better performance than the two methods. This shows that deep learning may more effectively learn the relevances between lncRNAs and proteins. The results under CV3 are given in Table 7. The comparative results demonstrate that LPI-deepGBDT computes the best average precision, recall, accuracy, F1-score, AUC, and AUPR over all datasets. For example, LPI-deepGBDT obtains the best average F1-score of 0.8429, which is 14.83%, 10.77%, 3.10%, 16.73%, and 18.43% higher than LPI-BLS, LPI-CatBoost, PLIPCOM, LPI-SKF, and LPI-HNM, respectively. More importantly, it achieves the best AUC of 0.9073, which is 4.93%, 11.21%, 3.32%, 0.12%, and 14.49% better than the above five models, respectively. It also achieves the best average AUPR of 0.8849, which is 5.82%, 8.84%, 2.59%, 2.62%, and 9.13% higher than the five methods, respectively. The results characterize the superior classification performance of LPI-deepGBDT. Therefore, LPI-deepGBDT can precisely discover the potential relationships between lncRNAs and proteins based on known association information.

In addition, we investigate the performance of all six LPI prediction methods under the three different cross validations. The results in Tables 5, 6 and 7 show that LPI-BLS, LPI-CatBoost, PLIPCOM, LPI-SKF, and LPI-deepGBDT achieve much better performance under CV3 than under CV1, followed by CV2, regardless of precision, recall, accuracy, F1-score, AUC, or AUPR. Under CV3, cross validation is conducted on all lncRNA-protein pairs: 80% of the lncRNA-protein pairs are used to train the model and the remaining 20% are used to test it. However, under CV1 or CV2, cross validation is implemented on lncRNAs or proteins, that is, 80% of the lncRNAs or proteins are used to train the model and the remaining 20% are used to test it. CV3 may therefore provide more LPI information than CV1 and CV2. The results suggest that abundant data contribute to improving the prediction performance of LPI identification models.

Case study
In this section, we aim to mine possible interaction data for a new lncRNA or protein, as well as for lncRNA-protein pairs, based on known LPIs.

Identifying potential proteins for a new lncRNA
RN7SL1 is an endogenous RNA. The lncRNA is usually protected by RNA-binding protein SRP9/14. Its increase can alter the stoichiometry with SRP9/14 and thus produce unshielded RN7SL1 in stromal exosomes. After exosome transfer to breast cancer cells, unshielded RN7SL1 can activate breast cancer RIG-I and promote tumor growth, metastasis, and therapy resistance [44]. Hepatocellular carcinoma patients with higher RN7SL1 concentrations also show lower survival rates. RN7SL1 may enhance hepatocellular carcinoma cell proliferation and clonogenic growth [45].
In this section, we mask all interaction information for RN7SL1 and aim to infer possible proteins interacting with this lncRNA. The experiments are repeated 10 times, and the interaction probabilities between RN7SL1 and the other proteins are averaged over the 10 runs. The predicted top 5 proteins interacting with RN7SL1 on the three human LPI datasets are described in Table 8. In Dataset 1, we observe that RN7SL1 is predicted to interact with Q15465. Q15465 displays cholesterol transferase and autoproteolysis activity in the endoplasmic reticulum. Its N-terminal product is a morphogen required for diverse patterning events during development. It induces ventral cell fate in somites and the neural tube, is required for axon guidance, and is closely related to anterior-posterior axis patterning in the developing limb bud [35]. In this dataset, RN7SL1 may associate with 59 proteins. In the other two datasets, there are no known associated lncRNAs for Q15465. Although the interaction between RN7SL1 and Q15465 has not been validated, among all 59 possible associated proteins, Q15465 is ranked 4, 6, 8, 9, and 14 by LPI-CatBoost, PLIPCOM, LPI-SKF, LPI-HNM, and LPI-BLS, respectively. Therefore, the association between RN7SL1 and Q15465 needs further validation.
In Dataset 2, we predict that Q13148, P07910, and Q9NZI8 may interact with RN7SL1. The interaction between Q9NZI8 and RN7SL1 is known in Dataset 3. Q13148 is an RNA-binding protein involved in various processes of RNA biogenesis and processing. The protein controls the splicing of numerous non-coding and protein-coding RNAs, for example, transcripts encoding proteins involved in neuronal survival and mRNAs encoding proteins related to neurodegenerative diseases. It plays important roles in maintaining mitochondrial homeostasis, mRNA stability, circadian clock periodicity, and normal skeletal muscle formation and regeneration. In Dataset 2, RN7SL1 may associate with 84 proteins. Among the 84 candidate proteins for RN7SL1, the rankings of Q13148 predicted by LPI-deepGBDT, LPI-CatBoost, PLIPCOM, LPI-SKF, LPI-BLS, and LPI-HNM are 2, 3, 1, 3, 2, and 6, respectively. That is, all six LPI identification models predict that there may be an interaction between Q13148 and RN7SL1. Therefore, we infer that Q13148 may interact with RN7SL1.
More importantly, in Dataset 2, P07910 binds to pre-mRNA and regulates the stability and translation level of bound mRNA molecules. The protein is involved in the early steps of spliceosome assembly and pre-mRNA splicing. In the other two human LPI datasets, there are no known associated lncRNAs for P07910. Among the 84 potential associated proteins for RN7SL1, P07910 is ranked 3, 7, 8, 9, 11, and 9 by LPI-deepGBDT, LPI-BLS, LPI-CatBoost, PLIPCOM, LPI-SKF, and LPI-HNM, respectively. These rankings are relatively high. Therefore, we predict that P07910 may associate with RN7SL1.
In Dataset 3, we observe that Q9UKV8 and Q9Y6M1 may interact with RN7SL1. The interactions between RN7SL1 and these two proteins can be retrieved in Dataset 1. That is, the top 5 interactions predicted by LPI-deepGBDT can be validated by publications. In summary, the results of the case analyses based on interaction prediction for a new lncRNA suggest that LPI-deepGBDT can be utilized to identify proteins associated with a new lncRNA.

Finding potential lncRNAs interacting with a new protein
Q9UL18 is a protein required for RNA-mediated gene silencing. By binding to short RNAs or short interfering RNAs, the protein can repress the translation of mRNAs complementary to them. It lacks endonuclease activity and thus cannot cleave target mRNAs directly. It is also required for transcriptional gene silencing of promoter regions complementary to bound short antigene RNAs [35]. In this section, we mask the interaction information for Q9UL18 and intend to find associated lncRNAs for the protein. The predicted top 5 lncRNAs on the three human LPI datasets are shown in Table 9.
In Datasets 1-3, Q9UL18 may interact with 935, 885, and 990 lncRNAs. It can be seen that all the predicted top 5 interactions on each dataset are validated as known LPIs. The results suggest that LPI-deepGBDT can be applied to prioritize possible lncRNAs for a new protein.

Finding new LPIs based on known LPIs
We further infer new LPIs based on LPI-deepGBDT. We rank all lncRNA-protein pairs according to the computed average interaction probabilities. Figures 2, 3, 4, 5 and 6 give the predicted 50 LPIs with the highest interaction scores. In the five figures, black dotted lines and solid lines represent unknown and known LPIs obtained from LPI-deepGBDT, respectively; gold ovals denote proteins and deep sky blue rounded rectangles denote lncRNAs. There are 55,165, 74,340, 26,730, 3,815, and 71,568 known and unknown lncRNA-protein pairs on the five datasets, respectively. We observe that the unknown lncRNA-protein pairs between NONHSAT023366 (RAB30-AS1) and O00425, n378107 (NONHSAT007673, GAS5) and Q15717, NONHSAT143568 (LINC-01572) and P35637, AthlncRNA376 (TCONS_00057930) and O22823, and ZmalncRNA530 (TCONS_00007931) and C0PLI2, which are predicted to have the highest association scores on the five datasets, are ranked 1, 3, 1, 6, and 113, respectively. The lncRNA GAS5 has close linkages with multiple complex diseases. The lncRNA is a repressor of the glucocorticoid receptor and is associated with growth arrest and starvation [46]. It is downregulated in breast cancer [47]. It can also promote the microglial inflammatory response in Parkinson's disease [48] and control apoptosis in non-small-cell lung cancer [49] and prostate cancer cells [50]. Its decreased expression indicates a poor prognosis in cervical cancer [51] and gastric cancer [52].
Q15717 increases mRNA stability, mediates the anti-proliferative activity of CDKN2A, and regulates p53/TP53 expression. It increases the stability of leptin mRNA and is involved in embryonic stem cell differentiation. In Dataset 2, GAS5 has been validated to interact with P35637 and Q13148. P35637 plays an important role in diverse cellular processes including transcription regulation, DNA repair and damage response, RNA transport, and RNA splicing. It supports RNA transport, mRNA stability, and synaptic homeostasis in neuronal cells. Q13148 plays a crucial role in maintaining mitochondrial homeostasis; it participates in the formation and regeneration of normal skeletal muscle and negatively regulates the expression of CDK6. The three proteins are RNA-binding proteins with partly similar biological functions. Therefore, we infer that Q15717 may be a corresponding binding protein of GAS5.

Discussion and further research
lncRNAs regulate many important biological processes and have close relationships with multiple human complex diseases. However, most of them are not annotated because of their poor evolutionary conservation. Recent research suggests that lncRNAs implement their functions by binding to the corresponding proteins. Therefore, it is important to infer potential interactions between lncRNAs and proteins. Various computational methods have been designed to identify new LPIs. These models improved LPI prediction and found many potential linkages between the two entities.