
LPI-deepGBDT: a multiple-layer deep framework based on gradient boosting decision trees for lncRNA–protein interaction identification

Abstract

Background

Long noncoding RNAs (lncRNAs) play important roles in various biological and pathological processes. Discovery of lncRNA–protein interactions (LPIs) contributes to understanding the biological functions and mechanisms of lncRNAs. Although wet experiments have found a few interactions between lncRNAs and proteins, experimental techniques are costly and time-consuming. Therefore, computational methods are increasingly exploited to uncover possible associations. However, existing computational methods have several limitations. First, the majority of them were evaluated on a single dataset, which may result in prediction bias. Second, few of them can identify interaction data for new lncRNAs (or proteins). Finally, they fail to fully utilize the diverse biological information of lncRNAs and proteins.

Results

Under a feed-forward deep architecture based on gradient boosting decision trees (LPI-deepGBDT), this work focuses on classifying unobserved LPIs. First, three human LPI datasets and two plant LPI datasets are assembled. Second, the biological features of lncRNAs and proteins are extracted by Pyfeat and BioProt, respectively. Third, the features are dimensionally reduced and concatenated into a vector to represent an lncRNA–protein pair. Finally, a deep architecture composed of forward mappings and inverse mappings is developed to predict underlying linkages between lncRNAs and proteins. LPI-deepGBDT is compared with five classical LPI prediction models (LPI-BLS, LPI-CatBoost, PLIPCOM, LPI-SKF, and LPI-HNM) under three cross validations on lncRNAs, proteins, and lncRNA–protein pairs, respectively. It obtains the best average AUC and AUPR values in the majority of situations, significantly outperforming the other five LPI identification methods. That is, the AUCs computed by LPI-deepGBDT under the three cross validations are 0.8321, 0.6815, and 0.9073, respectively, and the AUPRs are 0.8095, 0.6771, and 0.8849, respectively. The results demonstrate the powerful classification ability of LPI-deepGBDT. Case study analyses show that there may be interactions between GAS5 and Q15717, RAB30-AS1 and O00425, and LINC-01572 and P35637.

Conclusions

By integrating ensemble learning with hierarchical distributed representations and building a multiple-layered deep architecture, this work improves LPI prediction performance and effectively probes interaction data for new lncRNAs/proteins.


Introduction

Long noncoding RNAs (lncRNAs) are a class of important noncoding RNAs longer than 200 nucleotides. This class of RNAs has been reported to be closely associated with multiple biological processes including RNA splicing, transcriptional regulation, and the cell cycle [1,2,3]. More importantly, the mutations and dysregulations of lncRNAs have important effects on multiple cancers [4, 5], for instance, lung cancer [6], colon cancer [7], and prostate cancer [8]. For example, the lncRNAs UCA1, PCA3, and HOTAIR have been used as possible biomarkers of bladder cancer detection, prostate cancer aggressiveness, and hepatocellular carcinoma recurrence, respectively [9,10,11]. Although lncRNAs have been intensively investigated, their functions and molecular mechanisms still largely remain elusive [2, 12]. Recent studies have revealed that lncRNAs are densely linked to their binding proteins. Therefore, identifying the binding proteins of lncRNAs is urgent for better understanding the biological functions and molecular mechanisms of lncRNAs [1].

Although wet experiments for lncRNA–protein interaction (LPI) discovery have been designed, computational methods are appealing for inferring the associations between lncRNAs and proteins [13]. The computational methods can be roughly divided into two categories: network-based methods and machine learning-based methods. Network-based LPI inference methods integrated various biological data and designed network propagation methods to find potential LPIs in a heterogeneous lncRNA–protein network. For example, Li et al. [14] proposed a random walk with restart-based LPI prediction model. Zhou et al. [15] took miRNAs as mediators to predict LPIs in a heterogeneous network (LPI-HNM). Yang et al. [16] used the HeteSim algorithm to compute association scores between lncRNAs and proteins. Zhao et al. [17], Ge et al. [18], and Xie et al. [19] explored bipartite network projection-based recommendation techniques to compute the interaction probabilities between lncRNAs and proteins. Zhang et al. [20] explored a novel LPI prediction framework based on a linear neighborhood propagation algorithm. Zhou et al. [21] combined similarity kernel fusion and Laplacian regularized least squares to find unobserved LPIs (LPI-SKF).

Machine learning-based LPI inference methods characterized the biological features of lncRNAs and proteins and exploited machine learning algorithms to probe LPI candidates [22]. These methods mainly contain matrix factorization techniques and ensemble learning techniques [23]. Matrix factorization-based LPI prediction approaches used various matrix factorization techniques. Liu et al. [24] identified new LPIs by combining neighborhood regularized logistic matrix factorization. Zhao et al. [25] inferred LPI candidates by combining the neighborhood regularized logistic matrix factorization model with random walk. Zhang et al. [26] proposed a graph regularized nonnegative matrix factorization method to uncover unobserved LPIs.

Ensemble learning-based LPI inference methods utilized diverse ensemble techniques. Zhang et al. [27] exploited an ensemble learning model to discover the interactions between lncRNAs and proteins. Liu et al. [24] designed three ensemble strategies to predict LPIs based on support vector machine, random forest, and extreme gradient boosting, respectively. Deng et al. [1] extracted HeteSim features and diffusion features of lncRNAs and proteins and constructed a gradient tree boosting-based LPI prediction algorithm (PLIPCOM). Fan and Zhang [28] explored a stacked ensemble-based LPI classification model via logistic regression (LPI-BLS). Deng et al. [29] proposed a gradient boosted regression tree model for finding possible LPIs. Wekesa et al. [30] designed a categorical boosting-based LPI discovery framework (LPI-CatBoost). In addition, deep learning (such as deep graph neural networks [31]) is increasingly exploited to identify LPI candidates.

Computational methods have effectively identified potential LPIs. However, a few problems remain to be solved. First, the majority of computational models were evaluated on one dataset, which may result in predictive bias. Second, they were not designed to infer potential proteins (or lncRNAs) associated with a new lncRNA (or protein). Finally, their prediction performance needs further improvement.

To solve the above problems, inspired by the multi-layered Gradient Boosting Decision Trees (GBDT) provided by Feng et al. [32], we exploit a multiple-layer deep structure with GBDTs to predict unobserved LPIs (LPI-deepGBDT) in this study. First, five LPI datasets are constructed. Second, lncRNA and protein features are extracted by Pyfeat and BioProt, respectively. Third, a feature vector is built to represent an lncRNA–protein pair. Finally, a multiple-layer deep architecture integrating tree ensembles and hierarchical distributed representations is developed to classify lncRNA–protein pairs.

The remainder of this manuscript is organized as follows. “Materials and methods” section describes the data resources and the LPI-deepGBDT framework. “Results” section illustrates the results of a series of experiments. “Discussion and further research” section discusses the LPI-deepGBDT method and provides directions for further research.

Materials and methods

Data preparation

In this manuscript, we collect three human LPI datasets and two plant LPI datasets. Dataset 1, provided by Li et al. [14], contains 3,487 LPIs between 938 lncRNAs and 59 proteins. After removing the lncRNAs without sequence information in the NONCODE [33], NPInter [34], and UniProt [35] databases, 3,479 LPIs between 935 lncRNAs and 59 proteins are finally obtained.

Dataset 2, built by Zheng et al. [36], contains 4,467 human LPIs between 1,050 lncRNAs and 84 proteins. After removing the lncRNAs without any sequence information, 3,265 LPIs between 885 lncRNAs and 84 proteins are extracted. Dataset 3, constructed by Zhang et al. [20], contains 4,158 LPIs between 990 lncRNAs and 27 proteins.

Dataset 4 provides 948 Arabidopsis thaliana LPIs between 109 lncRNAs and 35 proteins. Dataset 5 provides 22,133 Zea mays LPIs between 1,704 lncRNAs and 42 proteins. The sequence information of the two entity types is downloaded from the PlncRNADB database [37], and the LPIs are extracted from http://bis.zju.edu.cn/PlncRNADB/. The details are described in Table 1.

Table 1 The statistics of LPI information

We denote an LPI network via a matrix Y:

$$\begin{aligned} y_{ij} = \left\{ \begin{array}{ll} 1, &{} \text {if lncRNA } l_i \text { interacts with protein } p_j\\ 0, &{} \text {otherwise} \end{array} \right. \end{aligned}$$
(1)
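To make Eq. (1) concrete, the following minimal sketch builds the interaction matrix \(\varvec{Y}\) from a list of known pairs. The identifiers and the pair list are hypothetical placeholders, not entries from the five datasets.

```python
import numpy as np

# Hypothetical identifiers and known interactions (placeholders only).
lncrnas = ["GAS5", "RN7SL1", "HOTAIR"]
proteins = ["Q15717", "P35637"]
known_pairs = [("GAS5", "P35637"), ("RN7SL1", "Q15717")]

l_index = {name: i for i, name in enumerate(lncrnas)}
p_index = {name: j for j, name in enumerate(proteins)}

# Eq. (1): y_ij = 1 if lncRNA l_i interacts with protein p_j, 0 otherwise.
Y = np.zeros((len(lncrnas), len(proteins)), dtype=int)
for l, p in known_pairs:
    Y[l_index[l], p_index[p]] = 1

print(Y)
```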

Overview of LPI-deepGBDT

In this study, we develop a feed-forward deep framework to infer new LPIs. Figure 1 describes the flowchart of LPI-deepGBDT. As shown in Fig. 1, the LPI-deepGBDT framework consists of three main processes after the LPI datasets are built. (1) Feature extraction. Pyfeat [38] and BioProt [39] are used to extract the original features of lncRNAs and proteins. (2) Dimension reduction. The lncRNA and protein features are reduced into two d-dimensional vectors by Principal Component Analysis (PCA). The two vectors are then concatenated to depict an lncRNA–protein pair. (3) Classification. A multiple-layer deep structure, composed of forward mappings and inverse mappings, is developed to classify lncRNA–protein pairs.

Fig. 1 The flowchart of the LPI-deepGBDT framework. (1) Feature extraction. (2) Dimension reduction. (3) Classification

Feature extraction

Feature extraction of lncRNAs

Pyfeat [38] is widely applied to generate numerical features from sequence information. In this study, we use Pyfeat to obtain lncRNA features and represent an lncRNA as a 3,051-dimensional vector. The details are shown in Table 2.

Table 2 The lncRNA features by Pyfeat

Feature extraction of proteins

BioProt [39] utilizes various information to represent a protein. In this study, we use BioProt to obtain protein features and represent each protein as a 9,890-dimensional vector. The details are shown in Table 3.

Table 3 The protein features by BioProt

Dimension reduction

The feature dimensions of lncRNAs and proteins are reduced by PCA. The two resulting d-dimensional feature vectors are concatenated into a 2d-dimensional vector \(\varvec{x}\) that represents an lncRNA–protein pair.
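A minimal sketch of this step with scikit-learn is given below. The feature dimensions (3,051 for lncRNAs, 9,890 for proteins) and d = 100 follow the text, while the feature matrices and their row counts are random placeholders (note that PCA requires n_components to be no larger than the number of samples).

```python
import numpy as np
from sklearn.decomposition import PCA

rng = np.random.default_rng(0)
lnc_feats = rng.normal(size=(935, 3051))  # placeholder for Pyfeat lncRNA features
pro_feats = rng.normal(size=(300, 9890))  # placeholder for BioProt protein features

d = 100  # reduced dimension, as in the experimental settings
lnc_reduced = PCA(n_components=d).fit_transform(lnc_feats)
pro_reduced = PCA(n_components=d).fit_transform(pro_feats)

def pair_vector(i, j):
    """Represent the pair (lncRNA i, protein j) as a 2d-dimensional vector x."""
    return np.concatenate([lnc_reduced[i], pro_reduced[j]])

print(pair_vector(0, 0).shape)  # (200,)
```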

LPI prediction framework

Problem description

For a given LPI dataset \(D=(\varvec{X},\varvec{Y})\), where \((\varvec{x},\varvec{y})\) represents an lncRNA–protein pair (a training example), \(\varvec{x}\in \varvec{X}\) denotes a 2d-dimensional LPI feature vector and \(\varvec{y}\in \varvec{Y}\) denotes its label, we aim to classify unknown lncRNA–protein pairs.

For a feed-forward deep architecture with one original input layer, one output layer and (m-1) intermediate layers, suppose that \(\varvec{o}_i\) (\(i\in \{0,1,2, \cdots , m\}\)) denotes the output in the i-th layer. For an lncRNA–protein pair \((\varvec{x}, \varvec{y})\), we want to learn a mapping \({F_i}\) based on GBDT to minimize the empirical loss L between the desired output \(\varvec{y}\) and the final real output \(\varvec{o}_m\) on the training data.

Gradient boosting decision trees

GBDT can generate highly robust, interpretable, and competitive classification procedures, especially on less-than-clean data [29, 40, 41]. For an lncRNA–protein pair \((\varvec{x},\varvec{y})\), an estimator \(f(\varvec{x})\) approximates the label \(\varvec{y}\); the GBDT model iteratively builds K individual decision trees \(\{g(\varvec{x}; \alpha _1), \dots , g(\varvec{x};\alpha _K)\}\) using the training data \(D=(\varvec{X},\varvec{Y})\). Thus \(f(\varvec{x})\) can be expressed as an expansion of the individual decision trees \(g(\varvec{x};\alpha _k)\) by Eq. (2).

$$\begin{aligned} \left\{ \begin{array}{l} f(\varvec{x}) = \sum \limits _{k = 1}^K \beta _k g(\varvec{x};\alpha _k) \\ g(\varvec{x};\alpha _k) = \sum \limits _{j = 1}^J \gamma _{jk} I(\varvec{x} \in R_{jk}) \end{array} \right. \end{aligned}$$
(2)

where each tree splits the input space into J disjoint regions \(\{R_{1k},\cdots , R_{Jk}\}\) and assigns a constant value \(\gamma _{jk}\) to the region \(R_{jk}\), with \(I = 1\) if \(\varvec{x} \in R_{jk}\) and \(I = 0\) otherwise. \(f_k(\varvec{x})\) denotes the additive function accumulated from the first decision tree to the k-th decision tree. The parameter \(\alpha _{k}\) denotes the partition locations and the terminal leaf node values of the partitioning variables in the k-th decision tree. The parameter \(\beta _{k}\) denotes the weight used to determine how to effectively integrate the prediction results of the individual decision trees once the leaf nodes of each collection are known. The two parameters \(\alpha _{k}\) and \(\beta _k\) can be estimated by minimizing a loss function \(L(\varvec{y},f(\varvec{x}))\) by Eq. (3).

$$\begin{aligned} (\alpha _k,\beta _k)&= \mathop {\arg \min }\limits _{\alpha ,\beta } \sum \limits _{i = 1}^N L\big (\varvec{y}_i,\,f_{k-1}(\varvec{x}_i) + \beta g(\varvec{x}_i;\alpha )\big ) \\&= \mathop {\arg \min }\limits _{\alpha ,\beta } \sum \limits _{i = 1}^N L\Big (\varvec{y}_i,\,f_{k-1}(\varvec{x}_i) + \beta \sum \limits _{j = 1}^J \gamma _j I(\varvec{x}_i \in R_j)\Big ) \end{aligned}$$
(3)

and

$$\begin{aligned} f_k(\varvec{x}) = f_{k-1}(\varvec{x}) + \beta _k g(\varvec{x};\alpha _k) = f_{k-1}(\varvec{x}) + \beta _k \sum \limits _{j = 1}^J \gamma _{jk} I(\varvec{x} \in R_{jk}) \end{aligned}$$
(4)

To solve the model (3), Friedman [42] proposed a gradient boosting approach. First, the parameter \(\alpha _k\) can be estimated based on the least squares error:

$$\begin{aligned} \alpha _k = \mathop {\arg \min }\limits _{\alpha ,\beta } \sum \limits _{i = 1}^N \big [{\tilde{y}}_{ik} - \beta g(\varvec{x}_i;\alpha )\big ]^2 = \mathop {\arg \min }\limits _{\alpha ,\beta } \sum \limits _{i = 1}^N \Big [{\tilde{y}}_{ik} - \beta \sum \limits _{j = 1}^J \gamma _j I(\varvec{x}_i \in R_j)\Big ]^2 \end{aligned}$$
(5)

where \({\tilde{y}}_{ik}\) denotes the pseudo-residual (the negative gradient) defined by Eq. (6).

$$\begin{aligned} {{{\tilde{y}}}_{ik}} = - {\left[\frac{{\partial L({\varvec{y}_i},f({\varvec{x}_i}))}}{{\partial f({\varvec{x}_i})}}\right]_{f(\varvec{x}) = {f_{k - 1}}(\varvec{x})}} \end{aligned}$$
(6)

The parameters \(\beta _k\) can be determined by Eq. (7).

$$\begin{aligned} \beta _k&= \mathop {\arg \min }\limits _\beta \sum \limits _{i = 1}^N L\big (\varvec{y}_i,\,f_{k-1}(\varvec{x}_i) + \beta g(\varvec{x}_i;\alpha _k)\big ) \\&= \mathop {\arg \min }\limits _\beta \sum \limits _{i = 1}^N L\Big (\varvec{y}_i,\,f_{k-1}(\varvec{x}_i) + \beta \sum \limits _{j = 1}^J \gamma _{jk} I(\varvec{x}_i \in R_{jk})\Big ) \end{aligned}$$
(7)

The estimator \(f_k(\varvec{x})\) for the k-th regression tree can be updated by Eq. (8)

$$\begin{aligned} f_k(\varvec{x})=f_{k-1}(\varvec{x})+\beta _{k}g(\varvec{x};\alpha _k) \end{aligned}$$
(8)

The final estimator \(f(\varvec{x})\) can be obtained by Eq. (9)

$$\begin{aligned} f(\varvec{x}) = f_K(\varvec{x}) = \sum \limits _{k = 1}^K \beta _k g(\varvec{x};\alpha _k) \end{aligned}$$
(9)

The gradient boosting approach calculates the optimal values of the parameters \(\alpha _k\) by minimizing the least squares function defined by Eq. (5). The parameters \(\beta _k\) can then be solved by Eq. (7). The GBDT procedure is described as Algorithm 1.

Algorithm 1 The GBDT algorithm
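The following from-scratch sketch illustrates Algorithm 1 for the squared loss, under which the pseudo-residual of Eq. (6) reduces to \(\varvec{y} - f(\varvec{x})\). For simplicity it uses a constant shrinkage factor instead of the line search of Eq. (7), so it is an illustration of the technique rather than the exact procedure.

```python
import numpy as np
from sklearn.tree import DecisionTreeRegressor

def gbdt_fit(X, y, K=50, beta=0.1, max_depth=3):
    """Fit a GBDT for the squared loss: f_k = f_{k-1} + beta * g_k (Eq. 8)."""
    f = np.full(len(y), y.mean())  # initial constant estimator
    trees = []
    for _ in range(K):
        residual = y - f                                   # Eq. (6) for squared loss
        g = DecisionTreeRegressor(max_depth=max_depth).fit(X, residual)
        f += beta * g.predict(X)                           # Eq. (8), constant shrinkage
        trees.append(g)
    return y.mean(), trees

def gbdt_predict(init, trees, X, beta=0.1):
    f = np.full(X.shape[0], init)
    for g in trees:
        f += beta * g.predict(X)
    return f

# Toy usage on synthetic data.
rng = np.random.default_rng(0)
X = rng.normal(size=(200, 5))
y = (X[:, 0] > 0).astype(float)
init, trees = gbdt_fit(X, y)
print(gbdt_predict(init, trees, X[:5]))
```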

The multi-layered deep architecture with GBDT

We exploit a multi-layered deep architecture with GBDTs to classify unknown lncRNA–protein pairs. First, m gradient boosting decision trees are initialized, and the initial forward mappings, inverse mappings, and outputs are computed. Second, the pseudo-label in the m-th layer is obtained based on the initialized output and the real label. Third, the forward mapping for each regression tree is iteratively updated based on the pseudo-label computed at the last iteration. Fourth, the inverse mapping is iteratively learned based on the forward mapping achieved at the last iteration. Finally, the final label is output after the iterations finish.

Phase I: Initialize GBDT

It is very difficult to draw a random tree structure from the distribution over all potential tree configurations. Therefore, Gaussian noise is injected into the outputs of all intermediate layers. Given a deep structure with m layers, the initial forward mappings \(F_i^{0}\) (\(i\in \{1,2,..., m\}\)) and inverse mappings \(G_i^{0}\) (\(i\in \{2,3, ..., m\}\)) are obtained by fitting a few very small trees, where the index 0 represents the tree structures obtained in the initialization procedure. In addition, the initial output \(\varvec{o}_0\) is set to \(\varvec{X}\) and \(\varvec{o}_i=F_i^{0}(\varvec{o}_{i-1})\) (\(i\in \{1,2,..., m\}\)).

The iterations are updated based on the learned forward mappings and inverse mappings. At each iteration t, we conduct Phases II-IV.

Phase II: Compute the pseudo-label in the m-th layer

The pseudo-label in the m-th layer can be computed from the final output \(\varvec{o}_m\) and the real label \(\varvec{y}\) by Eq. (10), where \(\alpha\) is the learning rate:

$$\begin{aligned} \varvec{p}_m^{t} = {\varvec{o}_m} - \alpha \frac{{\partial L({\varvec{o}_m},\varvec{y})}}{{\partial {\varvec{o}_m}}} \end{aligned}$$
(10)
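For instance, with the squared loss \(L(\varvec{o}_m,\varvec{y}) = ||\varvec{o}_m-\varvec{y}||^2\), the gradient in Eq. (10) is \(2(\varvec{o}_m-\varvec{y})\), so the pseudo-label is simply a step from the current output towards the real label; a one-line sketch:

```python
import numpy as np

def pseudo_label(o_m, y, alpha=0.1):
    # Eq. (10) with squared loss: dL/do_m = 2 * (o_m - y).
    return o_m - alpha * 2.0 * (o_m - y)

print(pseudo_label(np.array([0.3]), np.array([1.0])))  # moves towards y
```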

Phase III: Forward mapping

At the t-th iteration, during the forward mapping, \(F_i^{t}\) for each regression tree in a GBDT is first initialized by \(F_i^{t}=F_i^{t-1}\) and then updated towards the pseudo-label \(\varvec{p}_{i}^{t}\) obtained by the inverse mapping (\(\varvec{p}_{i-1}^{t}=G_i(\varvec{p}_i^{t})\)). The details are described as follows.

For each regression tree in a GBDT, we define a reconstruction loss function as Eq. (11).

$$\begin{aligned} L_i^{forw}=||F_i^{t}(\varvec{o}_{i-1})-\varvec{p}_i^{t}|| \end{aligned}$$
(11)

The pseudo-residuals for each tree can be computed by Eq. (12).

$$\begin{aligned} {\varvec{r}_k^{forw}} = - \frac{{\partial {L_i^{forw}}}}{{\partial F_i^{t}({\varvec{o}_{i - 1}})}} \end{aligned}$$
(12)

Once the pseudo-label in each layer is calculated, each \(F_i^{t-1}\) can take a gradient step towards its pseudo-residual defined by Eq. (12).

Each regression tree \(g_k\) is fitted to \(\varvec{r}_k^{forw}\) based on the training set (\(\varvec{o}_{i-1},\varvec{r}_k^{forw}\)) and the forward mapping \(F_i^{t}\) for each tree can be updated by Eq. (13).

$$\begin{aligned} F_i^{t}=F_i^{t}+\gamma g_k \end{aligned}$$
(13)

Finally, we obtain the output of each layer via the forward mapping by Eq. (14).

$$\begin{aligned} \varvec{o}_i=F_i^{t}(\varvec{o}_{i-1}) \end{aligned}$$
(14)

The forward mapping procedures are described as Algorithm 2.

Algorithm 2 The forward mapping

In this phase, we use a bottom-up update technique, that is, \(F_i\) is updated before \(F_j\) when \(i<j\). In addition, each \(F_i\) can run multiple rounds of additive boosting operations towards its current pseudo-label.

Phase IV: Inverse mapping

At the t-th iteration, for each decision tree, given the forward mapping \(F_i^{t - 1}\) learned at the (t-1)-th iteration, we intend to obtain a “pseudo-inverse” mapping \(G_i^{t}\) paired with each \(F_i^{t - 1}\) such that \(G_i^t(F_i^{t - 1}({\varvec{o}_{i - 1}})) \approx {\varvec{o}_{i - 1}}\), by minimizing the expected value of the reconstruction loss function in Eq. (15):

$$\begin{aligned} {\hat{G}}_i^{t} = \mathop {\arg \min }\limits _{G_i^{t}} {\mathrm{E}_x}[{L_i^{inv}}({\varvec{o}_{i - 1}},G_i^{t}(F_i^{t - 1}({\varvec{o}_{i - 1}})))] \end{aligned}$$
(15)

where \(L_i^{inv}\) denotes the reconstruction loss in the i-th layer.

To build a more robust and generative model, random noise with standard deviation \(\sigma\) is injected into the outputs of all intermediate layers:

$$\begin{aligned} \varvec{o}_{i-1}^{noise}=\varvec{o}_{i-1}+\epsilon , \quad \epsilon \sim \mathrm{N}(\varvec{0},diag({\sigma ^2})) \end{aligned}$$
(16)

For each regression tree \(g_k\) in a GBDT, the reconstruction error can be computed by Eq. (17):

$$\begin{aligned} L_i^{inv} = ||G_i^t(F_i^{t - 1}(\varvec{o}_{i - 1}^{noise})) - \varvec{o}_{i - 1}^{noise}|| \end{aligned}$$
(17)

Based on the noise injection, each \(G_i^{t-1}\) takes a gradient step towards the pseudo-residuals defined by Eq. (18):

$$\begin{aligned} {\varvec{r}_k^{inv}} = - \frac{{\partial L_i^{inv}}}{{\partial G_i^t(F_i^{t - 1}(\varvec{o}_{i - 1}^{noise}))}} \end{aligned}$$
(18)

where \({\varvec{r}_k^{inv}}\) denotes the pseudo-residuals of the k-th regression tree during the inverse mapping. For each regression tree \(g_k\) in the GBDT, we fit it to \(\varvec{r}_k^{inv}\) via the training set \((F_i^{t-1}(\varvec{o}_{i-1}^{noise}),\varvec{r}_k^{inv})\) and then update \(G_i^{t}\) by Eq. (19).

$$\begin{aligned} G_i^{t}=G_i^{t}+\gamma g_k \end{aligned}$$
(19)

Finally, the pseudo-label in each intermediate layer can be propagated from the final layer to the first layer by Eq. (20):

$$\begin{aligned} \varvec{p}_{i-1}^{t}=G_i^{t}(\varvec{p}_i^{t}) \end{aligned}$$
(20)

For all intermediate layers and the final output layer (\(i \in \{m,m-1,...,2\})\), the inverse mapping procedures are described as Algorithm 3.

Algorithm 3 The inverse mapping

We can obtain the inverse mapping \(G_i^{t}\) for the final output layer and all intermediate layers and the pseudo-labels \(\varvec{p}_{i}^{t}\) for the first layer and all the intermediate layers. After finishing the t-th iteration, we continue the \((t+1)\)-th iteration to update \(F_i\) and \(G_i\).

During LPI prediction, a linear classifier \(\varvec{Y}=\varvec{XW}^{T}+b\) is applied to the forward mapping in the m-th layer. There are two main advantages. First, the first (m-1) layers can re-represent the LPI features to be as linearly separable as possible. Second, the corresponding inverse mapping in the m-th layer does not have to be computed, because the pseudo-label in the (m-1)-th layer can be obtained directly from the gradient of the global loss with respect to the output of the (m-1)-th layer.
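The sketch below ties Phases I–IV together under strong simplifying assumptions: the squared loss everywhere, small scikit-learn GBDTs as the mappings, each mapping refit from scratch at every iteration (scikit-learn cannot resume additive boosting as Algorithms 2 and 3 do), and a GBDT mapping at the top layer instead of the linear classifier described above. It is an approximation of the training loop, not the authors' implementation.

```python
import numpy as np
from sklearn.ensemble import GradientBoostingRegressor
from sklearn.multioutput import MultiOutputRegressor

def mapper():
    # A small multi-output GBDT plays the role of one mapping F_i or G_i.
    return MultiOutputRegressor(
        GradientBoostingRegressor(n_estimators=20, max_depth=3))

def train_deep_gbdt(X, y, dims=(16, 16), T=3, alpha=0.5, sigma=0.1, seed=0):
    """Approximate the multi-layered deep GBDT training loop (Phases I-IV)."""
    rng = np.random.default_rng(seed)
    y = y.reshape(len(y), -1).astype(float)
    layer_dims = list(dims) + [y.shape[1]]  # architecture (input-16-16-output)

    # Phase I: initialize each forward mapping on Gaussian-noise targets.
    F, o = [], [X]
    for d in layer_dims:
        F_i = mapper().fit(o[-1], rng.normal(size=(len(X), d)))
        o.append(F_i.predict(o[-1]))
        F.append(F_i)
    m = len(F)

    for _ in range(T):
        # Phase II: pseudo-label in the m-th layer (Eq. 10, squared loss).
        p = [None] * (m + 1)
        p[m] = o[m] - alpha * 2.0 * (o[m] - y)

        # Phase IV: learn inverse mappings with noise injection (Eqs. 15-17)
        # and propagate pseudo-labels towards the first layer (Eq. 20).
        for i in range(m, 1, -1):
            noisy = o[i - 1] + rng.normal(scale=sigma, size=o[i - 1].shape)
            G_i = mapper().fit(F[i - 1].predict(noisy), noisy)
            p[i - 1] = G_i.predict(p[i])

        # Phase III: update each forward mapping towards its pseudo-label,
        # bottom-up (Eqs. 11-14); here each F_i is refit rather than boosted.
        o = [X]
        for i in range(1, m + 1):
            F[i - 1] = mapper().fit(o[-1], p[i])
            o.append(F[i - 1].predict(o[-1]))
    return F

def predict_deep_gbdt(F, X):
    o = X
    for F_i in F:
        o = F_i.predict(o)
    return o

# Toy usage on synthetic pair vectors.
rng = np.random.default_rng(0)
X = rng.normal(size=(150, 10))
y = (X[:, 0] + X[:, 1] > 0).astype(float)
F = train_deep_gbdt(X, y)
print(predict_deep_gbdt(F, X[:5]).ravel())
```

In a faithful implementation, the mappings would keep their trees across iterations and append new ones at every round, which is what allows the architecture to be trained without back-propagation.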

Results

The experiments are mainly designed to empirically examine whether the proposed LPI-deepGBDT method can effectively predict new LPIs.

Evaluation metrics

Six measurements are utilized to evaluate the performance of LPI-deepGBDT: precision, recall, accuracy, F1-score, AUC, and AUPR. For all six evaluation criteria, higher values indicate better performance [43]. The experiments are repeated 20 times, and the average performance over the 20 rounds is taken as the final performance. The first four measurements are defined by Eqs. (21)–(24).

$$\begin{aligned} Precision = \frac{TP}{TP+FP} \end{aligned}$$
(21)
$$\begin{aligned} Recall = \frac{TP}{TP+FN} \end{aligned}$$
(22)
$$\begin{aligned} Accuracy = \frac{TP+TN}{TN+FN+TP+FP} \end{aligned}$$
(23)
$$\begin{aligned} F1\text {-}Score = \frac{2TP}{2TP+FP+FN} \end{aligned}$$
(24)

where TP, TN, FP, and FN represent true positives, true negatives, false positives, and false negatives, respectively. Precision denotes the ratio of correctly predicted positive samples among all predicted positive samples. Recall represents the ratio of correctly predicted positive samples among all real positive samples. Accuracy denotes the ratio of correctly predicted positive and negative samples among all samples. F1-score is the harmonic mean of precision and recall. The Area Under the receiver operating Characteristic curve (AUC) measures the trade-off between the true positive rate and the false positive rate. The Area Under the Precision-Recall curve (AUPR) evaluates the trade-off between precision and recall.
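Given predicted interaction probabilities, the six measurements can be computed with scikit-learn as in the sketch below. The labels and scores are hypothetical placeholders, and AUPR is approximated here by average precision, a common estimator of the area under the precision-recall curve.

```python
import numpy as np
from sklearn.metrics import (accuracy_score, average_precision_score, f1_score,
                             precision_score, recall_score, roc_auc_score)

# Hypothetical ground-truth labels and predicted interaction probabilities.
y_true = np.array([1, 0, 1, 1, 0, 0, 1, 0])
y_score = np.array([0.9, 0.2, 0.7, 0.4, 0.3, 0.6, 0.8, 0.1])
y_pred = (y_score >= 0.5).astype(int)  # threshold for the four label-based metrics

print("Precision:", precision_score(y_true, y_pred))          # Eq. (21)
print("Recall:   ", recall_score(y_true, y_pred))              # Eq. (22)
print("Accuracy: ", accuracy_score(y_true, y_pred))            # Eq. (23)
print("F1-score: ", f1_score(y_true, y_pred))                  # Eq. (24)
print("AUC:      ", roc_auc_score(y_true, y_score))
print("AUPR:     ", average_precision_score(y_true, y_score))
```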

Experimental settings

The parameters in Pyfeat are set as: kgap=5, ktuple=3, optimum=1, pseudo=1, zcurve=1, gc=1, skew=1, atgc=1, monoMono=1, monoDi=1, monoTri=1, diMono=1, diDi=1, diTri=1, triMono=1, and triDi=1. All parameters in BioProt and LPI-SKF are set to the values provided in refs. [39] and [21], respectively. The deep GBDT architecture we use is (input-16-16-output). The parameters in the remaining methods are set to the values at which the corresponding methods obtain their best performance. The details are described in Table 4.

Table 4 Parameter settings

Therefore, we select two 100-dimensional vectors to represent an lncRNA and a protein, respectively. Three 5-fold Cross Validations (CVs) are carried out to evaluate the performance of LPI-deepGBDT.

5-fold CV on lncRNAs (CV1): 80% of lncRNAs are extracted as the training set and the remainder as the test set in each round.

5-fold CV on proteins (CV2): 80% of proteins are extracted as the training set and the remainder as the test set in each round.

5-fold CV on lncRNA–protein pairs (CV3): 80% of lncRNA–protein pairs are extracted as the training set and the remainder as the test set in each round.

The three CVs refer to potential LPI identification for (1) a new (unknown) lncRNA without interaction information, (2) a new protein without interaction information, and (3) lncRNA–protein pairs, respectively.
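One way to realize the three settings is sketched below with scikit-learn splitters: CV1 and CV2 group the pairs by lncRNA or by protein so that every pair of a held-out entity falls into the test fold, while CV3 splits the pairs directly. The pair list is a hypothetical placeholder.

```python
import numpy as np
from sklearn.model_selection import GroupKFold, KFold

# pairs[i] = (lncRNA index, protein index) for the i-th sample (placeholder grid).
pairs = np.array([(l, p) for l in range(30) for p in range(5)])

# CV1: hold out whole lncRNAs (the new-lncRNA setting).
for train_idx, test_idx in GroupKFold(n_splits=5).split(pairs, groups=pairs[:, 0]):
    pass  # train on pairs[train_idx], test on pairs[test_idx]

# CV2: hold out whole proteins (the new-protein setting).
for train_idx, test_idx in GroupKFold(n_splits=5).split(pairs, groups=pairs[:, 1]):
    pass

# CV3: hold out random lncRNA-protein pairs.
for train_idx, test_idx in KFold(n_splits=5, shuffle=True, random_state=0).split(pairs):
    pass
```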

Comparison with five state-of-the-art LPI prediction methods

We compare the proposed LPI-deepGBDT framework with five classical LPI identification models, namely LPI-BLS, LPI-CatBoost, PLIPCOM, LPI-SKF, and LPI-HNM, to measure the classification performance and robustness of LPI-deepGBDT. The number of negative samples is set equal to the number of positive samples. The best performance in each row is shown in boldface in Tables 5, 6 and 7.

Table 5 gives the comparative results of the six LPI identification models in terms of the six measurements under CV1. It can be observed that LPI-deepGBDT achieves better average recall, accuracy, F1-score, AUC, and AUPR than LPI-BLS, LPI-CatBoost, PLIPCOM, and LPI-HNM on the five LPI datasets. For example, LPI-deepGBDT obtains the best average F1-score of 0.7586, which is 8.99%, 9.83%, 1.61%, 22.70%, and 8.37% higher than those of LPI-BLS, LPI-CatBoost, PLIPCOM, LPI-SKF, and LPI-HNM, respectively. More importantly, it achieves the best average AUC of 0.8321, which is 1.63%, 8.32%, 2.37%, 0.02%, and 6.26% better than those of the above five models, respectively. It also achieves the best average AUPR of 0.8095, which is 1.85%, 5.53%, 0.77%, 0.02%, and 0.24% higher than those of the five methods, respectively.

LPI-BLS, LPI-CatBoost, PLIPCOM, and LPI-HNM are four state-of-the-art supervised learning-based LPI prediction methods, and LPI-deepGBDT achieves better performance than all of them. The results suggest the powerful classification ability of LPI-deepGBDT under CV1. More importantly, although LPI-deepGBDT obtains slightly lower precision than LPI-SKF, the other five measurements are better. LPI-SKF is a network-based LPI inference algorithm; this type of method has one limitation, namely, it cannot be applied to predict possible interaction information for an orphan lncRNA. Therefore, LPI-deepGBDT is appropriate for prioritizing underlying proteins associated with a new lncRNA.

Table 5 The performance of six LPI prediction methods on CV1

Table 6 depicts the performance of LPI-BLS, LPI-CatBoost, PLIPCOM, LPI-SKF, LPI-HNM, and LPI-deepGBDT under CV2. The results show that the performance of LPI-deepGBDT is slightly lower than that of LPI-HNM. Under CV2, 80% of proteins are extracted as the training set and the remainder as the test set in each round. That is, a relatively large share of proteins has its interaction information masked, which reduces the number of training samples and affects the performance of LPI-deepGBDT. Compared with the other five methods, LPI-HNM may be relatively robust to the level of data abundance when predicting possible lncRNAs for a new protein.

More importantly, LPI-deepGBDT achieves the best average AUC and AUPR compared with LPI-BLS, LPI-CatBoost, and PLIPCOM. For example, LPI-deepGBDT obtains the best average AUC of 0.6815, which is 21.97%, 9.24%, and 4.01% higher than those of LPI-BLS, LPI-CatBoost, and PLIPCOM, respectively. LPI-deepGBDT achieves the best average AUPR of 0.6771, which is 15.74%, 10.37%, and 6.78% better than those of the above three methods, respectively. AUC and AUPR are more important evaluation criteria than the other four measurements, and LPI-deepGBDT outperforms LPI-BLS, LPI-CatBoost, and PLIPCOM on both. The results suggest that LPI-deepGBDT is an appropriate LPI prediction algorithm.

In particular, LPI-BLS is an ensemble learning-based model, and LPI-deepGBDT significantly outperforms it in terms of AUC and AUPR. The results illustrate that LPI-deepGBDT may obtain better ensemble performance. In addition, LPI-CatBoost and PLIPCOM are two boosting-based techniques; LPI-deepGBDT, integrating the idea of a deep architecture, obtains better performance than both. This shows that deep learning may learn the relevance between lncRNAs and proteins more effectively. Although LPI-SKF computes better AUPR than LPI-deepGBDT, LPI-SKF is a network-based model, and network-based methods cannot reveal association information for an orphan protein. In summary, LPI-deepGBDT may be applied to infer possible interacting lncRNAs for a new protein.

Table 6 The performance of six LPI prediction methods on CV2

The experimental results under CV3 are shown in Table 7. The comparative results demonstrate that LPI-deepGBDT computes the best average precision, recall, accuracy, F1-score, AUC, and AUPR over all datasets. For example, LPI-deepGBDT obtains the best average F1-score of 0.8429, which is 14.83%, 10.77%, 3.10%, 16.73%, and 18.43% higher than those of LPI-BLS, LPI-CatBoost, PLIPCOM, LPI-SKF, and LPI-HNM, respectively. More importantly, it achieves the best average AUC of 0.9073, which is 4.93%, 11.21%, 3.32%, 0.12%, and 14.49% better than those of the above five models, respectively. It also achieves the best average AUPR of 0.8849, which is 5.82%, 8.84%, 2.59%, 2.62%, and 9.13% higher than those of the five methods, respectively. The results characterize the superior classification performance of LPI-deepGBDT. Therefore, LPI-deepGBDT can precisely discover potential relationships between lncRNAs and proteins based on known association information.

In addition, we investigate the performance of all six LPI prediction methods under the three different cross validations. The results in Tables 5, 6 and 7 show that LPI-BLS, LPI-CatBoost, PLIPCOM, LPI-SKF, and LPI-deepGBDT achieve much better performance under CV3 than under CV1, followed by CV2, regardless of precision, recall, accuracy, F1-score, AUC, or AUPR. Under CV3, cross validation is conducted on all lncRNA–protein pairs: 80% of the pairs are used to train the model and the remaining 20% are applied to test it. Under CV1 or CV2, however, cross validation is implemented on lncRNAs or proteins, that is, 80% of the lncRNAs or proteins are applied to train the model and the remaining 20% are used to test it. CV3 may therefore provide more LPI information than CV1 and CV2. The results suggest that abundant data contribute to improving the prediction performance of LPI identification models.

Table 7 The performance of six LPI prediction methods on CV3

Case study

In this section, we aim to mine possible interaction data for a new lncRNA or protein, as well as new LPIs based on known interactions.

Identifying potential proteins for a new lncRNA

RN7SL1 is an endogenous RNA. The lncRNA is usually protected by RNA-binding protein SRP9/14. Its increase can alter the stoichiometry with SRP9/14 and thus produce unshielded RN7SL1 in stromal exosomes. After exosome transfer to breast cancer cells, unshielded RN7SL1 can activate breast cancer RIG-I and promote tumor growth, metastasis, and therapy resistance [44]. Hepatocellular carcinoma patients with higher RN7SL1 concentrations also show lower survival rates. RN7SL1 may enhance hepatocellular carcinoma cell proliferation and clonogenic growth [45].

In this section, we mask all interaction information for RN7SL1 and aim to infer possible proteins interacting with this lncRNA. The experiments are repeated 10 times, and the interaction probabilities between RN7SL1 and the candidate proteins are averaged over the 10 runs. The predicted top 5 proteins interacting with RN7SL1 on the human LPI datasets are described in Table 8. In Dataset 1, we can observe that RN7SL1 is predicted to interact with Q15465. Q15465 displays cholesterol transferase and autoproteolysis activity in the endoplasmic reticulum. Its N-product is a morphogen required for diverse patterning events during development. It induces ventral cell fate in the somites and the neural tube. It is required for axon guidance and is closely related to anterior-posterior axis patterning in the developing limb bud [35]. In this dataset, RN7SL1 may associate with 59 proteins. In the other two datasets, there are no known lncRNAs associated with Q15465. Although the interaction between RN7SL1 and Q15465 has not been validated, among all 59 candidate proteins, Q15465 is ranked 4, 6, 8, 9, and 14 by LPI-CatBoost, PLIPCOM, LPI-SKF, LPI-HNM, and LPI-BLS, respectively. Therefore, the association between RN7SL1 and Q15465 deserves further validation.

In Dataset 2, we predict that Q13148, P07910, and Q9NZI8 may interact with RN7SL1. The interaction between Q9NZI8 and RN7SL1 is known in Dataset 3. Q13148 is an RNA-binding protein involved in various processes of RNA biogenesis and processing. The protein controls the splicing of numerous non-coding and protein-coding RNAs, for example, transcripts involved in neuronal survival and mRNAs encoding proteins related to neurodegenerative diseases. It plays important roles in maintaining mitochondrial homeostasis, mRNA stability, circadian clock periodicity, and normal skeletal muscle formation and regeneration. In Dataset 2, RN7SL1 may associate with 84 proteins. Among the 84 candidate proteins for RN7SL1, the rankings of Q13148 predicted by LPI-deepGBDT, LPI-CatBoost, PLIPCOM, LPI-SKF, LPI-BLS, and LPI-HNM are 2, 3, 1, 3, 2, and 6, respectively. That is, all six LPI identification models predict that there may be an interaction between Q13148 and RN7SL1. Therefore, we infer that Q13148 may interact with RN7SL1.

More importantly, in Dataset 2, P07910 binds to pre-mRNA and regulates the stability and translation level of bound mRNA molecules. The protein is involved in the early steps of spliceosome assembly and pre-mRNA splicing. In the other two human LPI datasets, there are no known lncRNAs associated with P07910. Among the 84 candidate proteins for RN7SL1, P07910 is ranked 3, 7, 8, 9, 11, and 9 by LPI-deepGBDT, LPI-BLS, LPI-CatBoost, PLIPCOM, LPI-SKF, and LPI-HNM, respectively. These rankings are relatively high. Therefore, we predict that P07910 may associate with RN7SL1.

In Dataset 3, we observe that Q9UKV8 and Q9Y6M1 may interact with RN7SL1. The interactions between RN7SL1 and these two proteins can be retrieved in Dataset 1; that is, the top 5 interactions predicted by LPI-deepGBDT can be validated by publications. In summary, the results of the case analyses on association prediction for a new lncRNA suggest that LPI-deepGBDT can be utilized to identify potential proteins associated with a new lncRNA.

Table 8 The predicted top 5 proteins interacting with RN7SL1

Finding potential lncRNAs interacting with a new protein

Q9UL18 is a protein required for RNA-mediated gene silencing. By binding to short RNAs such as microRNAs or short interfering RNAs, it represses the translation of complementary mRNAs. It lacks endonuclease activity and thus cannot cleave target mRNAs. It is also required for transcriptional gene silencing of promoter regions complementary to bound short antigene RNAs [35]. In this section, we mask the interaction information for Q9UL18 and intend to find associated lncRNAs for this protein. The predicted top 5 lncRNAs on the three human LPI datasets are shown in Table 9.

In Datasets 1-3, Q9UL18 may interact with 935, 885, and 990 lncRNAs, respectively. It can be seen that all the predicted top 5 interactions on each dataset are validated as known LPIs. The results suggest that LPI-deepGBDT can be applied to prioritize possible lncRNAs for a new protein.

Table 9 The predicted top 5 lncRNAs interacting with Q9UL18

Finding new LPIs based on known LPIs

We further infer new LPIs based on LPI-deepGBDT. We rank all lncRNA–protein pairs based on the computed average interaction probabilities. Figures 2, 3, 4, 5 and 6 give the predicted 50 LPIs with the highest interaction scores. In the five figures, black dotted lines and solid lines represent unknown and known LPIs identified by LPI-deepGBDT, respectively. Gold ovals denote proteins, and deep sky blue rounded rectangles denote lncRNAs.

Fig. 2 The predicted top 50 LPIs on Dataset 1

Fig. 3 The predicted top 50 LPIs on Dataset 2

Fig. 4 The predicted top 50 LPIs on Dataset 3

Fig. 5 The predicted top 50 LPIs on Dataset 4

Fig. 6 The predicted top 50 LPIs on Dataset 5

There are 55,165, 74,340, 26,730, 3,815, and 71,568 known and unknown lncRNA–protein pairs on the five datasets, respectively. The unknown pairs with the highest association scores on the five datasets, namely NONHSAT023366 (RAB30-AS1) and O00425, n378107 (NONHSAT007673, GAS5) and Q15717, NONHSAT143568 (LINC-01572) and P35637, AthlncRNA376 (TCONS_00057930) and O22823, and ZmalncRNA530 (TCONS_00007931) and C0PLI2, are ranked 1, 3, 1, 6, and 113 among all pairs, respectively.

The lncRNA GAS5 has close linkages with multiple complex diseases. It is a growth arrest- and starvation-associated repressor of the glucocorticoid receptor [46]. It is downregulated in breast cancer [47]. It can also promote the microglial inflammatory response in Parkinson's disease [48] and control apoptosis in non-small-cell lung cancer [49] and prostate cancer cells [50]. Its decreased expression indicates a poor prognosis in cervical cancer [51] and gastric cancer [52].

Q15717 increases mRNA stability, mediates the anti-proliferative activity of CDKN2A, and regulates p53/TP53 expression. It increases the stability of leptin mRNA and is involved in embryonic stem cell differentiation. In Dataset 2, GAS5 has been validated to interact with P35637 and Q13148. P35637 plays an important role in diverse cellular processes including transcription regulation, DNA repair and damage response, RNA transport, and RNA splicing. It contributes to RNA transport, mRNA stability, and synaptic homeostasis in neuronal cells. Q13148 plays a crucial role in maintaining mitochondrial homeostasis; it participates in the formation and regeneration of normal skeletal muscle and negatively regulates the expression of CDK6. The three proteins are RNA-binding proteins and have partially similar biological functions. Therefore, we infer that Q15717 may be a binding protein of GAS5.

Discussion and further research

lncRNAs regulate many important biological processes and have close relationships with multiple complex human diseases. However, most of them are not annotated because of their poor evolutionary conservation. Recent studies suggest that lncRNAs implement their functions by binding to the corresponding proteins. Therefore, it is important to infer potential interactions between lncRNAs and proteins. Various computational methods have been designed to identify new LPIs. These models improved LPI prediction and found many potential linkages between the two entities. The predicted LPIs with higher rankings are worthy of further biomedical experimental validation.

In this manuscript, we explore an LPI identification framework (LPI-deepGBDT) based on a feed-forward deep architecture with GBDTs. First, three human LPI datasets and two plant LPI datasets are retrieved. Second, the biological features of lncRNAs and proteins are extracted via Pyfeat and BioProt, respectively. Third, the features are reduced by PCA and concatenated to depict an lncRNA–protein pair. Finally, a multi-layered deep framework is developed to find potential relationships between the two entities. We compare LPI-deepGBDT with five classical LPI discovery methods, LPI-BLS, LPI-CatBoost, PLIPCOM, LPI-SKF, and LPI-HNM, on the five datasets under three cross validations. The results demonstrate the superior classification ability of LPI-deepGBDT. Case studies are further implemented to predict interactions for new lncRNAs (or proteins) and based on known LPIs.

LPI-deepGBDT computes the best performance on the five collected LPI datasets. This may be in large part due to the following features. First, LPI-deepGBDT fuses multiple biological features. Second, the constructed multi-layered deep framework with non-differentiable components helps to learn distributed representations of the outputs in the intermediate layers. Third, the update procedure for each intermediate layer can reduce the global loss by updating its pseudo-label and reducing the loss in the previous layer. Finally, the random noise injected into the intermediate outputs helps map neighboring training samples onto the right manifold.

In the future, we will collect more LPI datasets from different species to better mine the relevance between lncRNAs and proteins across species. More importantly, we will develop more effective ensemble learning models to improve the performance of LPI prediction.

Availability of data and materials

Source codes and datasets are freely available for download at https://github.com/plhhnu/LPI-deepGBDT.

Abbreviations

LPI-deepGBDT:

Feed-forward deep architecture based on gradient boosting decision trees used to discover unobserved LPIs

LPI:

Long noncoding RNA–protein interaction

GBDT:

Gradient boosting decision trees

lncRNAs:

Long noncoding RNAs

CVs:

Cross validations

References

1. Deng L, Wang J, Xiao Y, Wang Z, Liu H. Accurate prediction of protein-lncRNA interactions by diffusion and HeteSim features across heterogeneous network. BMC Bioinform. 2018;19(1):1–11.

2. Liu Z-P. Predicting lncRNA-protein interactions by machine learning methods: a review. Curr Bioinform. 2020;15(8):831–40.

3. Chen X, Sun Y-Z, Guan N-N, Qu J, Huang Z-A, Zhu Z-X, Li J-Q. Computational models for lncRNA function prediction and functional similarity calculation. Brief Funct Genom. 2019;18(1):58–82.

4. Chen X, Yan CC, Zhang X, You Z-H. Long non-coding RNAs and complex diseases: from experimental results to computational models. Brief Bioinform. 2017;18(4):558–76.

5. Wang W, Dai Q, Li F, Xiong Y, Wei D-Q. MLCDForest: multi-label classification with deep forest in disease prediction for long non-coding RNAs. Brief Bioinform. 2020.

6. Zhang X, Zhou Y, Mehta KR, Danila DC, Scolavino S, Johnson SR, Klibanski A. A pituitary-derived MEG3 isoform functions as a growth suppressor in tumor cells. J Clin Endocrinol Metab. 2003;88(11):5119–26.

7. Pibouin L, Villaudy J, Ferbus D, Muleris M, Prospéri M-T, Remvikos Y, Goubin G. Cloning of the mRNA of overexpression in colon carcinoma-1: a sequence overexpressed in a subset of colon carcinomas. Cancer Genet Cytogenet. 2002;133(1):55–60.

8. Cui Z, Ren S, Lu J, Wang F, Xu W, Sun Y, Wei M, Chen J, Gao X, Xu C, et al. The prostate cancer-up-regulated long noncoding RNA PlncRNA-1 modulates apoptosis and proliferation through reciprocal regulation of androgen receptor. Urol Oncol. 2013;31:1117–23.

9. Chen X, Yan G-Y. Novel human lncRNA-disease association inference based on lncRNA expression profiles. Bioinformatics. 2013;29(20):2617–24.

10. van Poppel H, Haese A, Graefen M, de la Taille A, Irani J, de Reijke T, Remzi M, Marberger M. The relationship between prostate cancer gene 3 (PCA3) and prostate cancer significance. BJU Int. 2012;109(3):360–6.

11. Yang Z, Zhou L, Wu L-M, Lai M-C, Xie H-Y, Zhang F, Zheng S-S. Overexpression of long non-coding RNA HOTAIR predicts tumor recurrence in hepatocellular carcinoma patients following liver transplantation. Ann Surg Oncol. 2011;18(5):1243–50.

12. Wang W, Guan X, Khan MT, Xiong Y, Wei D-Q. LMI-DForest: a deep forest model towards the prediction of lncRNA-miRNA interactions. Comput Biol Chem. 2020:107406.

13. Li Y, Sun H, Feng S, Zhang Q, Han S, Du W. Capsule-LPI: a lncRNA-protein interaction predicting tool based on a capsule network. BMC Bioinform. 2021;22(1):1–19.

14. Li A, Ge M, Zhang Y, Peng C, Wang M. Predicting long noncoding RNA and protein interactions using heterogeneous network model. Biomed Res Int. 2015;2015.

15. Zhou Y-K, Shen Z-A, Yu H, Luo T, Gao Y, Du P-F. Predicting lncRNA-protein interactions with miRNAs as mediators in a heterogeneous network model. Front Genet. 2020;10:1341.

16. Yang J, Li A, Ge M, Wang M. Relevance search for predicting lncRNA-protein interactions based on heterogeneous network. Neurocomputing. 2016;206(19):81–8.

17. Zhao Q, Yu H, Ming Z, Hu H, Ren G, Liu H. The bipartite network projection-recommended algorithm for predicting long non-coding RNA-protein interactions. Mol Ther Nucleic Acids. 2018;13:464–71.

18. Ge M, Li A, Wang M. A bipartite network-based method for prediction of long non-coding RNA-protein interactions. Genom Proteom Bioinform. 2016;14(1):62–71.

19. Xie G, Wu C, Sun Y, Fan Z, Liu J. LPI-IBNRA: long non-coding RNA-protein interaction prediction based on improved bipartite network recommender algorithm. Front Genet. 2019;10:343.

20. Zhang W, Qu Q, Zhang Y, Wang W. The linear neighborhood propagation method for predicting long non-coding RNA-protein interactions. Neurocomputing. 2018;273:526–34.

21. Zhou Y-K, Hu J, Shen Z-A, Zhang W-Y, Du P-F. LPI-SKF: predicting lncRNA-protein interactions using similarity kernel fusions. Front Genet. 2020;11:1554.

22. Chen Y, Fu X, Li Z, Peng L, Zhuo L. Prediction of lncRNA-protein interactions via the multiple information integration. Front Bioeng Biotechnol. 2021;9:60.

23. Peng L, Liu F, Yang J, Liu X, Meng Y, Deng X, Peng C, Tian G, Zhou L. Probing lncRNA-protein interactions: data repositories, models, and algorithms. Front Genet. 2020;10:1346.

24. Liu H, Ren G, Hu H, Zhang L, Ai H, Zhang W, Zhao Q. LPI-NRLMF: lncRNA-protein interaction prediction by neighborhood regularized logistic matrix factorization. Oncotarget. 2017;8(61):103975.

25. Zhao Q, Zhang Y, Hu H, Ren G, Zhang W, Liu H. IRWNRLPI: integrating random walk and neighborhood regularized logistic matrix factorization for lncRNA-protein interaction prediction. Front Genet. 2018;9:239.

26. Zhang T, Wang M, Xi J, Li A. LPGNMF: predicting long non-coding RNA and protein interaction using graph regularized nonnegative matrix factorization. IEEE/ACM Trans Comput Biol Bioinform. 2018;17(1):189–97.

27. Zhang W, Yue X, Tang G, Wu W, Huang F, Zhang X. SFPEL-LPI: sequence-based feature projection ensemble learning for predicting lncRNA-protein interactions. PLoS Comput Biol. 2018;14(12):e1006616.

28. Fan X-N, Zhang S-W. LPI-BLS: predicting lncRNA-protein interactions with a broad learning system-based stacked ensemble classifier. Neurocomputing. 2019;370:88–93.

29. Deng L, Yang W, Liu H. PredPRBA: prediction of protein-RNA binding affinity using gradient boosted regression trees. Front Genet. 2019;10:637.

30. Wekesa JS, Meng J, Luan Y. Multi-feature fusion for deep learning to predict plant lncRNA-protein interaction. Genomics. 2020;112(5):2928–36.

31. Shen Z-A, Luo T, Zhou Y-K, Yu H, Du P-F. NPI-GNN: predicting ncRNA-protein interactions with deep graph neural networks. Brief Bioinform. 2021.

32. Feng J, Yu Y, Zhou Z-H. Multi-layered gradient boosting decision trees. In: Advances in neural information processing systems. 2018.

33. Xie C, Yuan J, Li H, Li M, Zhao G, Bu D, Zhu W, Wu W, Chen R, Zhao Y. NONCODEv4: exploring the world of long non-coding RNA genes. Nucleic Acids Res. 2014;42(D1):98–103.

34. Yuan J, Wu W, Xie C, Zhao G, Zhao Y, Chen R. NPInter v2.0: an updated database of ncRNA interactions. Nucleic Acids Res. 2014;42(D1):104–8.

35. The UniProt Consortium. UniProt: a worldwide hub of protein knowledge. Nucleic Acids Res. 2019;47(D1):506–15.

36. Zheng X, Wang Y, Tian K, Zhou J, Guan J, Luo L, Zhou S. Fusing multiple protein-protein similarity networks to effectively predict lncRNA-protein interactions. BMC Bioinform. 2017;18(12):11–8.

37. Bai Y, Dai X, Ye T, Zhang P, Yan X, Gong X, Liang S, Chen M. PlncRNADB: a repository of plant lncRNAs and lncRNA-RBP protein interactions. Curr Bioinform. 2019;14(7):621–7.

38. Muhammod R, Ahmed S, Md Farid D, Shatabda S, Sharma A, Dehzangi A. PyFeat: a Python-based effective feature generation tool for DNA, RNA and protein sequences. Bioinformatics. 2019;35(19):3831–3.

39. Márquez B, Amaya JC. BioProt contenedor autónomo de residuos biológicos. Revista Colombiana de Tecnologías de Avanzada. 2019;1(33).

40. Ding C, Wang D, Ma X, Li H. Predicting short-term subway ridership and prioritizing its influential factors using gradient boosting decision trees. Sustainability. 2016;8(11):1100.

41. Shi Z, Chu Y, Zhang Y, Wang Y, Wei D-Q. Prediction of blood-brain barrier permeability of compounds by fusing resampling strategies and extreme gradient boosting. IEEE Access. 2020;9:9557–66.

42. Friedman JH. Greedy function approximation: a gradient boosting machine. Ann Stat. 2001;29(5):1189–232.

43. Jiao Y, Du P. Performance measures in evaluating machine learning based bioinformatics predictors for classifications. Quant Biol. 2016;4(4):320–30.

44. Nabet BY, Qiu Y, Shabason JE, Wu TJ, Yoon T, Kim BC, Benci JL, DeMichele AM, Tchou J, Marcotrigiano J, et al. Exosome RNA unshielding couples stromal activation to pattern recognition receptor signaling in cancer. Cell. 2017;170(2):352–66.

45. Tan C, Cao J, Chen L, Xi X, Wang S, Zhu Y, Yang L, Ma L, Wang D, Yin J, et al. Noncoding RNAs serve as diagnosis and prognosis biomarkers for hepatocellular carcinoma. Clin Chem. 2019;65(7):905–15.

46. Kino T, Hurt DE, Ichijo T, Nader N, Chrousos GP. Noncoding RNA GAS5 is a growth arrest- and starvation-associated repressor of the glucocorticoid receptor. Sci Signal. 2010;3(107):ra8.

47. Mourtada-Maarabouni M, Pickard M, Hedge V, Farzaneh F, Williams G. GAS5, a non-protein-coding RNA, controls apoptosis and is downregulated in breast cancer. Oncogene. 2009;28(2):195–208.

48. Xu W, Zhang L, Geng Y, Liu Y, Zhang N. Long noncoding RNA GAS5 promotes microglial inflammatory response in Parkinson's disease by regulating NLRP3 pathway through sponging miR-223-3p. Int Immunopharmacol. 2020;85:106614.

49. Shi X, Sun M, Liu H, Yao Y, Kong R, Chen F, Song Y. A critical role for the long non-coding RNA GAS5 in proliferation and apoptosis in non-small-cell lung cancer. Mol Carcinog. 2015;54(S1):1–12.

50. Pickard M, Mourtada-Maarabouni M, Williams G. Long non-coding RNA GAS5 regulates apoptosis in prostate cancer cell lines. Biochim Biophys Acta. 2013;1832(10):1613–23.

51. Cao S, Liu W, Li F, Zhao W, Qin C. Decreased expression of lncRNA GAS5 predicts a poor prognosis in cervical cancer. Int J Clin Exp Pathol. 2014;7(10):6776.

52. Sun M, Jin F-Y, Xia R, Kong R, Li J-H, Xu T-P, Liu Y-W, Zhang E-B, Liu X-H, De W. Decreased expression of long noncoding RNA GAS5 indicates a poor prognosis and promotes cell proliferation in gastric cancer. BMC Cancer. 2014;14(1):1–12.


Acknowledgements

We would like to thank all authors of the cited references.

Funding

This research was funded by the National Natural Science Foundation of China (Grant 61803151, 62072172).

Author information

Contributions

Conceptualization: L-HP, ZW and L-QZ; Funding acquisition: L-HP, L-QZ; Investigation: L-HP and ZW; Methodology: L-HP and ZW; Project administration: L-HP, L-QZ; Software: ZW; Validation: ZW, X-FT; Writing – original draft: L-HP; Writing – review and editing: L-HP and ZW. All authors read and approved the final manuscript.

Corresponding author

Correspondence to Lihong Peng.

Ethics declarations

Ethics approval and consent to participate

Not applicable.

Consent for publication

Not applicable.

Competing interests

The authors declare that they have no competing interests.

Additional information

Publisher's Note

Springer Nature remains neutral with regard to jurisdictional claims in published maps and institutional affiliations.

Rights and permissions

Open Access This article is licensed under a Creative Commons Attribution 4.0 International License, which permits use, sharing, adaptation, distribution and reproduction in any medium or format, as long as you give appropriate credit to the original author(s) and the source, provide a link to the Creative Commons licence, and indicate if changes were made. The images or other third party material in this article are included in the article's Creative Commons licence, unless indicated otherwise in a credit line to the material. If material is not included in the article's Creative Commons licence and your intended use is not permitted by statutory regulation or exceeds the permitted use, you will need to obtain permission directly from the copyright holder. To view a copy of this licence, visit http://creativecommons.org/licenses/by/4.0/. The Creative Commons Public Domain Dedication waiver (http://creativecommons.org/publicdomain/zero/1.0/) applies to the data made available in this article, unless otherwise stated in a credit line to the data.


About this article


Cite this article

Zhou, L., Wang, Z., Tian, X. et al. LPI-deepGBDT: a multiple-layer deep framework based on gradient boosting decision trees for lncRNA–protein interaction identification. BMC Bioinformatics 22, 479 (2021). https://doi.org/10.1186/s12859-021-04399-8
