Comparative analysis and prediction of nucleosome positioning using integrative feature representation and machine learning algorithms

Background The nucleosome plays an important role in genome expression, DNA replication, DNA repair and transcription. Therefore, research on nucleosome positioning has received extensive attention. Considering the diversity of DNA sequence representation methods, we integrated multiple features to analyze their effect on nucleosome positioning analysis, which also deepens the theoretical understanding of nucleosome positioning. Results Here, we not only used frequency chaos game representation (FCGR) to construct DNA sequence features, but also integrated it with other features and adopted the principal component analysis (PCA) algorithm. Support vector machine (SVM), extreme learning machine (ELM), extreme gradient boosting (XGBoost), multilayer perceptron (MLP) and convolutional neural network (CNN) predictors were used for nucleosome positioning prediction, respectively. The integrated feature vectors yield significantly better prediction quality than any single feature. After using PCA to reduce the feature dimension, the prediction quality on the H. sapiens dataset improved significantly. Conclusions Comparative analysis and prediction on the H. sapiens, C. elegans, D. melanogaster and S. cerevisiae datasets demonstrate that applying FCGR to nucleosome positioning is feasible, and that integrative feature representation performs better.


Background
The nucleosome is the basic structural unit of eukaryotic chromatin, formed by the combination of histones and DNA. The core is an octamer containing two copies each of histones H2A, H2B, H3 and H4, around which DNA is wound for about 1.65 turns. The DNA wrapped around the octamer is called core DNA and is 147 base pairs long; the DNA that connects two adjacent nucleosomes is called linker DNA and ranges from 20 to 60 base pairs [1]. In eukaryotic cells, nucleosomes play a crucial role in genome expression, DNA replication, DNA repair and transcription [2][3][4][5][6]. In addition, studies have demonstrated that abnormal histone modifications in the nucleosome structure are directly related to diseases such as tumors [7] and lupus erythematosus [8]. Therefore, the mechanism of nucleosome positioning along the DNA sequence has extremely important research value and is one of the hot spots in current epigenetics research.
The precise position of the nucleosome on the DNA sequence across the whole genome is called nucleosome positioning. Early experiments mainly used micrococcal nuclease digestion of chromatin to determine nucleosome positioning [9]. In recent years, benefiting from the development of high-throughput experimental techniques such as chromatin immunoprecipitation-chip (ChIP-chip) and chromatin immunoprecipitation sequencing (ChIP-Seq), many breakthroughs have been made in nucleosome positioning experiments. Nucleosome positioning maps of species such as Saccharomyces cerevisiae [10,11], Homo sapiens [12], Caenorhabditis elegans [13] and Drosophila melanogaster [14] have been obtained, providing a rich data foundation for researchers to carry out theoretical research and prediction.
Much of the research on nucleosome positioning is based on DNA sequence analysis [15,16]. The DNA sequence consists of four nucleotides: A, T, C and G. Studies have shown that the affinity between genomic DNA sequences and histones clearly depends on sequence order, which indicates that the DNA sequence order does affect where nucleosomes form. Although some studies support the view that nucleosome positioning is affected by multiple factors, such as DNA sequence, ATP-dependent nucleosome remodeling enzymes and transcription factors [17,18], many researchers have used sequence analysis methods to express nucleosome DNA sequence characteristics and then performed nucleosome positioning recognition.
In the past decade, with the popularity of machine learning algorithms, a multitude of computational models based on DNA sequence information have been proposed. Chen et al. proposed the "iNuc-Physchem" nucleosome prediction model using 12 physicochemical features of DNA, which identified the core DNA and linker DNA of yeast genome nucleosomes [19]. Later, the same research group established a biophysical model based on the deformation energy of DNA sequences to predict nucleosome sequences [20]. Guo et al. used pseudo k-tuple nucleotide composition to express the feature vector of the DNA sequence and used a support vector machine (SVM) classifier to train on H. sapiens, C. elegans and D. melanogaster datasets [21]. The 3LS model used similar methods and combined the distribution of different numbers of nucleotide combinations in the sequence to further improve the prediction accuracy [22]. The ZCMM model, based on Z-curve theory and the position weight matrix (PWM), achieved excellent prediction performance on D. melanogaster [23].
Deep learning has also been applied to nucleosome positioning and has achieved good prediction quality. These deep learning models all use one-hot encoding. Gangi et al. [24] constructed a deep learning model that integrates convolutional layers and long short-term memory networks. The LeNup model added Inception modules and gated convolutional networks to a convolutional neural network to improve nucleosome positioning [25].
In this work, we first use frequency chaos game representation (FCGR) to construct DNA sequence features; this feature representation method has not previously been used for nucleosome positioning. Second, we integrate FCGR with other feature vectors and adopt the principal component analysis (PCA) algorithm to achieve feature dimensionality reduction. Finally, several machine learning algorithms, including support vector machine (SVM), extreme learning machine (ELM), extreme gradient boosting (XGBoost), multilayer perceptron (MLP) and convolutional neural network (CNN), are used to perform comparative analysis and prediction of nucleosome positioning.

Rule of performance evaluation
Cross validation is a statistical method used to validate a model. The basic idea is to divide the original data into a training set and a test set: the training set is used to train the model, and the test set is then used to assess the classification or prediction performance of the obtained model. In this work, we used K-fold cross-validation to evaluate the performance of each predictor through four metrics: sensitivity (Sn), specificity (Sp), accuracy (ACC), and the Matthews correlation coefficient (MCC), defined as

Sn = TP / (TP + FN)
Sp = TN / (TN + FP)
ACC = (TP + TN) / (TP + TN + FP + FN)
MCC = (TP × TN − FP × FN) / √((TP + FP)(TP + FN)(TN + FP)(TN + FN))    (1)

where TP, TN, FP and FN are the numbers of true positives, true negatives, false positives and false negatives, respectively [25]. Sn is the true positive rate; Sn = 1 means that all nucleosome core DNA sequences are correctly predicted. Sp is the true negative rate; Sp = 1 means that all linker DNA sequences are correctly predicted. ACC is the ratio of correctly predicted samples to the total number of samples. MCC comprehensively evaluates the prediction results, with MCC ∈ [−1, 1]: MCC = −1 means the prediction is completely anti-correlated with the true category, MCC = 1 means the prediction is completely correlated with the true category, and MCC = 0 means the prediction is completely random.
The receiver operating characteristic (ROC) curve and the area under the curve (AUC) are often used to evaluate binary classifiers. The AUC is the area under the ROC curve and usually lies between 0.5 and 1; the larger the AUC, the better the classifier. As a single value, the AUC evaluates the quality of a classifier more directly. Considering the length of the paper, we only report AUC values and do not draw each ROC curve.
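The four metrics in formula (1) can be computed directly from the confusion-matrix counts; a minimal sketch (the function name and the example counts are ours, for illustration only):

```python
import math

def metrics(tp, tn, fp, fn):
    """Compute Sn, Sp, ACC and MCC from confusion-matrix counts, per formula (1)."""
    sn = tp / (tp + fn)                      # sensitivity (true positive rate)
    sp = tn / (tn + fp)                      # specificity (true negative rate)
    acc = (tp + tn) / (tp + tn + fp + fn)    # overall accuracy
    denom = math.sqrt((tp + fp) * (tp + fn) * (tn + fp) * (tn + fn))
    mcc = (tp * tn - fp * fn) / denom if denom else 0.0
    return sn, sp, acc, mcc

sn, sp, acc, mcc = metrics(tp=90, tn=80, fp=20, fn=10)  # hypothetical counts
```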

Performance of predictors
According to the characteristics of FCGR described below, different K-nucleotide values affect the feature expression of the DNA sequence [26]. A large K value means a high feature dimension, and high-dimensional features are generally sparse, so the fitting quality may not be outstanding. Obviously, choosing an appropriate K value has a considerable impact on the classification performance of each classifier. Some studies have combined DNA sequence features [22,23,27,28]; similarly, FCGR can use different combinations of K-nucleotide values as feature vectors.

Feasibility of FCGR
In this work, we flatten the FCGR matrix into a normalized one-dimensional vector of K-nucleotide frequencies as the input to SVM and ELM [27]. The inputs to the MLP and CNN models are not only single-channel FCGR images (2-D) [26,27] but also multi-K-value images; the image size is 64 × 64. For multi-K-value input, we used multiple channels to feed in the combination of K values when training the model and used simple averaging to compute the final prediction. To find an appropriate K value or combination, we used 10-fold cross-validation. Figure 1 shows the classification accuracy of each classifier with different K values and combinations. For SVM, the accuracy on H. sapiens and C. elegans peaks with K = 1, 2 and 4, and the accuracy on D. melanogaster is highest with K = 2 and 4. For ELM, the accuracy on D. melanogaster peaks at K = 2, the accuracy on H. sapiens peaks with K = 2 and 4, and the classification accuracy on C. elegans is best with K = 1, 2 and 4, as with SVM.
For MLP, the accuracy on H. sapiens and D. melanogaster peaks with K = 3, 4 and 5, and the classification accuracy on C. elegans is best with K = 3 and 4. For CNN, H. sapiens has the best classification quality with the K = 4 FCGR image, the accuracy on C. elegans peaks with K = 4 and 5, and the accuracy on D. melanogaster peaks with K = 3, 4 and 5. Table 1 shows the best prediction results for the four species via 10-fold cross-validation.

Fig. 1 The histograms (a-d) show the accuracy of SVM, ELM, MLP and CNN with K = 1, 2, 3, 4, 5 or combinations
For the S. cerevisiae dataset, we achieved Sn = Sp = ACC = MCC = AUC = 1 with SVM, ELM and MLP via 10-fold cross-validation when K = 3 or 4. There may still be room for improvement in the prediction quality on the other three datasets.

Comparison of the results with integrative features
In addition, we integrated FCGR with other feature representations [29][30][31][32], such as DAC, TAC, DACC, TACC, PC-PseDNC and PC-PseTNC, and input them into SVM and ELM. We also added the extreme gradient boosting (XGBoost) algorithm. The comparative results are shown in Tables 2, 3 and 4, respectively.
From the results in Tables 2, 3 and 4, the combination of FCGR and DAC as the feature vector yields better prediction quality. XGBoost performance is relatively stable, with little difference between prediction results, especially for high-dimensional feature inputs. In contrast, when some high-dimensional feature vectors are input into SVM and ELM, the prediction results are relatively poor. This suggests that XGBoost is more suitable for processing high-dimensional features.

Comparison of the results with dimensionality reduction
Considering the high dimensionality of the integrative feature vector, high-dimensional feature vectors may bring the curse of dimensionality, which leads to overfitting of the prediction results. Therefore, we also adopted the principal component analysis (PCA) algorithm [33] to achieve feature dimensionality reduction. The feature vector after dimensionality reduction was then input into SVM, ELM and XGBoost, respectively. When using PCA for dimensionality reduction, the cumulative contribution rate of the retained principal components directly affects the prediction quality; the results are shown in Tables 5, 6 and 7, respectively. From Tables 5, 6 and 7, we notice that the prediction quality for H. sapiens improved after PCA dimensionality reduction: ACC, Sn, Sp, MCC and AUC increased by 4.57%, 3.12%, 6.00%, 9.03% and 3.56%, respectively, when FCGR vectors were combined with TAC and used with XGBoost. However, the prediction quality did not improve significantly for C. elegans; especially when ELM was used, it decreased slightly. Similarly, there was no significant improvement for D. melanogaster.
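The retained-component selection by cumulative contribution rate can be sketched with a numpy-only PCA (the 95% threshold and the random stand-in data are illustrative, not the paper's settings):

```python
import numpy as np

def pca_reduce(X, cum_rate=0.95):
    """Project X onto the fewest principal components whose
    cumulative contribution rate reaches `cum_rate`."""
    Xc = X - X.mean(axis=0)                        # center each feature
    U, S, Vt = np.linalg.svd(Xc, full_matrices=False)
    ratio = (S ** 2) / np.sum(S ** 2)              # per-component contribution rate
    k = int(np.searchsorted(np.cumsum(ratio), cum_rate)) + 1
    return Xc @ Vt[:k].T                           # reduced feature matrix

rng = np.random.default_rng(0)
X = rng.normal(size=(100, 50))                     # stand-in for integrated features
X_red = pca_reduce(X, cum_rate=0.95)
```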

Comparison with other algorithms
To verify the effectiveness of our method, we compared the prediction results of the best-performing predictors in Tables 1, 2, 3 and 4 with other models using the same datasets. DLNN-5 [24] is a deep learning model with a convolution kernel size of 5, and ZCMM [23] is based on SVM. Tables 8, 9, 10 and 11 show that our methods perform prominently on the H. sapiens and S. cerevisiae datasets. For the S. cerevisiae dataset, we achieved Sn = Sp = ACC = MCC = AUC = 1 with SVM, ELM and MLP via 10-fold cross-validation when K = 3 or 4; compared with the model based on DNA deformation energy in the original paper [20], the prediction performance is clearly improved. For H. sapiens, FCGR vectors combined with TAC and used with XGBoost exceed ZCMM in ACC, Sn, Sp, MCC and AUC by 10.87%, 15.58%, 5.23%, 21.25% and 8.81%, respectively; likewise, they exceed DLNN-5 in ACC, Sn and Sp by 3.22%, 2.11% and 4.45%, respectively. The performance of CNN is slightly better than ZCMM and DLNN-5. For C. elegans, compared with ZCMM, ELM with FCGR vectors combined at K = 1, 2 and 4 increases the evaluation metrics by 2.20%, 10.64%, 1.56%, 13.15% and 3.01%. For D. melanogaster, our prediction accuracy is lower; ZCMM's prediction accuracy (ACC) is the highest at 93.62%. These results imply that our final prediction is positive; it performed unfavorably only on the D. melanogaster dataset.

Table 2 The prediction results of integrative feature representation for H. sapiens via 10-fold cross-validation by SVM, ELM and XGBoost
All features means the feature vector = FCGR + DACC + TACC + PC-PseDNC + PC-PseTNC, and the parameters are consistent with the parameters of the corresponding feature. Parameter K indicates the values of K nucleotide in FCGR; lag indicates the distance of lag along the sequence; λ represents the highest counted rank (or tier) of the correlation along a DNA sequence; w is the weight factor ranging from 0 to 1
Table 3 The prediction results of integrative feature representation for C. elegans via 10-fold cross-validation by SVM, ELM and XGBoost

Best values are in bold
All features means the feature vector = FCGR + DACC + TACC + PC-PseDNC + PC-PseTNC, and the parameters are consistent with the parameters of the corresponding feature. Parameter K indicates the values of K nucleotide in FCGR; lag indicates the distance of lag along the sequence; λ represents the highest counted rank (or tier) of the correlation along a DNA sequence; w is the weight factor ranging from 0 to 1

Comparison with other advanced methods
In addition to the DLNN-5 and ZCMM models, there are other advanced methods for nucleosome prediction on the same datasets. The LeNup model utilizes an improved convolutional neural network, which adds Inception modules and gated convolutional networks [25]. 3LS is based on a linear regression model [22]. Among these, LeNup has the best overall prediction performance: its accuracy on C. elegans is 0.9188, and its average accuracy on the other species is also over 0.88. The prediction result of our method is relatively close to it on the H. sapiens dataset. For C. elegans, ELM with FCGR performs slightly worse than 3LS, with ACC, Sp, MCC and AUC lower by 0.29%, 3.44%, 0.51% and 1.86%, respectively.

Discussion
First, the results in Table 1 and Fig. 1 clearly show that FCGR features with combined K values outperform single K values, and that SVM outputs better prediction results. When training the CNN and MLP models, we utilized multi-channel, multi-K-value input images, and prediction accuracy improved. All of this indicates that combining FCGR features with different K values better expresses sequence features, thereby improving the models' prediction accuracy.
Second, we further integrated FCGR with other feature representations and compared the prediction results of three types of machine learning algorithms (Tables 2, 3, 4). We also performed PCA dimensionality reduction on the integrated features (Tables 5, 6, 7). Although the overall prediction quality improved after PCA dimensionality reduction of the integrated features, the results obtained using the FCGR feature representation remain superior. This further illustrates the advantages of the FCGR feature representation.
Here we compared the results of the proposed method with other advanced algorithms. Our algorithm achieves slightly superior results on the H. sapiens and S. cerevisiae datasets, but gaps remain on the other two datasets. On the one hand, this demonstrates the feasibility of our method; on the other, our work has room for improvement.

Conclusions
In this work, we used FCGR to represent the features of DNA sequences and applied it to nucleosome positioning. Our experiments achieved positive results, especially when multiple features were used in combination, which improved prediction quality. The advantage of this representation is that the time consumed in constructing features is short, and the features are clear and intuitive. The quality of the integrated feature representation is also acceptable; in particular, after PCA dimensionality reduction, the prediction quality improved, which demonstrates the feasibility of the method. In this paper, we also tried a simple CNN model with FCGR images and obtained mediocre results. Since deep learning is increasingly used in bioinformatics, in further research on nucleosome positioning we will try to build a more efficient deep learning prediction model to predict DNA represented in the form of images, such as FCGR images.

Dataset descriptions
To compare the results of the predictors, the datasets for this work were downloaded from two published papers [20,21]. The first group of datasets involves H. sapiens, C. elegans and D. melanogaster, from the paper by Guo et al. [21]; the length of each DNA sequence is 147 bp. The second dataset involves the S. cerevisiae genome, from the paper by Chen et al. [20]; the length of each DNA sequence is 150 bp. Both datasets contain two types of samples: nucleosome-forming sequences (positive data) and nucleosome-inhibiting sequences (negative data), and no included sequence has ≥ 80% pairwise sequence identity with any other. The details of the datasets are shown in Table 15.

DNA sequence feature representation
Apart from the methods mentioned above, common DNA sequence representation methods include those based on deoxyribonucleic acid composition, such as basic kmer (Kmer) [34] and reverse complementary kmer (RevKmer) [35]; those based on correlations between nucleotide physicochemical indices, such as dinucleotide-based auto covariance (DAC) and trinucleotide-based auto covariance (TAC) [29]; and pseudo k-tuple nucleotide composition (PseKNC) [21], based on pseudo deoxyribonucleic acid composition. These feature representation methods have specific calculation formulas and iterative functions, and some calculations are complex and time-consuming. This paper mainly uses a simple and intuitive feature representation. Chaos game representation (CGR) is a graphical representation of gene sequences based on chaos theory, proposed by Jeffrey in 1990 [36]. The method is as follows: the four nucleotides {A, T, G, C} are located at the four vertices of the plane coordinate system, and the position of the i-th nucleotide of the DNA sequence in the plane is P_i, drawn according to formula (2):

P_i = 0.5 · (P_{i−1} + N_i), i = 1, …, L, with P_0 = (0.5, 0.5)    (2)

Here P_0 is the given starting point, L is the length of the DNA sequence, and N_i is the vertex coordinate of the i-th nucleotide, where A = (0, 0), T = (1, 0), G = (1, 1) and C = (0, 1). This method draws an image corresponding to a DNA sequence through the iterative function, so the nucleotides in the sequence correspond one-to-one to the points in the image [36][37][38][39][40]. Figure 5 shows the CGR graphical representation of the two types of sample sequences in the H. sapiens dataset.
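Formula (2) takes only a few lines to implement; a sketch of the CGR iteration (the example sequence is illustrative):

```python
# Vertex coordinates from formula (2): A=(0,0), T=(1,0), G=(1,1), C=(0,1)
VERTEX = {'A': (0.0, 0.0), 'T': (1.0, 0.0), 'G': (1.0, 1.0), 'C': (0.0, 1.0)}

def cgr_points(seq):
    """Iterate P_i = 0.5 * (P_{i-1} + N_i) starting from P_0 = (0.5, 0.5)."""
    x, y = 0.5, 0.5
    points = []
    for base in seq:
        vx, vy = VERTEX[base]
        x, y = 0.5 * (x + vx), 0.5 * (y + vy)  # move halfway toward the vertex
        points.append((x, y))
    return points

pts = cgr_points("ATGC")  # first point: 0.5 * ((0.5, 0.5) + (0, 0)) = (0.25, 0.25)
```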
Dividing the CGR image into 2^K × 2^K sub-blocks and counting the number of points in each sub-block determines the frequency of K-nucleotide combinations, converting the CGR image into a 2^K × 2^K matrix called the frequency chaos game representation (FCGR) [39]. For example, we divided the CGR graph of Fig. 5a into a 2^3 × 2^3 matrix, counted the number of points in each sub-block, and obtained the frequency matrix shown in Table 16.
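The binning step can be sketched by dropping each CGR point into its 2^K × 2^K grid cell; skipping the first K−1 points, whose preceding K-mer is incomplete, is one common convention, though exact conventions vary:

```python
import numpy as np

def fcgr_matrix(seq, K):
    """Count CGR points per 2^K x 2^K sub-block to form the FCGR matrix."""
    n = 2 ** K
    M = np.zeros((n, n), dtype=int)
    vertex = {'A': (0.0, 0.0), 'T': (1.0, 0.0), 'G': (1.0, 1.0), 'C': (0.0, 1.0)}
    x, y = 0.5, 0.5
    for i, base in enumerate(seq):
        vx, vy = vertex[base]
        x, y = 0.5 * (x + vx), 0.5 * (y + vy)
        if i >= K - 1:                       # this point now encodes a full K-mer
            col = min(int(x * n), n - 1)     # grid column from the x coordinate
            row = min(int(y * n), n - 1)     # grid row from the y coordinate
            M[row, col] += 1
    return M

M = fcgr_matrix("ATGCATGCATGC", K=2)  # counts sum to L - K + 1 points
```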
FCGR can be used not only as a numerical matrix but also as a grayscale image. The original CGR image is divided into 4^K sub-blocks; the darker a sub-block, the more points it contains, while a lighter sub-block indicates fewer points, with pixel values between 0 and 255 [39]. Figure 6 shows FCGR images of a sample sequence with K = 3, 4 and 5, respectively.

Support vector machine
Support vector machine (SVM) is a commonly used binary classification model. Compared with other classification algorithms, it has a good classification effect and strong generalization ability on small datasets, and it can also handle nonlinear classification problems through kernel techniques. Thus, support vector machines have been widely used in bioinformatics [19,21,23]. The basic idea is to map the samples from the original low-dimensional space to a high-dimensional feature space, in which a separating hyperplane with the largest margin can be found to separate samples of different categories. In this paper, we use the Python package scikit-learn 0.23, which can be downloaded from https://scikit-learn.org/stable/index.html. This package contains an SVM module whose implementation is based on libsvm. We train the SVM with the radial basis function (RBF) kernel and consider two parameters: the penalty parameter C and the kernel coefficient gamma. In the training process, we used grid search to determine the best values of the two parameters.
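The grid search over C and gamma can be set up as follows; a sketch using scikit-learn, where the grid values and the toy data are illustrative rather than the settings used in the paper:

```python
import numpy as np
from sklearn.model_selection import GridSearchCV
from sklearn.svm import SVC

# Toy stand-in for the flattened FCGR feature vectors and their labels
rng = np.random.default_rng(0)
X = rng.normal(size=(60, 16))
y = (X[:, 0] + X[:, 1] > 0).astype(int)

# RBF-kernel SVM with a grid over penalty C and kernel coefficient gamma
param_grid = {"C": [0.1, 1, 10], "gamma": ["scale", 0.1, 1]}
search = GridSearchCV(SVC(kernel="rbf"), param_grid, cv=5)
search.fit(X, y)
best_C = search.best_params_["C"]  # chosen by cross-validated accuracy
```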

Extreme learning machine
Extreme learning machine (ELM) was proposed by Guang-Bin Huang. It is a machine learning algorithm based on single-hidden-layer feedforward neural networks (SLFNs). Compared with traditional algorithms, ELM has a faster learning speed while maintaining learning accuracy. Its core idea is to randomly select the input-layer weights and hidden-layer biases of the network and obtain the corresponding hidden-node outputs [41]. The network structure of the ELM model is shown in Fig. 7. The experiments used David Lambert's Python implementation of ELM; resources can be downloaded from the ELM web portal (https://www.ntu.edu.sg/home/egbhuang/), and the code can be found at https://github.com/dclambert/Python-ELM.
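Because the input weights are fixed at random, ELM training reduces to a single least-squares solve for the output weights; a minimal numpy sketch (hidden-layer size, activation and toy data are illustrative):

```python
import numpy as np

def elm_train(X, Y, n_hidden=50, seed=0):
    """Fix random input weights/biases, then solve output weights by pseudoinverse."""
    rng = np.random.default_rng(seed)
    W = rng.normal(size=(X.shape[1], n_hidden))   # random input weights (never trained)
    b = rng.normal(size=n_hidden)                 # random hidden biases
    H = np.tanh(X @ W + b)                        # hidden-layer output matrix
    beta = np.linalg.pinv(H) @ Y                  # output weights via Moore-Penrose inverse
    return W, b, beta

def elm_predict(X, W, b, beta):
    return np.tanh(X @ W + b) @ beta

# Toy separable data: the label is the sign of the first feature
rng = np.random.default_rng(1)
X = rng.normal(size=(200, 8))
Y = (X[:, 0] > 0).astype(float).reshape(-1, 1)
W, b, beta = elm_train(X, Y)
acc = np.mean((elm_predict(X, W, b, beta) > 0.5) == (Y > 0.5))
```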

Extreme gradient boosting
Extreme gradient boosting (XGBoost) is an open-source machine learning project developed by Tianqi Chen et al. [42]. It is a boosting algorithm characterized by high efficiency, flexibility, high accuracy and strong portability, and it has been applied in the field of biomedicine [43].
The idea of the XGBoost algorithm is to continuously add trees, performing feature splitting to grow each tree. Each added tree learns a new function to fit the residual of the previous prediction. When training is completed, K trees are obtained. To predict the score of a sample, the sample's features route it to a corresponding leaf node in each tree, and each leaf node corresponds to a score; the predicted value of the sample is the sum of the scores over all trees.

Table 16 The frequency matrix of the CGR image of an H. sapiens nucleosome-inhibiting sample
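The additive prediction just described (one leaf score per tree, summed over K trees) can be sketched as follows; the stump-like trees and feature names here are hypothetical illustrations, not a trained XGBoost model:

```python
# Each "tree" routes a sample to a leaf and returns that leaf's score;
# the boosted prediction is the sum of leaf scores over all K trees.
trees = [
    lambda x: 0.6 if x["gc_content"] > 0.5 else -0.4,   # hypothetical split
    lambda x: 0.3 if x["fcgr_cell_7"] > 2 else -0.1,    # hypothetical split
    lambda x: 0.2 if x["gc_content"] > 0.6 else 0.1,    # hypothetical split
]

def predict_score(x):
    return sum(tree(x) for tree in trees)  # add up the K leaf scores

sample = {"gc_content": 0.55, "fcgr_cell_7": 1}
score = predict_score(sample)  # leaves hit: 0.6, -0.1, 0.1
```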

Multilayer perceptron
Multilayer perceptron (MLP) is also called a deep neural network (DNN) [44]. MLP extends the perceptron: multiple hidden layers are introduced between the input layer and the output layer, and neurons in adjacent layers are fully connected. Thus, both the hidden layers and the output layer in an MLP are fully connected layers.
For the MLP, we used the AI Studio experimental platform (https://aistudio.baidu.com/aistudio/index) and the PaddlePaddle deep learning framework (https://www.paddlepaddle.org.cn/) provided by Baidu (https://www.baidu.com/) to implement the experimental model in Python (https://www.python.org/). The MLP has three hidden layers with the ReLU activation function [45], each containing 50 neurons; the output layer uses a softmax activation function. The MLP is trained for 5 epochs with the Adamax optimizer and a learning rate of 0.001. The Adamax algorithm is a variant of the Adam algorithm based on the infinity norm, which makes the learning-rate update more stable and simple [46]. We use cross entropy as the loss function.
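The forward pass of this architecture (three 50-unit ReLU hidden layers and a two-way softmax output) can be sketched in numpy; the random weights below stand in for trained PaddlePaddle parameters, and the input size assumes a flattened 64 × 64 FCGR image:

```python
import numpy as np

def relu(z):
    return np.maximum(0.0, z)

def softmax(z):
    e = np.exp(z - z.max(axis=1, keepdims=True))  # numerically stable softmax
    return e / e.sum(axis=1, keepdims=True)

def mlp_forward(x, params):
    h = x
    for W, b in params[:-1]:
        h = relu(h @ W + b)        # hidden layers: 50 ReLU units each
    W, b = params[-1]
    return softmax(h @ W + b)      # 2-way softmax: core DNA vs linker DNA

rng = np.random.default_rng(0)
sizes = [4096, 50, 50, 50, 2]      # flattened 64x64 image in, 2 classes out
params = [(rng.normal(scale=0.01, size=(a, b)), np.zeros(b))
          for a, b in zip(sizes[:-1], sizes[1:])]
probs = mlp_forward(rng.normal(size=(3, 4096)), params)
```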

Convolutional neural network
Convolutional neural network (CNN) is a representative deep learning algorithm. It has demonstrated extraordinary advantages in computer vision and has also been widely used in bioinformatics [47,48]. Convolutional neural networks automatically extract features from input data; compared with fully connected neural networks, they simplify model complexity and effectively reduce the number of model parameters [49]. Applied to the general framework of image processing, they consist mainly of convolutional layers, activation functions, pooling layers and fully connected layers [49,50].
Owing to the limited sample size, we need to prevent overfitting during training, so we add a batch normalization (BN) layer [51] after each convolutional layer and a dropout layer [52] after the fully connected layer. In our network, the convolutional layers use 3 × 3 kernels; the first layer has 64 filters and the second has 32. The pooling layers use 2 × 2 max pooling with stride = 2. The first fully connected layer has 100 neurons and the second has 50; the subsequent dropout layer has a dropout probability of 0.5. Except for the softmax activation in the output layer, all other layers use ReLU. The CNN is trained for 20 epochs with the Adamax optimizer and a learning rate of 0.001; the loss function is cross entropy. As with the MLP, we used the AI Studio experimental platform and the PaddlePaddle deep learning framework provided by Baidu to implement the model in Python. The specific network structure is shown in Fig. 8.
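The feature-map sizes implied by this configuration can be checked with a short calculation, assuming 'same' padding on the 3 × 3 convolutions (the paper does not state the padding explicitly):

```python
def conv_same(size):
    return size                  # 3x3 convolution with 'same' padding keeps spatial size

def pool(size, k=2, s=2):
    return (size - k) // s + 1   # 2x2 max pooling with stride 2 halves the size

size = 64                        # 64 x 64 FCGR input image
size = pool(conv_same(size))     # conv (64 filters) + BN + 2x2 pool -> 32 x 32
size = pool(conv_same(size))     # conv (32 filters) + BN + 2x2 pool -> 16 x 16
flat = 32 * size * size          # channels x H x W fed to the 100-unit FC layer
```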