A probabilistic coevolutionary biclustering algorithm for discovering coherent patterns in gene expression dataset
BMC Bioinformatics volume 13, Article number: S12 (2012)
Abstract
Background
Biclustering has been utilized to find functionally important patterns in biological problems. Here, a bicluster is a submatrix that consists of a subset of rows and a subset of columns of a matrix and contains homogeneous patterns. Finding biclusters remains challenging because of the computational complexity of capturing patterns across two-dimensional features.
Results
We propose a Probabilistic COevolutionary Biclustering Algorithm (PCOBA) that can cluster the rows and columns of a matrix simultaneously by utilizing a dynamic adaptation of multiple species and adopting probabilistic learning. A coevolutionary search is suitable for biclustering problems, since it can optimize the interdependent subcomponents formed by rows and columns. Furthermore, acquiring statistical information on the two populations through probabilistic learning can improve the ability of the search to reach the optimum. We evaluated the performance of PCOBA on synthetic datasets and yeast expression profiles. The results demonstrated that PCOBA outperformed previous evolutionary computation methods as well as other biclustering methods.
Conclusions
Our approach for searching for particular biological patterns could be valuable for systematically understanding functional relationships between genes and other biological components at a genome-wide level.
Background
Since many kinds of biological data can be represented as a two-dimensional matrix, it is important to find the hidden structure contained within such a matrix. Here, the hidden structure can mean clusters embedded in subspaces of a high-dimensional dataset [1]. The problem of finding these structures can be solved using biclustering, which is also known as co-clustering or block clustering [2–5]. A bicluster is a submatrix that consists of a subset of the rows (e.g., genes) and a subset of the columns (e.g., conditions) of the matrix. The purpose of biclustering is to find submatrices that consist of homogeneous elements in rows, columns, or both. Biclustering has been applied to diverse areas such as frequent itemset mining, information retrieval, and gene expression analysis [4, 6].
Biclustering has been intensively studied in molecular biology research, as the expression levels of thousands of genes can be measured experimentally using microarrays [7]. DNA microarray data are represented as a matrix of expression levels of genes under different conditions, corresponding to a set of rows and a set of columns. Here, the conditions usually include environments, diseases, and tissues. A biclustering algorithm tries to find a subset of the genes showing similar behavior under multiple conditions. The biclustering problem is known to be an NP-hard combinatorial problem [2].
Biclustering problems are more complex than one-way clustering problems because of the coupled landscapes of their search space. They reflect the issues encountered in evolving the interdependent subcomponents considered in coevolutionary learning: the rows and columns of a matrix can be regarded as interdependent subcomponents, and if a biclustering algorithm is permitted to exploit the interaction between these subcomponents, it can search a coupled landscape efficiently. For example, Potter and De Jong demonstrated the problem-solving capability of cooperative coevolutionary systems [8, 9], and a subsequent study by Zaritsky and Sipper presented good results on the Shortest Common Superstring (SCS) problem using a cooperative coevolutionary algorithm [10].
Here, we propose a Probabilistic COevolutionary Biclustering Algorithm (PCOBA) to find functional groups of genes and corresponding conditions from microarray datasets. It is based on the concept of coevolutionary learning and probabilistic searching. The most distinctive idea of PCOBA is that it decomposes the entire search space into subcomponents to discover hidden patterns in the matrix. In this algorithm, two populations, corresponding to a subset of rows and a subset of columns, are maintained. Coevolutionary learning evolves the two different populations within the context of each other [11–13]. PCOBA guides these populations towards the minimum of the objective function representing the quality of the biclusters through cooperation between two populations.
When applied to synthetic datasets and yeast microarray data, the results demonstrate that incorporating probabilistic search improves the ability of PCOBA to find biclusters. The resulting patterns are well enriched with known annotations that are consistent with biological knowledge. Our approach for searching for important biological patterns could be utilized to uncover relationships between genes and other biological components at a genome-wide level.
Methods
Biclustering of microarray data
In gene expression data, a bicluster is defined as a subset of the genes and a subset of the conditions. Let G = {g_{1}, g_{2}, ..., g_{ N }} be a set of genes and C = {c_{1}, c_{2}, ..., c_{ M }} be a set of conditions, such as different tissue samples. The data can be represented as an N × M matrix with real values, denoted as E. Each entry, e_{ ij }, in E indicates the expression level of a gene, g_{ i }, under a specific condition, c_{ j }.
Let I be the set of row indices belonging to a row cluster, and J be the set of column indices belonging to a column cluster, where I ⊆ {1,...,N} and J ⊆ {1,...,M}. Thus, a bicluster is a submatrix, B = (I, J), with |I| ≤ N and |J| ≤ M, where I and J indicate the sets of genes (rows) and conditions (columns), respectively. The volume of a bicluster, (I, J), is defined as the number of entries e_{ ij } with i ∈ I and j ∈ J.
An example of a data matrix, E, and a bicluster, (I, J), is shown in Figure 1. In this example, the bicluster could be B = ({g_{1}, g_{2}, g_{5}, g_{8}}, {c_{2}, c_{3}, c_{5}}) in the expression matrix.
To find a bicluster of the required quality, we first consider the mean squared residue (MSR), as proposed by Cheng and Church [2]. This is the mean of the squared residues of all the elements in a submatrix, (I, J),

{H}_{IJ}=\frac{1}{\left|I\right|\left|J\right|}\sum _{i\in I,j\in J}{h}_{ij}^{2},

where h_{ ij } is the residue of an element e_{ ij } in the bicluster determined by the index sets I and J. The residue of an element e_{ ij } is defined as

{h}_{ij}={e}_{ij}-{e}_{iJ}-{e}_{Ij}+{e}_{IJ}.

The residue is the difference between the actual value of an entry and its expected value. As the residue of an entry decreases, the coherence of its row and column becomes stronger. Here

{e}_{iJ}=\frac{1}{\left|J\right|}\sum _{j\in J}{e}_{ij},\phantom{\rule{1em}{0ex}}{e}_{Ij}=\frac{1}{\left|I\right|}\sum _{i\in I}{e}_{ij},\phantom{\rule{1em}{0ex}}{e}_{IJ}=\frac{1}{\left|I\right|\left|J\right|}\sum _{i\in I,j\in J}{e}_{ij},

where e_{ iJ } indicates the mean of the entries in row i whose column indices are in J, e_{ Ij } indicates the mean of the entries in column j whose row indices are in I, and e_{ IJ } is the mean of all the entries in the submatrix determined by I and J.
If MSR alone is used to measure the quality of a bicluster, then trivial biclusters, such as biclusters showing no fluctuation in expression level, may be found. The row variance is used to reject such trivial biclusters:

VA{R}_{IJ}=\frac{1}{\left|I\right|\left|J\right|}\sum _{i\in I,j\in J}{\left({e}_{ij}-{e}_{iJ}\right)}^{2}.

By adding this term to the objective function, it is possible to detect genuine fluctuations in the gene expression levels under some conditions or samples.
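The two measures above, the mean squared residue and the row variance used to reject trivial biclusters, can be sketched in Python with NumPy. This is a minimal illustration of the definitions, not the authors' implementation:

```python
import numpy as np

def mean_squared_residue(E, I, J):
    """Mean squared residue H(I, J) of the submatrix of E on rows I, columns J."""
    B = E[np.ix_(I, J)]
    row_means = B.mean(axis=1, keepdims=True)   # e_iJ
    col_means = B.mean(axis=0, keepdims=True)   # e_Ij
    total_mean = B.mean()                       # e_IJ
    residues = B - row_means - col_means + total_mean  # h_ij
    return (residues ** 2).mean()

def row_variance(E, I, J):
    """Row variance of the submatrix; near zero for flat (trivial) biclusters."""
    B = E[np.ix_(I, J)]
    row_means = B.mean(axis=1, keepdims=True)
    return ((B - row_means) ** 2).mean()

# A perfectly additive pattern has zero residue but non-zero row variance.
E = np.add.outer(np.array([0.0, 10.0, 20.0]), np.array([1.0, 2.0, 3.0]))
print(mean_squared_residue(E, [0, 1, 2], [0, 1, 2]))
print(row_variance(E, [0, 1, 2], [0, 1, 2]) > 0)
```

The additive example shows why both terms are needed: the residue alone cannot distinguish a coherent shifting pattern from a constant block, while the row variance can.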
To find a bicluster, we define an objective function to be minimized, built from the following criteria:

1. Minimizing the mean squared residue, H_{ IJ }. If the mean squared residue of a specific bicluster is lower than a parameter value, δ, then the bicluster is called a δ-bicluster.

2. Maximizing the variance, which favors highly coherent yet non-trivial biclusters.

3. Maximizing the volume, which means including a large number of genes and conditions.
Probabilistic coevolutionary biclustering
Various attempts have been made to find biclusters in microarray data [2, 14–16], and several evolutionary algorithms for biclustering have been proposed. Bleuler et al. introduced an evolutionary algorithm coupled with earlier biclustering algorithms [17]. Mitra et al. proposed a multi-objective evolutionary biclustering algorithm incorporating local search strategies [18]. These studies demonstrated that evolutionary algorithms can successfully improve the quality of biclusters. The search strategy of our algorithm differs from those using conventional operators: it utilizes the global statistical information of two cooperative populations, so that its ability to search for biclusters is more effective. The key idea is that the algorithm coevolves two populations, one for the gene set and one for the condition set, each adapting cooperatively to the other.
The pseudo code of PCOBA is shown in Figure 2. Each individual in the population of gene (or condition) sets is encoded as a binary vector that represents a subset selected from the gene (or condition) set. The fitness of each individual is determined by the degree of cooperation between that individual and individuals of the other population. The two populations are updated using statistical information extracted from the previous populations together with a mutation operator. After setting the parameters, the initial populations, Pop_{ G } of size μ and Pop_{ C } of size ν, are created randomly from the gene and condition sets. Each individual is evaluated by measuring the fitness function. Then, sets of the best individuals, S_{ g } and S_{ c }, are selected from the current populations. Next, the probability vectors P_{ G } and P_{ C } are updated using an update rule based on the distribution of the selected individuals. The populations of the next generation are generated from the updated probability vectors.
Coevolutionary optimization
The population of the gene set, Pop_{ G }, and that of the condition set, Pop_{ C }, consist of {x_{1}, x_{2}, ..., x_{ μ }} and {y_{1}, y_{2}, ..., y_{ ν }}, respectively. Here, each individual x_{ i } is encoded by a binary string, \left({x}_{i}^{1},{x}_{i}^{2},\dots ,{x}_{i}^{N}\right)\in {\left\{0,1\right\}}^{N}, that represents the presence of individual genes from the set {g_{1}, g_{2}, ..., g_{ N }}.
In addition, y_{ j } for a given set of conditions is encoded in the same way as x_{ i }. Therefore, the total search space is Ω = {0, 1}^{N} × {0, 1}^{M}. A bicluster, (I, J), corresponds to the indices with value 1 in a pair (x_{ i }, y_{ j }): I = {k : x_{i}^{k} = 1} and J = {k : y_{j}^{k} = 1}.
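A minimal sketch of this encoding, assuming NumPy arrays for the binary membership vectors (the variable names are illustrative):

```python
import numpy as np

rng = np.random.default_rng(0)
N, M = 8, 5  # number of genes and conditions

# One individual per population: a binary membership vector.
x = rng.integers(0, 2, size=N)  # gene-set individual from Pop_G
y = rng.integers(0, 2, size=M)  # condition-set individual from Pop_C

# An (x, y) pair decodes to the bicluster (I, J): indices where the bit is 1.
I = np.flatnonzero(x)
J = np.flatnonzero(y)
volume = len(I) * len(J)  # number of entries e_ij covered by the bicluster
print(I, J, volume)
```

Note that neither population alone describes a bicluster; only a pair drawn from both populations forms a complete solution, which is what motivates the cooperative fitness evaluation below.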
Fitness evaluation
The score function is designed to measure the quality of a bicluster [19]. The minimum score denotes the best quality: a low mean squared residue, a high variance, and a large volume. Such a bicluster indicates that the expression patterns of many genes are similar under many different conditions.
The score consists of three terms: a residue term, a variance term, and a volume term.
First, RES indicates the residue score:

RES=\left\{\begin{array}{ll}{H}_{IJ},& {H}_{IJ}>\delta \\ constant,& otherwise\end{array}\right.

If H_{ IJ } is greater than δ, then RES reflects the mean squared residue; otherwise it is set to a constant. Here, δ is predefined by the user. When RES is a constant, the fitness can concentrate more on the variance and volume terms.
Second, the variance term is weighted by a parameter, w_{ b }, which controls the contribution of the variance among all the terms.
Finally, the volume term is weighted by a control parameter, w_{ v }, which sets the importance of the volume among the terms; the weight parameters w_{ g } and w_{ c } keep a balance between the genes and the conditions.
The fitness of each individual is measured from the score defined above, and is obtained when the individual forms a complete solution (i.e., a bicluster) with an individual of the other species. An individual of the other species is referred to as a "collaborator". The fitness of an individual x_{ i } is

f\left({x}_{i}\right)=\underset{y\in Po{p}_{C}}{min}\phantom{\rule{0.3em}{0ex}}Score\left({x}_{i},y\right),

and that of y_{ j } is

f\left({y}_{j}\right)=\underset{x\in Po{p}_{G}}{min}\phantom{\rule{0.3em}{0ex}}Score\left(x,{y}_{j}\right).
The minimum score determines the fitness of each individual when it is combined with individuals from the other population. In terms of coevolution, individuals are adapted cooperatively to the other population.
Here, it may not be necessary to compute the scores of all the (x, y) pairs to evaluate the fitness. If the algorithm calculated all pairwise scores in order to select the best collaborator, the evaluation cost would be high. To reduce the evaluation cost, we applied the following strategy: for each y_{ j }, the algorithm randomly selects R collaborators, R ≤ M, and calculates only their scores. Thus, the total number of evaluations in each generation is reduced to R⋅ν. Since this strategy can affect performance, an appropriate value of R (≥ 10% of M) should be chosen carefully.
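The collaborator-based evaluation can be sketched as follows. For simplicity, the score here is the mean squared residue alone; the full PCOBA score also weighs variance and volume, so this illustrates the random-collaborator sampling strategy rather than the exact fitness:

```python
import numpy as np

def score(E, x, y):
    """Simplified bicluster score: mean squared residue of the decoded submatrix.
    (The full PCOBA score adds variance and volume terms.)"""
    I, J = np.flatnonzero(x), np.flatnonzero(y)
    if len(I) == 0 or len(J) == 0:
        return np.inf  # an empty bicluster is worthless
    B = E[np.ix_(I, J)]
    h = B - B.mean(1, keepdims=True) - B.mean(0, keepdims=True) + B.mean()
    return (h ** 2).mean()

def fitness(E, x, collaborators, R, rng):
    """Fitness of x: minimum score over R randomly sampled collaborators."""
    picks = rng.choice(len(collaborators), size=min(R, len(collaborators)),
                       replace=False)
    return min(score(E, x, collaborators[k]) for k in picks)

rng = np.random.default_rng(1)
E = rng.uniform(0, 500, size=(20, 10))
pop_C = rng.integers(0, 2, size=(30, 10))  # condition-set population
x = rng.integers(0, 2, size=20)            # one gene-set individual
print(fitness(E, x, pop_C, R=5, rng=rng))  # score of the best sampled collaborator
```

Sampling only R collaborators per individual trades a small loss of fitness accuracy for a large reduction in the number of score evaluations per generation.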
Probabilistic update of a population
The next population is generated by sampling from a probability distribution and by a mutation operator. While the probabilistic update of the populations utilizes statistical information from the previous generation, the mutation operator exploits the location of good solutions in the solution space. Such a combination of an EDA with a conventional operator [20, 21] can improve the optimality and convergence of conventional genetic algorithms.
In probabilistic learning, the two populations, Pop_{ G } and Pop_{ C }, maintain probability vectors {P}_{G}=\left({p}_{g}^{1},{p}_{g}^{2},...,{p}_{g}^{N}\right) for the gene set and {P}_{C}=\left({p}_{c}^{1},{p}_{c}^{2},...,{p}_{c}^{M}\right) for the condition set, respectively. The initial vectors have a uniform distribution. The probabilities are updated using the following equations,

{p}_{g}^{k}\leftarrow \left(1-\alpha \right){p}_{g}^{k}+\alpha \frac{1}{\left|{S}_{g}\right|}\sum _{x\in {S}_{g}}{x}^{k},\phantom{\rule{1em}{0ex}}k=1,...,N,

{p}_{c}^{k}\leftarrow \left(1-\beta \right){p}_{c}^{k}+\beta \frac{1}{\left|{S}_{c}\right|}\sum _{y\in {S}_{c}}{y}^{k},\phantom{\rule{1em}{0ex}}k=1,...,M,

where α ∈ (0, 1) and β ∈ (0, 1) are the parameters controlling the updates. This updating rule is similar to that of the population-based incremental learning (PBIL) algorithm [22]. In each generation, two sets of best individuals, S_{ g } and S_{ c }, are selected based on fitness, and each probability is updated towards the fraction of ones at the corresponding position in the selected individuals. This probabilistic model for generating the next population is relatively simple.
We applied an additional mutation operator to generate offspring, because it helps to increase the diversity of the population. The number of individuals selected for mutation differed from |S_{ g }| and |S_{ c }|, and was set to maintain a sufficient selection pressure. Thus, half of the next population was generated from the probability distribution, and the other half was generated by the mutation operator.
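A sketch of this update step under the assumptions above, with a PBIL-style learning rule and an independent bit-flip mutation (function names and parameter values are illustrative):

```python
import numpy as np

def update_probability(P, selected, alpha):
    """PBIL-style rule: pull P toward the per-position fraction of ones
    among the selected best individuals."""
    return (1 - alpha) * P + alpha * selected.mean(axis=0)

def next_population(P, best, size, mutation_rate, rng):
    """Half the offspring are sampled from P; the other half are mutated
    copies of the best individuals (independent bit flips)."""
    half = size // 2
    sampled = (rng.random((half, len(P))) < P).astype(int)
    parents = best[rng.integers(0, len(best), size=size - half)]
    flips = rng.random(parents.shape) < mutation_rate
    mutated = np.where(flips, 1 - parents, parents)
    return np.vstack([sampled, mutated])

rng = np.random.default_rng(2)
N, mu = 12, 20
P_G = np.full(N, 0.5)                    # uniform initial distribution
S_g = rng.integers(0, 2, size=(4, N))    # selected best gene-set individuals
P_G = update_probability(P_G, S_g, alpha=0.1)
pop = next_population(P_G, S_g, size=mu, mutation_rate=0.01, rng=rng)
print(pop.shape)  # (20, 12)
```

The convex-combination form of the update keeps every probability inside [0, 1], while the mutated half of the offspring preserves diversity that pure sampling from P would lose.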
Other evolutionary algorithms
Here, we describe the three evolutionary algorithms used for comparison with PCOBA.
Genetic algorithm (GA)
The genotype of a bicluster is a single continuous bit string, \left({x}_{i}^{1},{x}_{i}^{2},\dots ,{x}_{i}^{N},{y}_{i}^{1},{y}_{i}^{2},\dots ,{y}_{i}^{M}\right). Here, reproduction and mutation are used as genetic operators. A crossover operator was not applied in this study, since crossover tends to form biclusters with a high volume, which hinders obtaining good solutions. In reproduction, individuals were selected using proportional selection. The population size was 100, and the mutation rate was set to 0.05.
Coevolutionary genetic algorithm (CGA)
Unlike in a conventional genetic algorithm, the genotype of a bicluster is not a single continuous bit string; it is separated into two parts. The genetic operators are the same as in the genetic algorithm, and the method of coevolution is the same as in PCOBA.
Estimation of the distribution algorithm (EDA)
The encoding of individuals was the same as in the genetic algorithm. However, the next population was generated from a probability vector based on the PBIL algorithm together with a mutation operator, as in PCOBA. The probability vector was \left({p}_{g}^{1},{p}_{g}^{2},...,{p}_{g}^{N},{p}_{c}^{1},{p}_{c}^{2},...,{p}_{c}^{M}\right).
Results
Experimental data preparation and parameter setting
We performed experiments on both synthetic datasets and a yeast gene expression dataset to evaluate the performance of PCOBA. The synthetic datasets, E_{ a }, E_{ b }, and E_{ c }, were noisy matrices resembling gene expression datasets, with homogeneous block structures (submatrices coupling genes and conditions) embedded in them. The matrices were filled with random values ranging from 0 to 500, and then a fixed number of clusters were embedded. First, we examined whether PCOBA could find a single homogeneous block structure in E_{ a }, which embeds only one bicluster. E_{ a } is a noisy matrix of 100 rows × 20 columns with a single embedded structure of 16 × 9.
Furthermore, we studied whether PCOBA was able to find multiple homogeneous block structures in E_{ b }, which embeds multiple biclusters. Although the volumes of these datasets are relatively small, finding biclusters can still be difficult when a block is very homogeneous. Therefore, we designed E_{ b } to embed more homogeneous blocks: it contains three different structures (16 rows × 9 columns, 10 rows × 5 columns, and 10 rows × 10 columns) in a noisy matrix of 100 × 20. The residue scores of these structures were below δ = 20. Here, δ is the threshold on the residue score, and a lower score means higher-quality biclusters.
E_{ c } was used to examine the ability to find biclusters in a higher-dimensional dataset. Real datasets, such as gene expression data, are composed of large matrices. In general, as the dimension of a matrix grows, the volume of the biclusters increases, and the matrix contains a larger number of biclusters. We designed the synthetic dataset E_{ c } with these conditions in mind: it is a 1,500 × 30 matrix containing three 100 × 15 structures, all with residue scores below δ = 300.
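A construction in this spirit can be sketched as follows, using the E_a dimensions. The additive block model is an assumption on our part, chosen so that the embedded submatrix has near-zero mean squared residue before any noise is considered:

```python
import numpy as np

rng = np.random.default_rng(3)

# Background: a 100 x 20 noisy matrix with values in [0, 500], as for E_a.
E = rng.uniform(0, 500, size=(100, 20))

# Embed one 16 x 9 additive (hence low-residue) block at chosen rows/columns.
I = rng.choice(100, size=16, replace=False)
J = rng.choice(20, size=9, replace=False)
row_effect = rng.uniform(0, 400, size=16)
col_effect = rng.uniform(0, 100, size=9)
E[np.ix_(I, J)] = np.add.outer(row_effect, col_effect)

# The embedded block itself has (near-)zero mean squared residue.
B = E[np.ix_(I, J)]
h = B - B.mean(1, keepdims=True) - B.mean(0, keepdims=True) + B.mean()
print((h ** 2).mean())  # ~0, up to floating-point error
```

Because the block is exactly additive, any biclustering method driven by the residue score should be able to recover it; making the block noisier or smaller raises the difficulty, as described for E_b and E_c.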
The real dataset consisted of gene expression profiles from yeast microarrays. Typically, a microarray experiment assesses the expression of a large number of genes under various conditions. These conditions may be a time series during a biological process, or a collection of different tissue samples, e.g., normal versus tumor tissues. The performance of our proposed algorithm was measured using the cell cycle expression data of the yeast Saccharomyces cerevisiae obtained from Tavazoie et al. [23]. The matrix contains the expression levels of 2,884 genes (rows) under 17 conditions (columns). Missing values were replaced by random numbers sampled from a uniform distribution between 0 and 600.
The experimental parameters are listed in Table 1 (see Methods section). For datasets with large dimensions, we gave greater weight to the volume term, and within the volume term we gave greater weight to the genes than to the conditions. The sizes of S_{ g } and S_{ c } corresponded to 20% of the population size. The selection ratio of the best individuals for mutation was set to 50%, and the mutation rate was set to 0.01. These values were chosen empirically from multiple runs; for example, when α and β were small, the algorithm showed a stable performance on the whole.
Searching biclusters using the PCOBA
We observed the optimization characteristics of PCOBA while running it with the parameter settings of Table 1. Figure 3 shows the simulation results of PCOBA on the synthetic dataset E_{ a }. The fitness decreased rapidly during the first 20 generations, and the score curve was similar to the fitness curve. This means that PCOBA concentrated on the mean residue score for about the first 20 generations. In Figure 3(c), the variance increases after about 20 generations. Although the variance fluctuated over the generations in each run, the general trend is revealed by plotting averages of the variance. The volume curve is shown in Figure 3(d): after the algorithm reached a minimum volume, the volume increased continually. Although we illustrate the optimization process with the E_{ a } dataset, PCOBA showed similar characteristics on the other datasets.
Comparison with other evolutionary algorithms
In this section, we present a comparison of the performance between PCOBA and other evolutionary algorithms. The purpose of this comparison was to analyze the effect of coevolution, an estimation of the distribution, and finally the potential synergy of two different strategies.
We applied four different algorithms, the Genetic Algorithm (GA), the Coevolutionary Genetic Algorithm (CGA) [11], the Estimation of Distribution Algorithm (EDA) [24], and the proposed PCOBA, to the synthetic datasets. For a fair comparison, the number of evaluations was the same for all algorithms. First, the runs for the E_{ a } and E_{ b } datasets were terminated after the following numbers of iterations: for GA and EDA, a population of 100 × 1,000 generations; for CGA and PCOBA, a population of 100 × 10 selected individuals × 100 generations. Here, the 10 selected individuals correspond to the R value (see Methods section) used to reduce the evaluation cost. Second, for the E_{ c } dataset, the number of iterations was set to a population of 1,000 × 1,000 generations for GA and EDA, and a population of 1,000 × 10 selected individuals × 100 generations for CGA and PCOBA.
A comparison of the performance of PCOBA and the other evolutionary algorithms is shown in Table 2. The results contain the averages and standard deviations over 100 runs. The fitness was mostly affected by the residue score. For the E_{ a } and E_{ b } datasets, the residue scores of PCOBA outperformed those of the other three algorithms. Our algorithm could find biclusters with coherent elements better than a conventional evolutionary algorithm such as the simple GA, which often failed to find a homogeneous block structure; the higher standard deviations of its scores and fitness reflect these failures. Although CGA and EDA showed better residue scores than GA, they were not superior to PCOBA. EDA tended to improve the volume score, whereas CGA tended to focus on the residue score. This result suggests that PCOBA takes advantage of both algorithms. Since E_{ b } forms a multimodal landscape, each run attempted to find a single local optimum with a residue score below δ. On the E_{ b } dataset, which embeds multiple homogeneous blocks, the fitness values of all algorithms were better than those on the E_{ a } dataset.
Usually, real datasets such as gene expression data have large dimensions and contain multiple homogeneous blocks, which makes it difficult to obtain good solutions. Thus, E_{ c } was used to evaluate the performance with respect to scalability in the dataset size. All the algorithms found scores below δ, and their average scores differed little. However, PCOBA achieved a markedly higher value for the volume term.
Comparison with other biclustering algorithms
We compared the performance of PCOBA with the Cheng and Church (CC) and Order-Preserving Submatrix (OPSM) biclustering algorithms on the cell cycle expression data of the yeast Saccharomyces cerevisiae. The CC algorithm was proposed by Cheng and Church [2] and employs a relaxed greedy heuristic search. We set the parameter δ of the CC algorithm to the same value as ours. OPSM was introduced by Ben-Dor et al. [25]; it was designed to discover biclusters exhibiting coherent behavior across the columns, and therefore focuses on the relative order of the columns.
The performance of the three algorithms is presented in Table 3. All the averages and standard deviations are computed over the ten best biclusters from one run. The residue score of our algorithm was similar to that of the CC algorithm; the average residue score was below 220. The average variance of PCOBA was marginally better than that of the CC algorithm. Although the OPSM algorithm yielded high-variance results, its residue score was inferior to those of PCOBA and the CC algorithm. Because OPSM induces high variance, it tends to show poorer residue scores; this may be due to its focus on coherent column behavior when finding biclusters. With respect to the volume, the average volumes of the three algorithms were similar. However, the CC and OPSM algorithms tended to find larger gene sets; among all the biclusters they found, the volume of a single bicluster dominated the others. The biclusters found by PCOBA never exceeded a volume of 200, but PCOBA balanced the three terms suitably as a whole, and in this respect it outperformed the other algorithms.
Functional analysis of the discovered clusters by PCOBA
To validate the discovered biclusters, we analysed the functional correlations between the clustered genes with Protein Interaction Network Analysis (PINA) [26] for the yeast dataset. We show the two biclusters with the greatest biological significance in this study. Table S1 (Additional File 1) presents these two biclusters with their most enriched GO biological process terms and KEGG pathways (p-value < 0.01). In particular, 'cell cycle' is assigned as an enriched pathway in Cluster I, whose members are highly connected by protein interactions. Terms related to 'metabolic process' are enriched in Cluster II; the metabolism of methionine has been associated with cell cycle progression [27]. These properties confirm the biological relevance of the identified biclusters.
Conclusions
We have proposed a biclustering algorithm (PCOBA) that can cluster the rows and columns of a two-dimensional matrix simultaneously, based on coevolutionary searching. PCOBA can be considered a synergistic optimization technique that combines a coevolutionary search with a population-based probabilistic search. In particular, it is a novel algorithm for obtaining highly correlated patterns from the two coupled variables of a matrix-form dataset. It offers an efficient procedure for discovering coherent patterns, since it decomposes the task through coevolutionary searching and exploits global statistical information from previous generations in the complex problem of a large-scale matrix. The performance of PCOBA was tested on synthetic datasets, where it outperformed conventional evolutionary computation methods, including a genetic algorithm, a coevolutionary genetic algorithm, and an estimation of distribution algorithm. In addition, the results on yeast expression datasets showed that our method can offer biclusters of higher quality with regard to coherent patterns. Our method provides substantial guidance for the development of algorithms for finding hidden patterns in matrix-form datasets generated in various research fields, including biology.
References
 1.
Yang J, Wang W, Wang H, Yu P: δ-Cluster: capturing subspace correlation in a large data set. Proceedings of the 18th International Conference on Data Engineering (ICDE 2002). 2002, 517-528.
 2.
Cheng Y, Church G: Biclustering of expression data. Proceedings of the International Conference on Intelligent Systems for Molecular Biology (ISMB 2000). 2000, 93-103.
 3.
Gupta R, Rao N, Kumar V: Discovery of error-tolerant biclusters from noisy gene expression data. BMC Bioinformatics. 2011, 12 (Suppl 12): S1. 10.1186/1471-2105-12-S12-S1.
 4.
Liu J, Li Z, Hu X, Chen Y, Park E: Dynamic biclustering of microarray data by multi-objective immune optimization. BMC Genomics. 2011, 12 (Suppl 2): S11. 10.1186/1471-2164-12-S2-S11.
 5.
Smet R, Marchal K: An ensemble biclustering approach for querying gene expression compendia with experimental lists. Bioinformatics. 2011, 27 (14): 1948-1956. 10.1093/bioinformatics/btr307.
 6.
Dhillon IS, Mallela S, Modha DS: Information theoretic co-clustering. Proceedings of the 9th International Conference on Knowledge Discovery and Data Mining (KDD 2003). 2003, 89-98.
 7.
Madeira SC, Oliveira AL: Biclustering algorithms for biological data analysis: a survey. IEEE/ACM Transactions on Computational Biology and Bioinformatics. 2004, 1 (1): 24-45. 10.1109/TCBB.2004.2.
 8.
Potter MA, De Jong KA: A cooperative coevolutionary approach to function optimization. Proceedings of the Third Conference on Parallel Problem Solving from Nature (PPSN 1994). 1994, 249-257.
 9.
Potter MA, De Jong KA: Cooperative coevolution: an architecture for evolving coadapted subcomponents. Evolutionary Computation. 2000, 8: 1-29. 10.1162/106365600568086.
 10.
Zaritsky A, Sipper M: Coevolving solutions to the shortest common superstring problem. BioSystems. 2004, 76: 209-216. 10.1016/j.biosystems.2004.05.013.
 11.
Hillis DW: Co-evolving parasites improve simulated evolution in an optimization procedure. Physica D. 1990, 42: 228-234. 10.1016/0167-2789(90)90076-2.
 12.
Axelrod R: The evolution of strategies in the iterated prisoner's dilemma. Genetic Algorithms and Simulated Annealing. Edited by: Davis L. 1987, 32-41.
 13.
Barricelli NA: Numerical testing of evolution theories, part I: theoretical introduction and basic tests. Acta Biotheoretica. 1962, 16: 69-98. 10.1007/BF01556771.
 14.
Yang J, Wang W, Wang H, Yu P: Enhanced biclustering on expression data. Proceedings of the Third IEEE Conference on Bioinformatics and Bioengineering (BIBE 2003). 2003, 321-327.
 15.
Wu CJ, Kasif S: GEMS: a web server for biclustering analysis of expression data. Nucleic Acids Research. 2005, 33: W596-W599. 10.1093/nar/gki469.
 16.
Prelic A, Bleuler S, Zimmermann P, Wille A, Buhlmann P, Gruissem W, Hennig L, Thiele L, Zitzler E: A systematic comparison and evaluation of biclustering methods for gene expression data. Bioinformatics. 2006, 22 (9): 1122-1129. 10.1093/bioinformatics/btl060.
 17.
Bleuler S, Prelić A, Zitzler E: An EA framework for biclustering of gene expression data. Proceedings of the Congress on Evolutionary Computation (CEC 2004). 2004, 166-173.
 18.
Mitra S, Banka H, Pal SK: A MOE framework for biclustering of microarray data. Proceedings of the 18th International Conference on Pattern Recognition (ICPR 2006). 2006, 1154-1157.
 19.
Divina F, Aguilar-Ruiz J: Biclustering of expression data with evolutionary computation. IEEE Transactions on Knowledge & Data Engineering. 2006, 18 (5): 590-602.
 20.
Pena JM, Robles V, Larranaga P, Herves V, Rosales F, Perez MS: GA-EDA: hybrid evolutionary algorithm using genetic and estimation of distribution algorithms. Proceedings of the 17th International Conference on Industrial and Engineering Applications of Artificial Intelligence and Expert Systems. 2004, 361-371.
 21.
Zhang Q, Sun J, Tsang E: An evolutionary algorithm with guided mutation for the maximum clique problem. IEEE Transactions on Evolutionary Computation. 2005, 9 (2): 192-200. 10.1109/TEVC.2004.840835.
 22.
Baluja S: Population-based incremental learning: a method for integrating genetic search based function optimization and competitive learning. School of Computer Science, Carnegie Mellon University, Pittsburgh, PA, Tech. Rep. CMU-CS-94-163. 1994.
 23.
Tavazoie S, Hughes J, Campbell M, Cho R, Church G: Systematic determination of genetic network architecture. Nature Genetics. 1999, 22: 281-285. 10.1038/10343.
 24.
Pelikan M, Goldberg DE, Lobo F: A survey of optimization by building and using probabilistic models. Computational Optimization and Applications. 2002, 21 (1): 5-20. 10.1023/A:1013500812258.
 25.
Ben-Dor A, Chor B, Karp R, Yakhini Z: Discovering local structure in gene expression data: the order-preserving submatrix problem. J Comput Biol. 2003, 10: 373-384. 10.1089/10665270360688075.
 26.
Cowley M, Pinese M, Kassahn K, Waddell N, Pearson J, Grimmond S, Biankin A, Hautaniemi S, Wu J: PINA v2.0: mining interactome modules. Nucleic Acids Research. 2012, 40: D862-D865. 10.1093/nar/gkr967.
 27.
Dummitt B, Micka WS, Chang YH: N-terminal methionine removal and methionine metabolism in Saccharomyces cerevisiae. Journal of Cellular Biochemistry. 2003, 89: 964-974. 10.1002/jcb.10566.
Acknowledgements
This work was supported by the National Research Foundation of Korea (NRF) grant funded by the Korea government (MEST) (No. 20120005643) and Basic Science Research Program through the National Research Foundation of Korea (NRF) funded by the Ministry of Education, Science and Technology (2012R1A1A2002804).
This article has been published as part of BMC Bioinformatics Volume 13 Supplement 17, 2012: Eleventh International Conference on Bioinformatics (InCoB2012): Bioinformatics. The full contents of the supplement are available online at http://www.biomedcentral.com/bmcbioinformatics/supplements/13/S17.
Author information
Affiliations
Corresponding author
Additional information
Competing interests
The authors declare that they have no competing interests.
Authors' contributions
JGJ implemented the method and wrote the manuscript. SJK analyzed the data and wrote the manuscript. SYS wrote the manuscript. BTZ supervised the study.
Electronic supplementary material
Rights and permissions
Open Access This article is published under license to BioMed Central Ltd. This is an Open Access article distributed under the terms of the Creative Commons Attribution License ( https://creativecommons.org/licenses/by/2.0 ), which permits unrestricted use, distribution, and reproduction in any medium, provided the original work is properly cited.
About this article
Cite this article
Joung, JG., Kim, SJ., Shin, SY. et al. A probabilistic coevolutionary biclustering algorithm for discovering coherent patterns in gene expression dataset. BMC Bioinformatics 13, S12 (2012). https://doi.org/10.1186/1471-2105-13-S17-S12
DOI: https://doi.org/10.1186/1471-2105-13-S17-S12
Keywords
 Synthetic Dataset
 Gene Expression Dataset
 Volume Term
 Conventional Genetic Algorithm
 Mean Square Residue