Incorporating biological prior knowledge for Bayesian learning via maximal knowledge-driven information priors
BMC Bioinformatics volume 18, Article number: 552 (2017)
Abstract
Background
Phenotypic classification is problematic because small samples are ubiquitous and, for these, the use of prior knowledge is critical. If knowledge concerning the feature-label distribution – for instance, genetic pathways – is available, then it can be used in learning. Optimal Bayesian classification provides optimal classification under model uncertainty. It differs from classical Bayesian methods, in which a classification model is assumed and prior distributions are placed on model parameters. With optimal Bayesian classification, uncertainty is treated directly on the feature-label distribution, which assures full utilization of prior knowledge and is guaranteed to outperform classical methods.
Results
The salient problem confronting optimal Bayesian classification is prior construction. In this paper, we propose a new prior construction methodology based on a general framework of constraints in the form of conditional probability statements. We call this prior the maximal knowledge-driven information prior (MKDIP). The new constraint framework is more flexible than our previous methods, as it naturally handles potential inconsistency in archived regulatory relationships, and conditioning can be augmented by other knowledge, such as population statistics. We also extend the application of prior construction to a multinomial mixture model when labels are unknown, which often occurs in practice. The performance of the proposed methods is examined on two important pathway families, the mammalian cell-cycle pathway and a set of p53-related pathways, and also on a publicly available gene expression dataset of non-small cell lung cancer when combined with existing prior knowledge on relevant signaling pathways.
Conclusion
The newly proposed general prior construction framework extends our prior construction methodology to a more flexible framework that results in better inference when proper prior knowledge exists. Moreover, the extension of optimal Bayesian classification to multinomial mixtures, where data sets are both small and unlabeled, enables superior classifier design using small, unstructured data sets. We have demonstrated the effectiveness of our approach using pathway information and available knowledge of gene-regulating functions; however, the underlying theory can be applied to a wide variety of knowledge types and to other small-sample applications.
Background
Small samples are commonplace in phenotypic classification and, for these, prior knowledge is critical [1, 2]. If knowledge concerning the feature-label distribution is available, say, genetic pathways, then it can be used to design an optimal Bayesian classifier (OBC) for which uncertainty is treated directly on the feature-label distribution. As is typical with Bayesian methods, the salient obstacle confronting the OBC is prior construction. In this paper, we propose a new prior construction framework that incorporates gene regulatory knowledge via general types of constraints in the form of probability statements quantifying the probabilities of gene up- and down-regulation conditioned on the regulatory status of other genes. We extend the application of prior construction to a multinomial mixture model when labels are unknown, a key issue confronting the use of data arising from unplanned experiments in practice.
Regarding prior construction, E. T. Jaynes has remarked [3], “…there must exist a general formal theory of determination of priors by logical analysis of prior information – and that to develop it is today the top priority research problem of Bayesian theory”. It is precisely this kind of formal structure that is presented in this paper. The formal structure involves a constrained optimization in which the constraints incorporate existing scientific knowledge augmented by slackness variables. The constraints tighten the prior distribution in accordance with prior knowledge, while at the same time avoiding inadvertent over-restriction of the prior, an important consideration with small samples.
Subsequent to the introduction of Jeffreys’ noninformative prior [4], there was a series of information-theoretic and statistical methods: maximal data information priors (MDIP) [5], noninformative priors for integers [6], entropic priors [7], reference (noninformative) priors obtained through maximization of the missing information [8], and least-informative priors [9] (see also [10–12] and the references therein). The principle of maximum entropy can be seen as a method of constructing least-informative priors [13, 14], though it was first introduced in statistical mechanics for assigning probabilities. Except for Jeffreys’ prior, almost all of these methods are based on optimization: maximizing or minimizing an objective function, usually an information-theoretic one. The least-informative prior in [9] is found among a restricted set of distributions, where the feasible region is a set of convex combinations of certain types of distributions. In [15], several noninformative and informative priors for different problems are derived. All of these methods emphasize the separation of prior knowledge and observed sample data.
Although the methods above are appropriate tools for generating prior probabilities, they are quite general methodologies that do not target any specific type of prior information. In that regard, the problem of prior selection, in any Bayesian paradigm, is usually treated conventionally (even “subjectively”) and independently of the actual available prior knowledge and sample data.
Figure 1 shows a schematic view of the proposed mechanism for Bayesian operator design.
A priori knowledge in the form of graphical models (e.g., Markov random fields) has been widely utilized in covariance matrix estimation in Gaussian graphical models. In these studies, using a given graphical model illustrating the interactions between variables, different problems have been addressed: e.g., constraints on the matrix structure [16, 17] or known independencies between variables [18, 19]. Nonetheless, these studies rely on a fundamental assumption: that the given prior knowledge is complete and hence yields a single solution. However, in many applications, including genomics, the given prior knowledge is uncertain, incomplete, and possibly inconsistent. Therefore, instead of interpreting the prior knowledge as a single solution, e.g., a single deterministic covariance matrix, we aim at constructing a prior distribution on an uncertainty class.
In a different approach to prior knowledge, gene–gene relationships (pathway-based or protein–protein interaction (PPI) networks) are used to improve classification accuracy [20–26], consistency of biomarker discovery [27, 28], accuracy of identifying differentially expressed genes and regulatory target genes of a transcription factor [29–31], and targeted therapeutic strategies [32, 33]. The majority of these studies utilize gene expressions corresponding to subnetworks in PPI networks, for instance: the mean or median of gene expression values in gene ontology network modules [20], probabilistic inference of pathway activity [24], and producing candidate subnetworks via a Markov clustering algorithm applied to high-quality PPI networks [26, 34]. To the best of our knowledge, none of these methods incorporate the regulating mechanisms (activating or suppressing) into classification or feature selection.
The fundamental difference of the work presented in this paper is that we develop machinery to transform the knowledge contained in biological signaling pathways into prior probabilities. We propose a general framework capable of incorporating any source of prior information by extending our previous prior construction methods [35–37]. We call the final prior distribution constructed via this framework a maximal knowledge-driven information prior (MKDIP). The new MKDIP construction consists of two steps: (1) pairwise and functional information quantification: information in the biological pathways is quantified by an information-theoretic formulation; (2) objective-based prior selection: combining sample data and prior knowledge, we build an objective function in which the expected mean log-likelihood is regularized by the quantified information from step 1. As a special case, where we do not have any sample data, or there is only one data point available for constructing the prior probability, the proposed framework reduces to a regularized extension of the maximum entropy principle (MaxEnt) [38].
Owing to population heterogeneity, we often face a mixture model, for example, when considering tumor-sample heterogeneity where the assignment of a sample to any subtype or stage is not necessarily given. Thus, we derive the MKDIP construction and OBC for a mixture model. In this paper, we assume that data are categorical, e.g. binary or ternary gene-expression representations. Such categorical representations have many potential applications, including those wherein we only have access to a coarse set of measurements, e.g. epifluorescent imaging [39], rather than fine-resolution measurements such as microarray or RNA-Seq data. Finally, we emphasize that, in our framework, no single model is selected; instead, we consider all possible models as the uncertainty class that can be representative of the available prior information and assign probabilities to each model via the constructed prior.
Methods
Notation
Boldface lower case letters represent column vectors. Occasionally, concatenation of several vectors is also shown by a boldface lower case letter. For a vector a, a_0 represents the summation of all of its elements and a_i denotes its ith element. Probability sample spaces are shown by calligraphic uppercase letters. Uppercase letters are for sets and random variables (vectors). The probability measure over the random variable (vector) X is denoted by P(X), whether it be a probability density function or a probability mass function. E_X[f(X)] represents the expectation of f(X) with respect to X. P(x|y) denotes the conditional probability P(X=x|Y=y). θ represents the generic parameters of a probability measure; for instance, P(X|Y;θ) (or P_θ(X|Y)) is the conditional probability parameterized by θ. γ represents generic hyperparameter vectors. π(θ;γ) is the probability measure over the parameters θ governed by hyperparameters γ, the parameters themselves governing another probability measure over some random variables. Throughout the paper, the terms “pathway” and “network” are used interchangeably, as are the terms “feature” and “variable”. \(\mathcal {M}ult(\boldsymbol {p};n)\) and \(\mathcal {D}(\boldsymbol {\alpha })\) represent a multinomial distribution with vector parameter p and n samples, and a Dirichlet distribution with vector parameter α, respectively.
Review of optimal Bayesian classification
Binary classification involves a feature vector X=(X_1,X_2,…,X_d)^T∈ℜ^d composed of random variables (features), a binary random variable (label) Y, and a classifier ψ(X) to predict Y. The error is ε[ψ]=P(ψ(X)≠Y). An optimal classifier, ψ_bay, called a Bayes classifier, has minimal error, called the Bayes error, among all possible classifiers. The underlying probability model for classification is the joint feature-label distribution. It determines the class prior probabilities c_0=c=P(Y=0) and c_1=1−c=P(Y=1), and the class-conditional densities f_0(x)=P(x|Y=0) and f_1(x)=P(x|Y=1). A Bayes classifier is given by
If the feature-label distribution is unknown but belongs to an uncertainty class of feature-label distributions parameterized by the vector θ∈Θ, then, given a random sample S_n, an optimal Bayesian classifier (OBC) minimizes the expected error over Θ:
where the expectation is relative to the posterior distribution π^∗(θ) over Θ, which is derived from the prior distribution π(θ) using Bayes’ rule [40, 41]. If we let θ_0 and θ_1 denote the class-0 and class-1 parameters, then we can write θ as θ=[c,θ_0,θ_1]. If we assume that c, θ_0, θ_1 are independent prior to observing the data, i.e. π(θ)=π(c)π(θ_0)π(θ_1), then the independence is preserved in the posterior distribution, π^∗(θ)=π^∗(c)π^∗(θ_0)π^∗(θ_1), and the posteriors are given by \(\pi ^{\ast }(\boldsymbol {\theta }_{y})\propto \pi (\boldsymbol {\theta }_{y})\prod _{i=1}^{n_{y}}f_{\boldsymbol {\theta }_{y}}(\mathbf {x}_{i}^{y}|y)\) for y=0,1, where \(f_{\boldsymbol {\theta }_{y}}(\mathbf {x}_{i}^{y}|y)\) and n_y are the class-conditional density and the number of sample points for class y, respectively [42].
Given a classifier ψ_n designed from a random sample S_n, from the perspective of mean-square error, the best error estimate minimizes the MSE between its true error (a function of θ and ψ_n) and an error estimate (a function of S_n and ψ_n). This Bayesian minimum-mean-square-error (MMSE) estimate is given by the expected true error, \(\widehat {\varepsilon }(\psi _{n},S_{n})=\mathrm {E}_{\boldsymbol {\theta }}[\varepsilon (\psi _{n},\boldsymbol {\theta })|S_{n}]\), where ε(ψ_n,θ) is the error of ψ_n on the feature-label distribution parameterized by θ and the expectation is taken relative to the prior distribution π(θ) [42]. The expectation given the sample is over the posterior probability. Thus, \(\widehat {\varepsilon }(\psi _{n},S_{n})=\mathrm {E}_{\pi ^{\ast }}[\varepsilon ]\).
The effective classconditional density for class y is defined by
Θ _{ y } being the space for θ _{ y }, and an OBC is given pointwise by [40]
For discrete classification there is no loss of generality in assuming a single feature X taking values in the set {1,…,b} of “bins”. Classification is determined by the class-0 prior probability c and the class-conditional probability mass functions p_i=P(X=i|Y=0) and q_i=P(X=i|Y=1), for i=1,…,b. With uncertainty, we assume beta class priors and define the parameters θ_0={p_1,p_2,…,p_{b−1}} and θ_1={q_1,q_2,…,q_{b−1}}. The bin probabilities must be valid. Thus, {p_1,p_2,…,p_{b−1}}∈Θ_0 if and only if 0≤p_i≤1 for i=1,…,b−1 and \(\sum _{i=1}^{b-1}p_{i} \leq 1\), in which case \(p_{b}=1-\sum _{i=1}^{b-1}p_{i}\). We use the Dirichlet priors
where \(\alpha _{i}^{y}>0\). These are conjugate priors, leading to posteriors of the same form. The effective class-conditional densities are
for y=0,1, and the OBC is given by
where \(U_{j}^{y}\) denotes the observed count for class y in bin j [40]. Hereafter, \(\sum _{i=1}^{b}\alpha _{i}^{y}\) is represented by \(\alpha _{0}^{y}\), i.e. \(\alpha _{0}^{y} = \sum _{i=1}^{b}\alpha _{i}^{y}\), and is called the precision factor. In the sequel, the sub(super)script relating to dependency on class y may be dropped; nonetheless, availability of prior knowledge for both classes is assumed.
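As an illustration, the discrete OBC above can be computed directly from the hyperparameters and observed bin counts: the effective class-conditional density of bin j for class y is \((\alpha_j^y + U_j^y)/(\alpha_0^y + n_y)\). The following sketch uses made-up counts and, for simplicity, assumes a known, equal class probability c = 0.5 (rather than a beta posterior on c):

```python
# Sketch of the discrete OBC with Dirichlet priors.  All counts and
# hyperparameters below are illustrative, not taken from the paper.

def effective_density(alpha, counts):
    """Effective class-conditional bin probabilities under a Dirichlet
    prior: (alpha_j + U_j) / (alpha_0 + n)."""
    a0 = sum(alpha)
    n = sum(counts)
    return [(a + u) / (a0 + n) for a, u in zip(alpha, counts)]

def obc_discrete(alpha0, counts0, alpha1, counts1, c=0.5):
    """Return the OBC as a list: the predicted class for each bin,
    assigning the class maximizing c_y times the effective density."""
    f0 = effective_density(alpha0, counts0)
    f1 = effective_density(alpha1, counts1)
    return [0 if c * p >= (1 - c) * q else 1 for p, q in zip(f0, f1)]

# Example: b = 3 bins, uniform hyperparameters, small samples per class.
psi = obc_discrete([1, 1, 1], [6, 2, 0], [1, 1, 1], [1, 2, 5])
# psi == [0, 0, 1]: bins 0 and 1 go to class 0, bin 2 to class 1.
```

With nonuniform hyperparameters the prior pulls the decision toward the bins it favors, which is precisely how pathway knowledge encoded in α influences the designed classifier.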
Multinomial mixture model
In practice, data may not be labeled, due to potential tumortissue sample or stage heterogeneity, but still we want to classify a new sample point. A mixture model is a natural model for this scenario, assuming each sample point x _{ i } arises from a mixture of multinomial distributions:
where M is the number of components. With two components, similar to binary classification, M=2. The conjugate prior distribution family for the component probabilities (if unknown) is the Dirichlet distribution. In the mixture model, no closed-form analytical posterior distribution for the parameters exists, but Markov chain Monte Carlo (MCMC) methods [43] can be employed to numerically calculate the posterior distributions. Since the conditional distributions can be calculated analytically in the multinomial mixture model, Gibbs sampling [44, 45] can be employed for the Bayesian inference. If the prior probability distribution over the component probability vector (c=[c _{0},c _{1},…,c _{ M }]) is a Dirichlet distribution \(\mathcal {D}(\boldsymbol {\phi })\) with parameter vector ϕ, the component-conditional probabilities are \(\boldsymbol {\theta }_{j}=[ p^{j}_{1},p^{j}_{2},\ldots,p^{j}_{b}]\), and the prior probability distribution over them is the Dirichlet \(\mathcal {D}(\boldsymbol {\alpha ^{j}})\) with parameter vector α^j (as in the classification problem), for j=1,…,M, then the Gibbs updates are
where the superscript in parentheses denotes the chain iteration number, and I_w is one if w is true and zero otherwise. In this framework, if the inference chain runs for I_s iterations, then the numerical approximation of the OBC classification rule is
Without loss of generality, the summation above can be over the iterations of the chain after accounting for burn-in and thinning.
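The Gibbs updates above can be sketched as follows, assuming each sample point is a single bin observation in {0,…,b−1}; the hyperparameters and chain length are illustrative, and burn-in/thinning are omitted for brevity:

```python
import random

def rdirichlet(alpha):
    """Draw from a Dirichlet distribution via normalized Gamma draws."""
    g = [random.gammavariate(a, 1.0) for a in alpha]
    s = sum(g)
    return [x / s for x in g]

def gibbs_multinomial_mixture(data, M, b, phi, alphas, iters=500, seed=0):
    """Gibbs sampler for an M-component multinomial mixture over b bins.
    data: list of bin indices.  Returns the averaged posterior draws of
    the component probabilities c and the bin-probability vectors theta."""
    random.seed(seed)
    n = len(data)
    z = [random.randrange(M) for _ in range(n)]      # initial labels
    c_sum = [0.0] * M
    th_sum = [[0.0] * b for _ in range(M)]
    for _ in range(iters):
        # Sample c | z, then theta_j | z, data (conjugate Dirichlet updates).
        counts = [z.count(j) for j in range(M)]
        c = rdirichlet([phi[j] + counts[j] for j in range(M)])
        theta = []
        for j in range(M):
            bins = [0] * b
            for x, zi in zip(data, z):
                if zi == j:
                    bins[x] += 1
            theta.append(rdirichlet([alphas[j][k] + bins[k] for k in range(b)]))
        # Sample the component label z_i | c, theta, x_i.
        for i, x in enumerate(data):
            w = [c[j] * theta[j][x] for j in range(M)]
            s = sum(w)
            r, acc, zi = random.random() * s, 0.0, M - 1
            for j in range(M):
                acc += w[j]
                if r <= acc:
                    zi = j
                    break
            z[i] = zi
        for j in range(M):
            c_sum[j] += c[j]
            for k in range(b):
                th_sum[j][k] += theta[j][k]
    return ([s / iters for s in c_sum],
            [[s / iters for s in row] for row in th_sum])
```

The posterior draws collected along the chain are exactly what the numerical OBC approximation above averages over.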
Prior construction: general framework
In this section, we propose a general framework for prior construction. We begin by introducing a knowledge-driven prior probability:
Definition 1
(Maximal Knowledge-driven Information Prior) If Π is a family of proper priors, then a maximal knowledge-driven information prior (MKDIP) is a solution to the following optimization problem:
where C_θ(ξ,D) is a cost function that depends on (1) θ: the random vector parameterizing the underlying probability distribution, (2) ξ: the state of (prior) knowledge, and (3) D: a partial observation (part of the sample data).
Alternatively, by parameterizing the prior probability as π(θ;γ), with γ∈Γ denoting the hyperparameters, an MKDIP can be found by solving
In contrast to noninformative priors, the MKDIP incorporates available prior knowledge and even part of the data to construct an informative prior.
The MKDIP definition is intentionally very general, since we seek a general framework for prior construction. The next definition specializes it to cost functions of a specific form in a constrained optimization.
Definition 2
(MKDIP: Constrained Optimization with Additive Costs) As a special case in which C _{ θ } can be decomposed into additive terms, the cost function is of the form:
where β is a nonnegative regularization parameter. In this case, the MKDIP construction with additive costs and constraints involves solving the following optimization problem:
where \(g^{(3)}_{\boldsymbol {\theta },i}\), ∀i∈{1,…,n_c}, are constraints resulting from the state of knowledge ξ via a mapping:
In the sequel, we will refer to g^{(1)}(·) and g^{(2)}(·) as the cost functions, and the \(g_{i}^{(3)}(\cdot)\)’s as the knowledge-driven constraints. We begin by introducing information-theoretic cost functions, and then propose a general set of mapping rules, denoted by \(\mathcal {T}\) in Definition 2, to convert biological pathway knowledge into mathematical forms. We then consider special cases with information-theoretic cost functions.
Informationtheoretic cost functions
Unlike classical statistical inference, where least squares (or mean-squared error) serves as the standard cost function, there is no universal cost function in the prior construction literature. That being said, in this paper we utilize several widely used cost functions in the field:

1.
(Maximum Entropy) The principle of maximum entropy (MaxEnt) for probability construction [38] leads to the least informative prior given the constraints, in order to prevent adding spurious information. Under our general framework, MaxEnt can be formulated by setting:
$$\beta=0,~g_{\boldsymbol{\theta}}^{(1)} = H[\boldsymbol{\theta}], $$where H[.] denotes the Shannon entropy.

2.
(Maximal Data Information) The maximal data information prior (MDIP), introduced by Zellner [46], is a criterion for the constructed probability distribution to remain maximally committed to the data [47]. To achieve MDIP, we can set our general framework with:
$$\begin{aligned} \beta=0,~g_{\boldsymbol{\theta}}^{(1)} &= \ln \pi(\boldsymbol{\theta};\boldsymbol{\gamma}) + H[P(x|\boldsymbol{\theta})]\\ &= \ln \pi(\boldsymbol{\theta};\boldsymbol{\gamma}) - E_{x|\boldsymbol{\theta}}[\ln P(x|\boldsymbol{\theta})]. \end{aligned} $$ 
3.
(Expected Mean Log-likelihood) The cost function introduced in [35] is the first that utilizes part of the observed data for prior construction. There, we have
$$\beta =1,~g^{(2)}_{\boldsymbol{\theta}}=\ell(\boldsymbol{\theta};D), $$where \(\ell (\boldsymbol {\theta };D)=\frac {1}{n_{D}}\sum _{i=1}^{n_{D}}\log f(\boldsymbol {x}_{i}|\boldsymbol {\theta })\) is the mean log-likelihood function of the sample points used for prior construction (D), and n_D denotes the number of sample points in D. In [35], it is shown that this cost function is equivalent to the average Kullback-Leibler distance between the unknown distribution (empirically estimated by some part of the samples) and the uncertainty class of distributions.
As originally proposed, the preceding approaches did not involve an expectation over the uncertainty class. They were extended to the general prior construction form in Definition 1, including the expectation, in [36], producing the regularized maximum entropy prior (RMEP), the regularized maximal data information prior (RMDIP), and the regularized expected mean log-likelihood prior (REMLP). In all cases, optimization was subject to specialized constraints.
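When closed forms for these expectations over the uncertainty class are unavailable, they can be approximated by Monte Carlo sampling from a candidate prior π(θ;γ). A rough sketch for a Dirichlet prior (the sampling scheme and sample sizes are illustrative, not the paper's implementation):

```python
import math, random

def rdirichlet(alpha):
    """Draw from a Dirichlet distribution via normalized Gamma draws."""
    g = [random.gammavariate(a, 1.0) for a in alpha]
    s = sum(g)
    return [x / s for x in g]

def expected_entropy(alpha, n_mc=2000, seed=1):
    """Monte Carlo estimate of E_pi[H[p]] for p ~ Dirichlet(alpha):
    the entropy objective g^(1) averaged over the uncertainty class."""
    random.seed(seed)
    total = 0.0
    for _ in range(n_mc):
        p = rdirichlet(alpha)
        total += -sum(pi * math.log(pi) for pi in p if pi > 0)
    return total / n_mc

def expected_mean_loglik(alpha, data, n_mc=2000, seed=1):
    """Monte Carlo estimate of E_pi[l(theta; D)] for bin-index data D:
    the expected mean log-likelihood objective g^(2)."""
    random.seed(seed)
    total = 0.0
    for _ in range(n_mc):
        p = rdirichlet(alpha)
        total += sum(math.log(p[x]) for x in data) / len(data)
    return total / n_mc
```

Either estimate can then be plugged into the regularized objective as a function of the hyperparameters γ = α and optimized numerically.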
For MKDIP, we employ the same information-theoretic cost functions in the prior construction optimization framework. MKDIP-E, MKDIP-D, and MKDIP-R correspond to using the same cost functions as RMEP, RMDIP, and REMLP, respectively, but with the new general types of constraints. To wit, we employ functional information from the signaling pathways and show that, by adding these new constraints that can be readily derived from prior knowledge, we can improve both supervised (classification with labeled data) and unsupervised (mixture modeling without labels) learning of Bayesian operators.
From prior knowledge to mathematical constraints
In this part, we present a general formulation for mapping existing knowledge into a set of constraints. In most scientific problems, prior knowledge comes in the form of conditional probabilities. In the following, we consider a hypothetical gene network and show how each component of a given network can be converted into corresponding inequalities serving as general constraints in the MKDIP optimization.
Before proceeding, we would like to say something about contextual effects on regulation. Because a regulatory model is not independent of cellular activity outside the model, complete control relations such as A→B in the model, meaning that gene B is upregulated if and only if gene A is upregulated (after some time delay), do not necessarily translate into conditional probability statements of the form P(X_B=1|X_A=1)=1, where X_A and X_B represent the binary gene values corresponding to genes A and B, respectively. Rather, what may be observed is P(X_B=1|X_A=1)=1−δ, where δ>0. The pathway A→B need not imply P(X_B=1|X_A=1)=1 because A→B is conditioned on the context of the cell, where by context we mean the overall state of the cell, not simply the activity being modeled. δ is called a conditioning parameter. In an analogous fashion, rather than P(X_B=1|X_A=0)=0, what may be observed is P(X_B=1|X_A=0)=η, where η>0, because there may be regulatory relations outside the model that upregulate B. Such activity is referred to as crosstalk and η is called a crosstalk parameter. Conditioning and crosstalk effects can involve multiple genes and can be characterized analytically via context-dependent conditional probabilities [48].
Consider binary gene values X _{1},X _{2},…,X _{ m } corresponding to genes g _{1},g _{2},…,g _{ m }. There are m2^{m−1} conditional probabilities of the form
to serve as constraints, the chosen constraints being the conditional probabilities whose values are known (approximately). For instance, if g_2 and g_3 regulate g_1, with X_1=1 when X_2=1 and X_3=0, then, ignoring context effects,
for all k_4,…,k_m. If, however, we take context conditioning into account, then
where δ _{1}(1,0,k _{4},…,k _{ m }) is a conditioning parameter.
Moreover, ignoring context effects,
for all k_4,…,k_m. If, however, we take crosstalk into account, then
where η_1(1,1,k_4,…,k_m), η_1(0,1,k_4,…,k_m), and η_1(0,0,k_4,…,k_m) are crosstalk parameters. In practice it is unlikely that we would know the conditioning and crosstalk parameters for all combinations of k_4,…,k_m; rather, we might just know the average, in which case δ_1(1,0,k_4,…,k_m) reduces to δ_1(1,0), η_1(1,1,k_4,…,k_m) reduces to η_1(1,1), etc.
In this paradigm, the constraints resulting from our state of knowledge are of the following form:
The basic setting is very general: the conditional probabilities are what they are, whether or not they can be expressed in the regulatory form of conditioning or crosstalk parameters. The general scheme includes the previous constraints and approaches proposed in [35] and [36] for the Gaussian and discrete setups. Moreover, in those we can drop the regulatory-set entropy because it is replaced by the set of conditional probabilities based on the regulatory set, whether forward (master predicting slaves) or backward (slaves predicting masters) [48].
In this paradigm, the optimization constraints take the form
where the expectation is with respect to the uncertainty in the model parameters, that is, the distribution of the model parameter θ, and ε_i is a slackness variable. Not all constraints will be used, depending on our prior knowledge. In fact, the fully general conditional probabilities will likely not be used, because they will not be known when there are too many conditioning variables. For instance, we may not know the probability in Eq. (13), but may know the conditioning on a subset of the variables, which can be extracted from some interaction network (e.g. biological pathways). A slackness variable can be considered for each constraint to make the constraint framework more flexible, thereby allowing potential error or uncertainty (and hence potential inconsistencies) in prior knowledge. When using slackness variables, these variables also become optimization parameters, and a linear function (the summation of all slackness variables) times a regularization coefficient is added to the cost function of the optimization in Eq. (12). In other words, with slackness variables, the optimization in Eq. (12) can be written as
where λ _{1} and λ _{2} are nonnegative regularization parameters, and ε and \(\mathcal {E}\) represent the vector of all slackness variables and the feasible region for slackness variables, respectively. For each slackness variable, a possible range can be defined (note that all slackness variables are nonnegative). The higher the uncertainty is about a constraint stemming from prior knowledge, the greater the possible range for the corresponding slackness variable can be (more on this in the “Results and discussion” section).
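The role of the slackness variables can be illustrated with a toy sketch of assembling the penalized objective: each constraint |E_θ[P(·)] − target| ≤ ε_i is either satisfied within its slack or renders the point infeasible, and the slack total is penalized linearly. All names and weights below are hypothetical:

```python
# Toy sketch: hard knowledge constraints softened by slackness variables.
# base_cost stands for lambda_1 * (g^(1) + beta * g^(2)) evaluated at some
# hyperparameters; constraint_vals are the E_theta[P(...)] terms; targets
# are the known conditional-probability values from prior knowledge.

def penalized_cost(base_cost, constraint_vals, targets, eps,
                   lam1=1.0, lam2=1.0):
    """Return lam1*base_cost + lam2*sum(eps), or infinity if any
    constraint violates its slack bound (infeasible point)."""
    for v, t, e in zip(constraint_vals, targets, eps):
        if abs(v - t) > e:
            return float("inf")
    return lam1 * base_cost + lam2 * sum(eps)
```

An outer optimizer would then search jointly over the hyperparameters γ and the slack vector ε within its allowed ranges; looser slack ranges on less-trusted constraints let the optimizer tolerate inconsistent prior knowledge at a linear price.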
The new general type of constraints discussed here introduces a formal procedure for incorporating prior knowledge. It allows the incorporation of knowledge of the functional regulations in the signaling pathways, any constraints on the conditional probabilities, and also knowledge of the crosstalk and conditioning parameters (if present), unlike the previous work in [36] where only partial information contained in the edges of the pathways is used in an ad hoc way.
An illustrative example and connection with conditional entropy
Now, consider a hypothetical network depicted in Fig. 2. For instance, suppose we know that the expression of gene g _{1} is regulated by g _{2}, g _{3}, and g _{5}. Then we have
As an example,
where the notation 1_{2} denotes 1 for the second gene. Further, we might not know a _{1}(k _{2},k _{3},k _{5}) for all combinations of k _{2}, k _{3}, k _{5}. Then we use the ones that we know. In the case of conditioning with g _{2}, g _{3}, and g _{5} regulating g _{1}, with g _{1} on if the others are on,
If, limiting to 3-gene predictors, g_3 and g_5 regulate g_1, with g_1 on if the other two are on, then
meaning that the conditioning parameter depends on whether X _{2}=0 or 1.
Now, considering the conditional entropy, assuming that \(\delta _{1} = \max _{(k_{2},k_{3},k_{5})} \delta _{1}(k_{2},k_{3},k_{5})\) and δ _{1}<0.5, we may write
where h(δ)=−[δ log(δ)+(1−δ) log(1−δ)]. Hence, bounding the conditional probabilities, the conditional entropy is in turn bounded by h(δ _{1}):
It should be noted that constraining H[X_1|X_2,X_3,X_5] would not necessarily constrain the conditional probabilities, and may be considered a more relaxed type of constraint. But, for example, in cases where there is no knowledge about the status of a gene given its regulator genes, constraining the entropy is the only possible approach.
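For reference, a small sketch of the binary entropy function h(δ) appearing in the bound, in natural logarithms; since h is increasing on [0, 0.5], a conditioning parameter δ_1 < 0.5 yields a nontrivial entropy bound h(δ_1):

```python
import math

def h(delta):
    """Binary entropy h(delta) = -[d*log(d) + (1-d)*log(1-d)] in nats;
    h(0) = h(1) = 0 by convention."""
    if delta in (0.0, 1.0):
        return 0.0
    return -(delta * math.log(delta) + (1 - delta) * math.log(1 - delta))
```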
In our illustrative example, if we assume that the Boolean regulating function of X _{1} is known as shown in Fig. 2 and context effects exist, then the following knowledge constraints can be extracted from the pathway and regulating function:
Now if we assume that the context does not affect the value of X _{1}, i.e. the value of X _{1} can be fully determined by knowing the values of X _{2}, X _{3}, and X _{5}, then we have the following equations:
It can be seen from the equations above that, for some setups of the regulator values, only a subset of them determines the value of X_1, regardless of the other regulator values. If we assume that the value of X_5 cannot be observed, for example if X_5 is an extracellular signal that cannot be measured in gene expression data and therefore X_5 is not among the features of our data, then the only constraints relevant to the feature-label distribution that can be extracted from the regulating-function knowledge are
Special case of Dirichlet distribution
Fixing the value of a single gene, being ON or OFF (i.e. X_i=1 or X_i=0, respectively), corresponds to a partition of the states \(\mathcal {X}=\{1,\dots,b\}\). Here, the portions of \(\mathcal {X}\) for which (X_i=k_1, X_j=k_2) and (X_i≠k_1, X_j=k_2), for any k_1,k_2∈{0,1}, are denoted by \(\mathcal {X}^{i,j}(k_{1},k_{2})\) and \(\mathcal {X}^{i,j}(k_{1}^{c},k_{2})\), respectively. For the Dirichlet distribution, where θ=p and γ=α, the constraints on the expectation over the conditional probability in (15) can be explicitly written as functions of the prior probability parameters (hyperparameters). For the parameter of the Dirichlet distribution, a vector α indexed by \(\mathcal {X}\), we denote the summation of its entries over \(\mathcal {X}^{i,j}(k_{1},k_{2})\) by \(\overline {\alpha }^{i,j}(k_{1},k_{2})=\sum _{k \in \mathcal {X}^{i,j}(k_{1},k_{2})}\alpha _{k}\). The notation extends easily to cases having more than two fixed genes. In this setup, if the set of random variables corresponding to genes other than g_i and the vector of their corresponding values are denoted by \(\tilde {X}_{i}\) and \(\tilde {x}_{i}\), respectively, the expectation over the conditional probability in (15) is [36]:
where the summation in the numerator and the first summation in the denominator of the second equality are over the states (bins) for which (\(X_{i} = k_{i}, \tilde {X}_{i} = \tilde {x}_{i}\)), and the second summation in the denominator is over the states (bins) for which (\(X_{i} = k_{i}^{c}, \tilde {X}_{i} = \tilde {x}_{i}\)).
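For binary genes, the expectation above is thus simply a ratio of summed hyperparameters. A small sketch, indexing the b = 2^m bins by binary tuples (the encoding and numbers are illustrative):

```python
from itertools import product

def expected_cond_prob(alpha, m, i, ki, fixed):
    """E[P(X_i = ki | conditioning genes fixed)] under a Dirichlet prior
    over the 2^m binary states.  alpha maps each state tuple in {0,1}^m
    to its hyperparameter; 'fixed' maps gene index -> value for the
    conditioning genes.  Equals alpha-bar(ki, fixed) divided by
    alpha-bar(ki, fixed) + alpha-bar(ki^c, fixed)."""
    num = den = 0.0
    for state in product((0, 1), repeat=m):
        if any(state[j] != v for j, v in fixed.items()):
            continue                     # state inconsistent with conditioning
        a = alpha[state]
        den += a                         # both X_i = ki and X_i = ki^c states
        if state[i] == ki:
            num += a
    return num / den

# Example with m = 2 genes: E[P(X_0 = 1 | X_1 = 1)].
alpha = {(0, 0): 1.0, (0, 1): 2.0, (1, 0): 3.0, (1, 1): 4.0}
val = expected_cond_prob(alpha, m=2, i=0, ki=1, fixed={1: 1})
# val == alpha[(1,1)] / (alpha[(0,1)] + alpha[(1,1)]) == 4/6
```

Each knowledge constraint of the form in Eq. (15) then becomes an inequality on such a ratio, i.e. a smooth function of the hyperparameters α over which the MKDIP optimization runs.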
If there exists a set of genes that completely determines the value of gene g _{ i } (or only a specific setup of their values that determines the value, as we had in our illustrative example in Eq. (17)), then the constraints on the conditional probability conditioned on all the genes other than g _{ i } can be changed to be conditioned on that set only. Specifically, let R _{ i } denote the set of random variables corresponding to such a set of genes/proteins and suppose there exists a specific setup of their values r _{ i } that completely determines the value of gene g _{ i }. If the set of all random variables corresponding to the genes/proteins other than X _{ i } and R _{ i } is denoted by \(\boldsymbol {B}_{i}=\tilde {X}_{(i,\boldsymbol {R}_{i})}\), and their corresponding values by b _{ i }, then the constraints on the conditional probability can be written as
where \(\phantom {\dot {i}\!}O_{\boldsymbol {B}_{i}}\) is the set of all possible vectors of values for B _{ i }.
For a multinomial model with a Dirichlet prior distribution, a constraint on the conditional probabilities translates into a constraint on the above expectation over the conditional probabilities (as in Eq. (15)). In our illustrative example, the equations in Eq. (17) yield four such constraints on the conditional probability for gene g_{1}. For example, for the constraint in the second line of Eq. (17) (Eq. 17b), X_{i}=X_{1}, k_{i}=0, R_{i}={X_{3}}, r_{i}=[0], and B_{i}={X_{2},X_{5}}. Several constraints may be extracted for each gene from its regulatory function (more on extracting general constraints from regulating functions in the “Results and discussion” section).
Results and discussion
The performance of the proposed general prior construction framework with different types of objective functions and constraints is examined and compared with other methods on two pathways: a mammalian cell-cycle pathway and a pathway involving the gene TP53. Here we employ Boolean network modeling of genes/proteins (hereafter referred to as entities or nodes) [49] with perturbation (BNp). A Boolean network with p nodes (genes/proteins) is defined as B=(V,F), where V represents the set of entities {v_{1},…,v_{p}} and F is the set of Boolean predictor functions {f_{1},…,f_{p}}. At each step of a BNp, a Bernoulli random variable with success probability equal to the perturbation probability, p_{pert}, decides for each node whether its value is determined by perturbation (randomly flipping its value) or by the logic imposed by the interactions in the signaling pathways. A BNp with positive perturbation probability can be modeled by an ergodic Markov chain and possesses a steady-state distribution (SSD) [50]. The performance of different prior construction methods can be compared through the expected true error of the optimal Bayesian classifiers designed with those priors, and by comparing these errors with those of other well-known classification methods. Another comparison metric is the expected norm of the difference between the true parameters and the posterior mean of the parameters inferred using the constructed prior distributions. Here, the true parameters are the vectors of true class-conditional SSDs, i.e. the vectors of true class-conditional bin probabilities of the BNp.
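The construction just described can be sketched for a toy network: build the transition probability matrix of a BNp under synchronous updating (with probability p_{pert}, a node's value flips; otherwise its Boolean function fires) and extract the SSD as the stationary left eigenvector. The two-node network and the function names are illustrative, not from the paper:

```python
import itertools
import numpy as np

def bnp_ssd(funcs, p_pert):
    """Steady-state distribution of a Boolean network with perturbation.

    `funcs` is one Boolean predictor per node, each mapping the full state
    tuple to the node's next value. With probability `p_pert` a node's value
    is instead flipped (independently per node), making the chain ergodic.
    Synchronous updating is assumed here for simplicity.
    """
    m = len(funcs)
    states = list(itertools.product([0, 1], repeat=m))
    T = np.zeros((2**m, 2**m))
    for a, x in enumerate(states):
        for b, y in enumerate(states):
            prob = 1.0
            for i, f in enumerate(funcs):
                p_flip = p_pert if y[i] == 1 - x[i] else 0.0
                p_logic = (1 - p_pert) if y[i] == f(x) else 0.0
                prob *= p_flip + p_logic
            T[a, b] = prob
    # stationary distribution: left eigenvector of T for eigenvalue 1
    vals, vecs = np.linalg.eig(T.T)
    ssd = np.real(vecs[:, np.argmax(np.real(vals))])
    return ssd / ssd.sum()

# Two-node toy network: X1' = X2, X2' = X1 AND X2
ssd = bnp_ssd([lambda x: x[1], lambda x: x[0] & x[1]], p_pert=0.01)
print(ssd.round(3))
```

With a small p_{pert} the mass concentrates near the attractors of the perturbation-free logic, which is why the SSD serves as a class-conditional bin-probability vector.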
Moreover, the performance of the proposed framework is compared with other methods on a publicly available gene expression dataset of nonsmall cell lung cancer when combined with the existing prior knowledge on relevant signaling pathways.
Mammalian cell cycle classification
A Boolean logic regulatory network for the dynamical behavior of the cell cycle of normal mammalian cells was proposed in [51]; Fig. 3(a) shows the corresponding pathways. In normal cells, cell division is coordinated via extracellular signals controlling the activation of CycD. Rb is a tumor-suppressor gene and is expressed when the inhibitor cyclins are not present. Expression of p27 blocks the action of CycE or CycA, allowing the tumor-suppressor gene Rb to be expressed even in the presence of CycE or CycA and thereby stopping the cell cycle. Therefore, in the wild-type cell-cycle network, expression of p27 can stop the cell cycle. Following the mutation proposed in [51], however, p27 is always inactive (i.e. can never be activated) in the mutated case, creating a situation where both CycD and Rb might be inactive and the cell can cycle in the absence of any growth factor.
The full functional regulations in the cell-cycle Boolean network are shown in Table 1.
Following [36], for the binary classification problem, y=0 corresponds to the normal system functioning according to Table 1, and y=1 corresponds to the mutated (cancerous) system in which CycD, p27, and Rb are permanently down-regulated (stuck at zero), creating a situation where the cell cycles even in the absence of any growth factor. The perturbation probability is set to 0.01 and 0.05 for the normal and mutated systems, respectively. A BNp has a transition probability matrix (TPM) and, as mentioned earlier, with positive perturbation probability can be modeled by an ergodic Markov chain possessing an SSD [50]. Here, each class has a vector of steady-state bin probabilities resulting from the regulating functions of its corresponding BNp and the perturbation probability. The constructed SSDs are further marginalized to a subset of seven genes to prevent trivial classification scenarios. The final feature vector is x=[E2F, CycE, CycA, Cdc20, Cdh1, UbcH10, CycB], and the state-space size is 2^{7}=128. The true parameters for each class are the final class-conditional steady-state bin probabilities, p^{0} and p^{1} for the normal and mutated systems, respectively, which are used for sampling.
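The marginalization step mentioned above (projecting a full-state SSD onto a gene subset) amounts to summing the probabilities of all full states that agree on the retained genes. A minimal sketch, with an assumed lexicographic state ordering:

```python
import itertools
import numpy as np

def marginalize_ssd(ssd, keep, m):
    """Marginalize a 2^m-state steady-state distribution onto a gene subset.

    `ssd` is indexed by the full binary states (lexicographic order over m
    genes); `keep` lists the indices of the genes to retain. Each reduced bin
    accumulates the probability of every full state that projects onto it.
    """
    reduced = np.zeros(2 ** len(keep))
    for idx, state in enumerate(itertools.product([0, 1], repeat=m)):
        # index of the projected state in the reduced (lexicographic) ordering
        r = 0
        for g in keep:
            r = 2 * r + state[g]
        reduced[r] += ssd[idx]
    return reduced

# Example: a uniform SSD over 3 genes marginalized to genes {0, 2}
p = marginalize_ssd(np.full(8, 1 / 8), keep=[0, 2], m=3)
print(p)  # uniform over the 4 reduced bins
```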
Classification problem corresponding to TP53
TP53 is a tumor suppressor gene involved in various biological pathways [36]. Mutated p53 has been observed in almost half of the common human cancers [52], and in more than 90% of patients with severe ovarian cancer [53]. A simplified pathway involving TP53, based on the logic in [54], is shown in Fig. 3(b). DNA double-strand break affects the operation of these pathways, and Boolean network modeling of these pathways under this uncertainty has been studied in [53, 54]. The full functional regulations are shown in Table 2.
Following [36], two scenarios, dna_dsb=0 and dna_dsb=1, weighted by 0.95 and 0.05, are considered, and the SSD of the normal system is constructed from the ergodic Markov chain model of the BNp with the regulating functions in Table 2, assuming a perturbation probability of 0.01. The SSD for the mutated (cancerous) case is constructed by assuming permanent down-regulation of TP53 in the BNp and a perturbation probability of 0.05. Since dna_dsb is not measurable, and to avoid trivial classification situations, the SSDs are marginalized to a subset of three entities, x=[ATM, Wip1, Mdm2]. The state-space size in this case is 2^{3}=8. The true parameters for each class are the final class-conditional steady-state bin probabilities, p^{0} and p^{1} for the normal and mutated systems, respectively, which are used for data generation.
Extracting general constraints from regulating functions
If knowledge of the regulating functions exists, it can be used in the general constraint framework of the MKDIP, i.e. to constrain the conditional probabilities. In other words, knowledge about the regulating function of gene i can be used to set ε_{i}(k_{1},…,k_{i−1},k_{i+1},…,k_{m}) and \(a^{k_{i}}_{i}(k_{1},\dots,k_{i-1},k_{i+1},\dots,k_{m})\) in the general form of constraints in (15). If the true regulating function of gene i is known, and it is not context sensitive, then the conditional probability of its status, \(a^{k_{i}}_{i}(k_{1},\dots,k_{i-1},k_{i+1},\dots,k_{m})\), is known for sure, and δ_{i}(k_{1},…,k_{i−1},k_{i+1},…,k_{m})=0. But in reality, the true regulating functions are neither known exactly nor context free. The dependence on the context translates into δ_{i}(k_{1},…,k_{i−1},k_{i+1},…,k_{m}) being greater than zero: the greater the context effect on the gene status, the larger δ_{i} is. Moreover, the uncertainty about the regulating function itself is captured by the slackness variables ε_{i}(k_{1},…,k_{i−1},k_{i+1},…,k_{m}) in Eq. (15); that is, the uncertainty is translated into the allowed range of the slackness variable values in the prior construction optimization framework. The higher the uncertainty, the greater this range should be. In fact, the slackness variables make the whole constraint framework consistent, even when the conditional probability constraints imposed by prior knowledge are not completely in line with each other, and guarantee the existence of a solution.
As an example, for the classification problems of the mammalian cell-cycle network and the TP53 network, assuming the regulating functions in Tables 1 and 2 are the true regulating functions, the context effect appears as the dependence of the outputs of the Boolean regulating functions on the extracellular signals, the non-measurable entities, and the genes marginalized out in our setup. In the absence of quantitative knowledge about the context effect, i.e. of \(a^{k_{i}}_{i}(k_{1},\dots,k_{i-1},k_{i+1},\dots,k_{m})\) for all possible setups of the regulator values, one can impose only those constraints for which such knowledge exists. For example, in the mammalian cell-cycle network, CycB’s regulating function depends only on values included in the observed feature set; therefore its conditional probabilities are known for all regulator value setups. But the regulating function of CycE depends on Rb, which is marginalized out of our feature set and itself depends on an extracellular signal. Hence, the conditional probability constraints for CycE are known only for those feature value setups that determine the output of the Boolean regulating function independently of the other regulator values.
In our comparison analysis, \(a^{k_{i}}_{i}(k_{1},\dots,k_{i-1},k_{i+1},\dots,k_{m})\) for each gene/protein in Eq. (15) is set to one for the feature value setups that determine the Boolean regulating output regardless of the context. But since the observed data are not fully described by these functions, and the system has uncertainty, we let the possible range for the slackness variables in Eq. (15) be [0,1).
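The selection just described, finding the regulator value setups that fix a Boolean function's output regardless of the hidden context, can be sketched by enumeration. The helper and the toy function below are illustrative, not the paper's code:

```python
import itertools

def determined_setups(f, observed, m):
    """Enumerate assignments to the observed regulators that fix the output
    of Boolean function `f` regardless of the hidden variables.

    `f` maps a full m-tuple to {0, 1}; `observed` lists the indices of the
    measured genes. Each returned (setup, value) pair corresponds to one
    conditional-probability constraint with a^{k_i}_i = 1 (cf. Eq. (15)).
    """
    hidden = [j for j in range(m) if j not in observed]
    results = []
    for setup in itertools.product([0, 1], repeat=len(observed)):
        outs = set()
        for h in itertools.product([0, 1], repeat=len(hidden)):
            x = [0] * m
            for j, v in zip(observed, setup):
                x[j] = v
            for j, v in zip(hidden, h):
                x[j] = v
            outs.add(f(tuple(x)))
        if len(outs) == 1:           # output is context-independent here
            results.append((setup, outs.pop()))
    return results

# Toy function: next value = X0 OR X2, with X2 hidden. X0 = 1 forces the
# output to 1; X0 = 0 leaves it dependent on the hidden X2 (no constraint).
print(determined_setups(lambda x: x[0] | x[2], observed=[0], m=3))
```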
We now continue the examples with two nodes of the mammalian cell-cycle network, CycB and CycE. For CycB, the following constraints on the prior distribution are extracted from its regulating function:
For CycE, one of its regulators is Rb (v_{2}), which is not included in the feature set, i.e. not observed, but is known to be down-regulated in the mutated (cancerous) case. Thus, the set of constraints extracted from the regulating function of CycE for the normal case includes only
and for the mutated case consists of
As another example, for the TP53 network, the set of constraints extracted from the regulating functions in Table 2 for the normal case are shown in the left panel of Table 3.
The first and second constraints for MKDIP in the left panel of Table 3 come from the regulating function of v _{2} in Table 2. Although v _{1} is an extracellular signal, the value of v _{4} imposes two constraints on the value of v _{2}. But the regulating function of v _{4} in Table 2 only depends on v _{3}, which is not included in our feature set, so we have no imposed constraints on the conditional probability from its regulating function. The other two constraints for MKDIP in the left panel of Table 3 are extracted from the regulating function of v _{5} in Table 2. Although v _{3} is not included in the observed features, for two setups of its regulators, (v _{2}=1) and (v _{2}=0,v _{4}=1), the value of v _{5} can be determined, so the constraint is imposed on the prior distribution from the regulating function. For comparison, the constraints extracted from the pathway in Fig. 3(b) based on the method of [36] are provided in the right panel of Table 3.
Performance comparison in classification setup
For both the mammalian cell-cycle and TP53 problems, 11 methods are compared for classification performance: OBC with the Jeffreys’ prior; OBC with our previous prior construction methods in [36] (RMEP, RMDIP, REMLP); OBC with our proposed general framework of constraints (MKDIP-E, MKDIP-D, MKDIP-R); and the well-known histogram rule (Hist), CART [55], Random Forest (RF) [56], and Support Vector Machine (SVM) [57, 58] classification rules. In addition, for all the Bayesian methods using OBC, the distance of the posterior mean of the parameters from the true parameters is calculated and compared. Samples from the true distributions are drawn by stratified sampling for two different class prior probabilities. Following [36], we assume that \(\max_{i}p_{i}^{y,true}\), for y∈{0,1}, is known within a ±5% interval (in practice this can come from existing population statistics). Two simulation scenarios are considered: one assuming complete knowledge of the optimal precision factors [36], \(\alpha_{0}^{y}=\sum_{i=1}^{b}\alpha_{i}^{y}\), y∈{0,1}, for the prior construction methods (oracle precision factor), and the other estimating the optimal precision factor from the observed data itself. Two class prior probabilities, c=0.6 and c=0.5, are considered. Given c and the true class-conditional SSDs of the two classes, the corresponding Bayes error is the best performance that any classification rule for that classification problem (feature-label distribution) can achieve. Fixing c and the true class-conditional bin probabilities, n sample points are taken by stratified sampling (n_{0}=⌈cn⌉ sample points from class 0 and n_{1}=n−n_{0} sample points from class 1) for prior construction (if used by the method), classifier training, and posterior distribution calculation. The true classification error of the designed classifier is then calculated for all classification methods.
The distance of the posterior mean from the true parameter (the true steady-state bin probability vector) is calculated as \(\sum_{y=0}^{1}\left\|\boldsymbol{\alpha}^{y\ast}/\alpha_{0}^{y\ast}-\boldsymbol{p}^{y}\right\|_{2}\), where α^{y∗} and p^{y} denote the parameters of the posterior distribution and the true bin probability vector for class y, respectively. For each fixed c and n, 800 Monte Carlo repetitions are performed to calculate the expected classification errors and the posterior distances from the true parameters for each parameter setup. For REMLP and MKDIP-R, which use a fraction of the data in their prior construction procedures, 10 data points from each class are used for prior construction, and all data points are used for inference and posterior calculation (the number of data points used for prior construction is not fine-tuned here; a small number is chosen to avoid overfitting). The overall procedure for a fixed classification problem and a fixed sample size n in each Monte Carlo repetition is as follows:
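The distance metric above is straightforward to compute once the posterior Dirichlet parameters are available; a minimal sketch (helper name and toy numbers are illustrative):

```python
import numpy as np

def posterior_mean_distance(alpha_post, p_true):
    """Sum over the two classes of the Euclidean distance between the
    posterior mean of the bin probabilities and the true bin vector.

    `alpha_post[y]` are the posterior Dirichlet parameters for class y, whose
    mean is alpha / alpha_0; `p_true[y]` is the true steady-state bin vector.
    """
    total = 0.0
    for y in (0, 1):
        a = np.asarray(alpha_post[y], dtype=float)
        total += np.linalg.norm(a / a.sum() - np.asarray(p_true[y]))
    return total

# Toy example: posterior mean equals the truth for class 0, is off for class 1
d = posterior_mean_distance(
    {0: [2, 2], 1: [3, 1]},
    {0: [0.5, 0.5], 1: [0.5, 0.5]},
)
print(d)
```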

1. The true bin probabilities p^{0} and p^{1} are fixed.
2. n_{0} and n_{1} are determined from c as n_{0}=⌈cn⌉ and n_{1}=n−n_{0}.
3. Observations (training data) are randomly sampled from the multinomial distribution of each class, i.e. \((U^{y}_{1},\ldots,U^{y}_{b})\sim \mathcal{M}ult(\boldsymbol{p}^{y};n_{y})\) for y∈{0,1}.
4. 10 data points are randomly taken from the training data of each class for use by the prior construction methods that utilize partial data (REMLP and MKDIP-R).
5. All the classification rules are trained based on their constructed priors (if applicable to the rule) and the training data.
6. The classification errors of the trained classifiers are computed using p^{0} and p^{1}. For the Bayesian methods, the distance of the posterior mean from the true parameters (the true bin probabilities p^{0} and p^{1}) is also calculated.
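The steps above can be sketched for one Monte Carlo repetition; as a stand-in for the trained classifiers we use the histogram rule (one of the compared baselines), since its true error is easy to compute from p^{0}, p^{1}, and c. The toy bin probabilities are illustrative:

```python
import numpy as np

rng = np.random.default_rng(0)

def one_repetition(p0, p1, c, n):
    """One repetition of the evaluation loop: stratified sampling, training,
    and exact true-error computation for a histogram-rule classifier.

    n0 = ceil(c*n) points come from class 0 and n1 = n - n0 from class 1,
    each drawn from the class-conditional multinomial.
    """
    p0, p1 = np.asarray(p0), np.asarray(p1)
    n0 = int(np.ceil(c * n))
    n1 = n - n0
    u0 = rng.multinomial(n0, p0)          # bin counts, class 0
    u1 = rng.multinomial(n1, p1)          # bin counts, class 1
    # histogram rule: label each bin by the larger prior-weighted estimate
    predict_one = c * u0 / max(n0, 1) < (1 - c) * u1 / max(n1, 1)
    # true error: probability mass of the losing class in each bin
    err = np.where(predict_one, c * p0, (1 - c) * p1).sum()
    return err

err = one_repetition(p0=[0.7, 0.2, 0.1], p1=[0.1, 0.2, 0.7], c=0.6, n=20)
print(err)
```

Averaging `err` over many repetitions estimates the expected true error reported in the tables.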
The regularization parameter λ_{1} is set to 0.5, and λ_{2} is set to 0.25 and 0.5 for the mammalian cell-cycle and TP53 classification problems, respectively. The expected classification errors and the posterior mean distances from the true parameters for the mammalian cell-cycle network are shown in Tables 4 and 6, respectively; Tables 5 and 7 contain the corresponding results for the TP53 network.
The best performance for each sample size (lowest error in Tables 4 and 5, lowest distance in Tables 6 and 7) is shown in bold. For the mammalian cell-cycle network, the MKDIP methods show the best (or as good as the best) performance in all scenarios in terms of both expected classification error and posterior parameter estimation. For the TP53 network, the MKDIP methods show the best performance in posterior parameter estimation and are competitive with the previous knowledge-driven prior construction methods in terms of expected classification error.
Performance comparison in mixture setup
The performance of the OBC with different prior construction methods, including OBC with the Jeffreys’ prior, OBC with the prior construction methods of [36] (RMEP, RMDIP, REMLP), and OBC with the general framework of constraints (MKDIP-E, MKDIP-D, MKDIP-R), is further compared in the mixture setup with missing labels, for both the mammalian cell-cycle and TP53 systems. In addition, the OBC with a prior distribution centered on the true parameters with relatively low variance (abbreviated as PDCOTP in Tables 8 and 9) is considered as a baseline, though it is not a practical method. As in the classification problems, we assume that only two components (classes) exist, normal and mutated (cancerous). Here, c_{0} is fixed at 0.6 (c_{1}=1−c_{0}=0.4), but the sampling is not stratified. The component-conditional SSDs (bin probabilities) for the two components are the same as the class-conditional SSDs in the classification problem.
For each sample point, first the label y is generated from a Bernoulli distribution with success probability c_{1}, and then the bin observation is generated given the label from the corresponding class-conditional SSD (class-conditional bin probability vector p^{y}); i.e. the bin observation is a sample from a categorical distribution with parameter vector p^{y}, but the label is hidden from the inference chain and classifier training. n sample points are generated and fed into the Gibbs inference chain with the different priors from the different prior construction methods. The OBC is then calculated based on Eq. (9). For each sample size, 400 Monte Carlo repetitions are performed to calculate the expected true error and the error of classifying the unlabeled observed data used for the inference itself.
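A minimal two-component Gibbs chain for this multinomial mixture can be sketched as follows. This is an illustrative sampler under the stated conjugate model (Dirichlet priors on the bin probabilities and on the mixture weights), not the paper's exact inference code; hyperparameters and the toy data are assumptions:

```python
import numpy as np

rng = np.random.default_rng(1)

def gibbs_mixture(obs, alpha, phi, n_iter=500, burn_in=100):
    """Gibbs chain for a two-component multinomial mixture with hidden labels.

    `obs` holds the bin index of each sample point, `alpha[y]` the Dirichlet
    hyperparameters of component y's bin probabilities, and `phi` the
    Dirichlet hyperparameters of the mixture weights. Labels, bin
    probabilities, and weights are resampled in turn; the posterior mean of
    the bin probabilities is averaged after burn-in.
    """
    obs = np.asarray(obs)
    b = len(alpha[0])
    labels = rng.integers(0, 2, size=len(obs))
    p_mean = np.zeros((2, b))
    kept = 0
    for t in range(n_iter):
        # sample weights and component parameters given current labels
        c = rng.dirichlet(phi + np.bincount(labels, minlength=2))
        p = [rng.dirichlet(alpha[y] + np.bincount(obs[labels == y], minlength=b))
             for y in (0, 1)]
        # sample labels given parameters
        w0 = c[0] * p[0][obs]
        w1 = c[1] * p[1][obs]
        labels = (rng.random(len(obs)) < w1 / (w0 + w1)).astype(int)
        if t >= burn_in:
            p_mean += np.vstack(p)
            kept += 1
    return p_mean / kept

obs = [0, 0, 1, 2, 2, 2, 0, 1]
post = gibbs_mixture(obs, alpha=[np.ones(3), np.ones(3)], phi=np.ones(2))
print(post.round(2))
```

Label switching between the two components is a known issue for such chains; informative, asymmetric priors (as constructed by MKDIP) mitigate it by anchoring each component.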
To have a fair comparison of the different methods’ class-conditional prior construction, we assume a rough idea of the mixture weights (class probabilities); in practice this can come from existing population statistics. That is, the parameters ϕ of the Dirichlet prior distribution over the mixture weights, \(\mathcal{D}(\boldsymbol{\phi})\), are sampled in each repetition from a uniform distribution centered on the true mixture weight vector with a ±10% interval, and fixed for all the methods in that repetition. For REMLP and MKDIP-R, which need labeled data in their prior construction procedures, the labels predicted using the Jeffreys’ prior are used; one fourth of the data points are used for prior construction for these two methods, and all points for inference. A larger number of data points is used in prior construction in the mixture setup than in the classification setup because the data points are missing their true class labels and the initial label estimates may be inaccurate; a relatively larger number can then be used while still avoiding overfitting. The regularization parameters λ_{1} and λ_{2} are set as in the classification problem. Optimal precision factors are used for all prior construction methods. The results are shown in Tables 8 and 9 for the mammalian cell-cycle and TP53 models, respectively. The best performance (lowest error) for each sample size, and the best performance among the practical methods (all except PDCOTP) if different, are shown in bold. As can be seen from the tables, in most cases the MKDIP methods perform best among the practical methods. With larger sample sizes, MKDIP-R even outperforms PDCOTP on the mammalian cell-cycle system.
Performance comparison on a real data set
In this section the performance of the proposed methods is examined on a publicly available gene expression dataset. We consider the classification of two subtypes of non-small cell lung cancer (NSCLC): lung adenocarcinoma (LUA) versus lung squamous cell carcinoma (LUS). Lung cancer is the second most commonly diagnosed cancer and the leading cause of cancer death in both men and women in the United States [59]. About 84% of lung cancers are NSCLC [59], and LUA and LUS combined account for about 70% of lung cancers based on the American Cancer Society statistics for NSCLC. We downloaded the LUA and LUS datasets (both labeled as TCGA provisional) in the form of mRNA expression z-scores (based on RNA-Seq profiling) from the public database cBioPortal [60, 61] for the patient sets tagged as “All Complete Tumors”, denoting the set of all tumor samples that have mRNA and sequencing data. The two datasets for LUA and LUS consist of 230 and 177 sample points, respectively. We quantized the data into binary levels by the following preprocessing steps. First, to remove per-patient bias, each patient’s data are normalized by the mean of the z-scores of a randomly selected subset (half the size of the list) of the recurrently mutated genes from MutSig [62] (directly provided by cBioPortal). Then, a two-component Gaussian mixture model is fit to each gene in each dataset, and the normalized data are quantized by assignment to one component, namely 0 or 1 (1 being the component with the higher mean). We confine the feature set to {EGFR, PIK3CA, AKT, KRAS, RAF1, BAD, P53, BCL2}, which are among the genes in the signaling pathways most relevant to NSCLC [63]. These genes are altered, in different forms, in 86% and 89% of the sequenced LUA and LUS tumor samples on cBioPortal, respectively. Since the feature set consists of 8 genes, there are 256 bins in this classification setting.
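The per-gene quantization step can be sketched with scikit-learn's `GaussianMixture`; the helper name, the toy values, and the fixed random seed are illustrative assumptions, not the paper's code:

```python
import numpy as np
from sklearn.mixture import GaussianMixture

def binarize_gene(zscores):
    """Quantize one gene's (normalized) expression values into {0, 1} by
    fitting a two-component Gaussian mixture, as in the preprocessing above.

    The component with the higher mean is labeled 1, so the output does not
    depend on the arbitrary order in which the components were fit.
    """
    x = np.asarray(zscores, dtype=float).reshape(-1, 1)
    gmm = GaussianMixture(n_components=2, random_state=0).fit(x)
    labels = gmm.predict(x)
    high = int(np.argmax(gmm.means_.ravel()))
    return (labels == high).astype(int)

# Two well-separated clumps of values: the higher clump maps to 1
vals = [-2.1, -1.9, -2.0, 1.8, 2.2, 2.0]
print(binarize_gene(vals))
```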
The pathways relevant to the NSCLC classification problem considered here are collected from the KEGG [64, 65] NSCLC and PI3K-AKT signaling pathways, and also from [63], as shown in Fig. 4. The corresponding regulating functions are shown in Table 10.
The informative prior construction methods utilize the knowledge in the pathways in Fig. 4, and the MKDIP methods also use the regulating relationships in Table 10 to construct prior distributions. The incidence rates of the two subtypes, LUA and LUS, vary with demographic factors. Here, we approximate the class probability c=P(Y=LUA) as c≈0.57, based on the latest American Cancer Society statistics for NSCLC and on a weighted average of the rates for 11 countries given in [66]. In each Monte Carlo repetition, n sample points are randomly taken by stratified sampling, i.e. n_{0}=⌈cn⌉ and n_{1}=n−n_{0} sample points from the preprocessed LUA (class 0) and LUS (class 1) datasets, respectively, for prior construction (if used by the method) and classifier training, and the remaining sample points are held out for error estimation. For each n, 400 Monte Carlo repetitions are performed to calculate the expected classification error. In the prior construction methods, the optimization is first solved for both classes with precision factors \(\alpha^{y}_{0}=200\), y∈{0,1}, and then the optimal values of the precision factors are estimated from the training points. For REMLP and MKDIP-R, which use a fraction of the training data in their prior construction procedures, min(20, max(6, ⌊0.25n_{y}⌋)) sample points from the training data of each class (y∈{0,1}) are used for prior construction, and all the training data are used for inference. The regularization parameters λ_{1} and λ_{2} are set to 0.5 and 0.25, respectively. The results are shown in Table 11, where Best Non-Bayesian denotes the best performance among Hist, CART, RF, and SVM; Best RM the best among RMEP, RMDIP, and REMLP; and Best MKDIP the best among the MKDIP methods.
The best-performing rule for each sample size is shown in bold. As can be seen from the table, OBC with the MKDIP prior construction methods performs best among the classification rules. It is also clear that classification performance can be significantly improved when pathway prior knowledge is integrated into the construction of prior probabilities, especially when the sample size is small.
Implementation remarks
The results presented in this paper are based on Monte Carlo simulations in which thousands of optimization problems are solved for each sample size in each problem. Thus, the regularization parameters and the number of sample points used in prior construction are preselected for each problem; in a specific application, one can use cross-validation to set these parameters. It has been shown in [36] that, assuming precision factors greater than 1 (\(\alpha_{0}^{y}>1\), y∈{0,1}), all three objective functions used here are convex for the class of Dirichlet priors with multinomial likelihood functions. Unfortunately, we cannot guarantee the convexity of the feasible space owing to the convolved constraints. We have therefore employed algorithms for non-convex optimization problems, and there is no guarantee of convergence to the global optimum. The optimization framework of the prior construction is solved with the interior-point algorithm for nonlinear constrained optimization [67, 68] implemented in the fmincon function in MATLAB. Because our interest is in classification problems with small training sample sizes (often the case in bioinformatics), and because of the cost of Monte Carlo simulation, we have shown performance results only on small networks with a few genes. In practice, there is no problem using the proposed method for larger networks, since only a single one-time analysis would then be required. One should also note that with small sample sizes, feature selection is needed to keep the number of features small; in the experiments in this paper, feature selection is done implicitly by focusing on the most relevant network according to biological prior knowledge.
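As a rough Python analog of that MATLAB setup, the shape of the optimization can be sketched with SciPy's `trust-constr` method (an interior-point-style solver for nonlinear constraints). The entropy-like objective, the precision factor 20, the 4-bin two-gene state space, and the single knowledge constraint are all illustrative stand-ins for the paper's actual objectives and constraint sets:

```python
import numpy as np
from scipy.optimize import minimize, NonlinearConstraint, LinearConstraint

# Stand-in prior-construction problem: choose Dirichlet hyperparameters alpha
# over 4 bins (genes X1, X2 in lexicographic order) with a fixed precision
# factor, subject to an MKDIP-style constraint E[P(X1 = 1 | X2 = 1)] >= 1 - eps.
alpha0 = 20.0
eps = 0.1

def objective(a):
    # negative entropy of the prior mean: minimizing it spreads mass over bins
    m = a / a.sum()
    return np.sum(m * np.log(m))

def cond_prob(a):
    # E[P(X1=1 | X2=1)] = alpha(1,1) / (alpha(0,1) + alpha(1,1))
    return a[3] / (a[1] + a[3])

res = minimize(
    objective,
    x0=np.full(4, alpha0 / 4),
    method="trust-constr",
    bounds=[(1e-6, alpha0)] * 4,
    constraints=[
        LinearConstraint(np.ones(4), alpha0, alpha0),   # fixed precision factor
        NonlinearConstraint(cond_prob, 1 - eps, 1.0),   # knowledge constraint
    ],
)
print(res.x.round(2), round(cond_prob(res.x), 3))
```

With nonlinear constraints of this kind the feasible set need not be convex, which is why, as noted above, convergence to the global optimum cannot be guaranteed.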
Conclusion
Bayesian methods have shown promising performance in classification problems with uncertainty and small sample sizes, which often occur in translational genomics. The main impediment to using these methods is constructing priors that integrate existing biological knowledge. In this paper we have proposed a knowledge-driven prior construction method with a general framework for mapping prior biological knowledge into a set of constraints. Knowledge can come from biological signaling pathways and other population studies, and be translated into constraints on conditional probabilities. This general scheme subsumes our previous approaches to using biological prior knowledge in prior construction. The superior performance of the scheme is shown on two important pathway families, the mammalian cell-cycle pathway and a pathway centered on TP53. In addition, prior construction and the OBC are extended to a mixture model in which data sets have missing labels. Moreover, comparisons on a publicly available gene expression dataset show that classification performance can be significantly improved for small sample sizes when corresponding pathway prior knowledge is integrated into the construction of prior probabilities.
References
1. Dougherty ER, Zollanvari A, Braga-Neto UM. The illusion of distribution-free small-sample classification in genomics. Current Genomics. 2011; 12(5):333.
2. Dougherty ER, Dalton LA. Scientific knowledge is possible with small-sample classification. EURASIP J Bioinforma Syst Biol. 2013; 2013(1):1–12.
3. Jaynes ET. What is the question? In: Bernardo JM, de Groot MH, Lindley DV, Smith AFM, editors. Bayesian Stat. Valencia: Valencia Univ. Press: 1980. p. 618–29.
4. Jeffreys H. An invariant form for the prior probability in estimation problems. Proc Royal Soc London Ser A Math Phys Sci. 1946; 186(1007):453–61.
5. Zellner A. Past and recent results on maximal data information priors. Working paper series in economics and econometrics. University of Chicago, Graduate School of Business, Department of Economics, Chicago. 1995.
6. Rissanen J. A universal prior for integers and estimation by minimum description length. Ann Stat. 1983; 11(2):416–31.
7. Rodríguez CC. Entropic priors. Albany: Department of Mathematics and Statistics, State University of New York; 1991.
8. Berger JO, Bernardo JM. On the development of reference priors. Bayesian Stat. 1992; 4(4):35–60.
9. Spall JC, Hill SD. Least-informative Bayesian prior distributions for finite samples based on information theory. Autom Control IEEE Trans. 1990; 35(5):580–3.
10. Bernardo JM. Reference posterior distributions for Bayesian inference. J Royal Stat Soc Ser B Methodol. 1979; 41(2):113–47.
11. Kass RE, Wasserman L. The selection of prior distributions by formal rules. J Am Stat Assoc. 1996; 91(435):1343–70.
12. Berger JO, Bernardo JM, Sun D. Objective priors for discrete parameter spaces. J Am Stat Assoc. 2012; 107(498):636–48.
13. Jaynes ET. Information theory and statistical mechanics. Physical Rev. 1957; 106(4):620.
14. Jaynes ET. Prior probabilities. Syst Sci Cybern IEEE Trans. 1968; 4(3):227–41.
15. Zellner A. Models, prior information, and Bayesian analysis. J Econ. 1996; 75(1):51–68.
16. Burg JP, Luenberger DG, Wenger DL. Estimation of structured covariance matrices. Proc IEEE. 1982; 70(9):963–74.
17. Werner K, Jansson M, Stoica P. On estimation of covariance matrices with Kronecker product structure. Signal Proc IEEE Trans. 2008; 56(2):478–91.
18. Wiesel A, Hero AO. Distributed covariance estimation in Gaussian graphical models. Signal Proc IEEE Trans. 2011; 60(1):211–20.
19. Wiesel A, Eldar YC, Hero AO. Covariance estimation in decomposable Gaussian graphical models. Signal Process IEEE Trans. 2010; 58(3):1482–92.
20. Breslin T, Krogh M, Peterson C, Troein C. Signal transduction pathway profiling of individual tumor samples. BMC Bioinforma. 2005; 6(1):163.
21. Zhu Y, Shen X, Pan W. Network-based support vector machine for classification of microarray samples. BMC Bioinforma. 2009; 10(1):21.
22. Svensson JP, Stalpers LJ, Esveldt-van Lange RE, Franken NA, Haveman J, Klein B, Turesson I, Vrieling H, Giphart-Gassler M. Analysis of gene expression using gene sets discriminates cancer patients with and without late radiation toxicity. PLoS Med. 2006; 3(10):422.
23. Lee E, Chuang HY, Kim JW, Ideker T, Lee D. Inferring pathway activity toward precise disease classification. PLoS Comput Biol. 2008; 4(11):1000217.
24. Su J, Yoon BJ, Dougherty ER. Accurate and reliable cancer classification based on probabilistic inference of pathway activity. PLoS ONE. 2009; 4(12):8161.
25. Eo HS, Heo JY, Choi Y, Hwang Y, Choi HS. A pathway-based classification of breast cancer integrating data on differentially expressed genes, copy number variations and microRNA target genes. Mol Cells. 2012; 34(4):393–8.
26. Wen Z, Liu ZP, Yan Y, Piao G, Liu Z, Wu J, Chen L. Identifying responsive modules by mathematical programming: an application to budding yeast cell cycle. PLoS ONE. 2012; 7(7):41854.
27. Kim S, Kon M, DeLisi C, et al. Pathway-based classification of cancer subtypes. Biology Direct. 2012; 7(1):1–22.
28. Khunlertgit N, Yoon BJ. Identification of robust pathway markers for cancer through rank-based pathway activity inference. Advances Bioinforma. 2013; Article ID 618461:8.
29. Wei P, Pan W. Incorporating gene networks into statistical tests for genomic data via a spatially correlated mixture model. Bioinforma. 2007; 24(3):404–11.
30. Wei P, Pan W. Network-based genomic discovery: application and comparison of Markov random-field models. J Royal Stat Soc Ser C Appl Stat. 2010; 59(1):105–25.
31. Wei P, Pan W. Bayesian joint modeling of multiple gene networks and diverse genomic data to identify target genes of a transcription factor. Annals Appl Stat. 2012; 6(1):334–55.
32. Gatza ML, Lucas JE, Barry WT, Kim JW, Wang Q, Crawford MD, Datto MB, Kelley M, Mathey-Prevot B, Potti A, et al. A pathway-based classification of human breast cancer. Proc Natl Acad Sci. 2010; 107(15):6994–9.
33. Nevins JR. Pathway-based classification of lung cancer: a strategy to guide therapeutic selection. Proc Am Thoracic Soc. 2011; 8(2):180.
34. Wen Z, Liu ZP, Liu Z, Zhang Y, Chen L. An integrated approach to identify causal network modules of complex diseases with application to colorectal cancer. J Am Med Inform Assoc. 2013; 20(4):659–67.
35. Esfahani MS, Dougherty ER. Incorporation of biological pathway knowledge in the construction of priors for optimal Bayesian classification. IEEE/ACM Trans Comput Biol Bioinforma. 2014; 11(1):202–18.
36. Esfahani MS, Dougherty ER. An optimization-based framework for the transformation of incomplete biological knowledge into a probabilistic structure and its application to the utilization of gene/protein signaling pathways in discrete phenotype classification. IEEE/ACM Trans Comput Biol Bioinforma. 2015; 12(6):1304–21.
37. Boluki S, Esfahani MS, Qian X, Dougherty ER. Constructing pathway-based priors within a Gaussian mixture model for Bayesian regression and classification. IEEE/ACM Trans Comput Biol Bioinforma. 2017. In press.
38. Guiasu S, Shenitzer A. The principle of maximum entropy. Math Intell. 1985; 7(1):42–8.
39. Hua J, Sima C, Cypert M, Gooden GC, Shack S, Alla L, Smith EA, Trent JM, Dougherty ER, Bittner ML. Tracking transcriptional activities with high-content epifluorescent imaging. J Biomed Opt. 2012; 17(4):046008.
40. Dalton LA, Dougherty ER. Optimal classifiers with minimum expected error within a Bayesian framework. Part I: discrete and Gaussian models. Pattern Recog. 2013; 46(5):1301–14.
41. Dalton LA, Dougherty ER. Optimal classifiers with minimum expected error within a Bayesian framework. Part II: properties and performance analysis. Pattern Recog. 2013; 46(5):1288–300.
 42
Dalton LA, Dougherty ER. Bayesian minimum meansquare error estimation for classification error–part I: Definition and the bayesian MMSE error estimator for discrete classification. Signal Process IEEE Trans. 2011; 59(1):115–29.
 43
MacKay DJC. Introduction to Monte Carlo methods In: Jordan MI, editor. Learning in Graphical Models. NATO Science Series. Dordrecht: Kluwer Academic Press: 1998. p. 175–204.
 44
Casella G, George EI. Explaining the Gibbs sampler. Am Stat. 1992; 46(3):167–74.
 45
Robert CP, Casella G. Monte Carlo Statistical Methods. New York: Springer; 2004.
 46
Zellner A. Maximal Data Information Prior Distributions, Basic Issues in Econometrics. Chicago: The University of Chicago Press; 1984.
 47
Ebrahimi N, Maasoumi E, Soofi ES. In: Slottje DJ, (ed).Measuring Informativeness of Data by Entropy and Variance. Heidelberg: PhysicaVerlag HD; 1999, pp. 61–77.
 48
Dougherty ER, Brun M, Trent JM, Bittner ML. Conditioningbased modeling of contextual genomic regulation. Comput Biol Bioinforma IEEE/ACM Trans. 2009; 6(2):310–20.
 49
Kauffman SA. Metabolic stability and epigenesis in randomly constructed genetic nets. J Theor Biol. 1969; 22(3):437–67.
 50
Shmulevich I, Dougherty ER, Kim S, Zhang W. Probabilistic Boolean networks: a rulebased uncertainty model for gene regulatory networks. Bioinforma. 2002; 18(2):261.
 51
Fauré A, Naldi A, Chaouiya C, Thieffry D. Dynamical analysis of a generic boolean model for the control of the mammalian cell cycle. Bioinformatics. 2006; 22(14):124.
 52
Weinberg R. The Biology of Cancer. New York: Garland science; 2013.
 53
Esfahani MS, Yoon BJ, Dougherty ER. Probabilistic reconstruction of the tumor progression process in gene regulatory networks in the presence of uncertainty. BMC Bioinformatics. 2011; 12(10):9.
 54
Layek RK, Datta A, Dougherty ER. From biological pathways to regulatory networks. Mol BioSyst. 2011; 7:843–51.
 55
Breiman L, Friedman J, Stone CJ, Olshen RA. Classification and Regression Trees. Boca Raton: Chapman & Hall/CRC; 1984.
 56
Breiman L. Random forests. Machine Learning. 2001; 45(1):5–32.
 57
Cortes C, Vapnik V. Supportvector networks. Machine Learning. 1995; 20(3):273–97.
 58
Kecman V. Learning and Soft Computing: Support Vector Machines, Neural Networks, and Fuzzy Logic Models. Cambridge: MIT Press; 2001.
 59
American Cancer Society. Cancer Facts and Figures 2017. Atlanta: American Cancer Society; 2017.
 60
Gao J, Aksoy BA, Dogrusoz U, Dresdner G, Gross B, Sumer SO, Sun Y, Jacobsen A, Sinha R, Larsson E, Cerami E, Sander C, Schultz N. Integrative analysis of complex cancer genomics and clinical profiles using the cBioPortal. Science Signaling. 2013; 6(269):1–1.
 61
Cerami E, Gao J, Dogrusoz U, Gross BE, Sumer SO, Aksoy BA, Jacobsen A, Byrne CJ, Heuer ML, Larsson E, Antipin Y, Reva B, Goldberg AP, Sander C, Schultz N. The cBio cancer genomics portal: An open platform for exploring multidimensional cancer genomics data. Cancer Discov. 2012; 2(5):401–4.
 62
Lawrence MS, Stojanov P, Polak P, Kryukov GV, Cibulskis K, Sivachenko A, Carter SL, Stewart C, Mermel CH, Roberts SA, et al. Mutational heterogeneity in cancer and the search for new cancerassociated genes. Nature. 2013; 499(7457):214–8.
 63
West L, Vidwans SJ, Campbell NP, Shrager J, Simon GR, Bueno R, Dennis PA, Otterson GA, Salgia R. A novel classification of lung cancer into molecular subtypes. PLOS ONE. 2012; 7(2):1–11.
 64
Kanehisa M, Goto S. KEGG: kyoto encyclopedia of genes and genomes. Nucleic Acids Res. 2000; 28(1):27–30.
 65
Kanehisa M, Sato Y, Kawashima M, Furumichi M, Tanabe M. KEGG as a reference resource for gene and protein annotation. Nucleic Acids Res. 2016; 44(D1):457–62.
 66
LortetTieulent J, Soerjomataram I, Ferlay J, Rutherford M, Weiderpass E, Bray F. International trends in lung cancer incidence by histological subtype: Adenocarcinoma stabilizing in men but still increasing in women. Lung Cancer. 2014; 84(1):13–22.
 67
Waltz RA, Morales JL, Nocedal J, Orban D. An interior algorithm for nonlinear optimization that combines line search and trust region steps. Math Program. 2006; 107(3):391–408.
 68
Byrd RH, Hribar ME, Nocedal J. An interior point algorithm for largescale nonlinear programming. SIAM J Optim. 1999; 9(4):877–900.
Acknowledgements
Not applicable.
Funding
This work was funded in part by Award CCF-1553281 from the National Science Foundation and by a DMREF grant from the National Science Foundation, award number 1534534. The publication cost of this article was funded by Award CCF-1553281 from the National Science Foundation.
Availability of data and materials
The publicly available real datasets analyzed during the current study have been generated by the TCGA Research Network https://cancergenome.nih.gov/, and have been procured from http://www.cbioportal.org/.
About this supplement
This article has been published as part of BMC Bioinformatics Volume 18 Supplement 14, 2017: Proceedings of the 14th Annual MCBIOS conference. The full contents of the supplement are available online at https://bmcbioinformatics.biomedcentral.com/articles/supplements/volume18supplement14.
Authors’ contributions
SB developed the mixture-model methodology and the extraction of knowledge from pathways and regulating functions, performed the experiments, and wrote the first draft. MSE structured the prior knowledge by integrating his previous prior-construction methods into this new framework. XQ, in conjunction with ERD, proposed the new general prior structure and proofread and edited the manuscript. ERD oversaw the project, proposed the new general prior structure in conjunction with XQ, wrote the OBC section, and proofread and edited the manuscript. All authors have read and approved the final manuscript.
Ethics approval and consent to participate
Not applicable.
Consent for publication
Not applicable.
Competing interests
The authors declare that they have no competing interests.
Rights and permissions
Open Access This article is distributed under the terms of the Creative Commons Attribution 4.0 International License (http://creativecommons.org/licenses/by/4.0/), which permits unrestricted use, distribution, and reproduction in any medium, provided you give appropriate credit to the original author(s) and the source, provide a link to the Creative Commons license, and indicate if changes were made. The Creative Commons Public Domain Dedication waiver (http://creativecommons.org/publicdomain/zero/1.0/) applies to the data made available in this article, unless otherwise stated.
About this article
Cite this article
Boluki, S., Esfahani, M.S., Qian, X. et al. Incorporating biological prior knowledge for Bayesian learning via maximal knowledge-driven information priors. BMC Bioinformatics 18, 552 (2017). https://doi.org/10.1186/s12859-017-1893-4
Keywords
 Optimal Bayesian classification
 Prior construction
 Biological pathways
 Probabilistic Boolean networks