 Methodology Article
 Open access
A method combining a random forest-based technique with the modeling of linkage disequilibrium through latent variables, to run multilocus genome-wide association studies
BMC Bioinformatics volume 19, Article number: 106 (2018)
Abstract
Background
Genome-wide association studies (GWASs) have been widely used to discover the genetic basis of complex phenotypes. However, standard single-SNP GWASs suffer from a lack of power. In particular, they do not directly account for linkage disequilibrium, that is, the dependences between SNPs (Single Nucleotide Polymorphisms).
Results
We present a comparative study of two multilocus GWAS strategies, in the random forest-based framework. The first method, T-Trees, was designed by Botta and collaborators (Botta et al., PLoS ONE 9(4):e93379, 2014). We designed the other method, an innovative hybrid method combining T-Trees with the modeling of linkage disequilibrium. Linkage disequilibrium is modeled through a collection of tree-shaped Bayesian networks with latent variables, following our former works (Mourad et al., BMC Bioinformatics 12(1):16, 2011). We compared the two methods on both simulated and real data. For dominant and additive genetic models, in each of the conditions simulated, the hybrid approach always performs slightly better than T-Trees. We assessed predictive powers through the standard ROC technique on 14 real datasets. For 10 of the 14 datasets analyzed, the already high predictive power observed for T-Trees (0.910–0.946) can still be increased by up to 0.030. We also assessed whether the distributions of SNPs’ scores obtained from T-Trees and the hybrid approach differed. Finally, we thoroughly analyzed the intersections of the top 100 SNPs output by any two or all three methods amongst T-Trees, the hybrid approach, and the single-SNP method.
Conclusions
The sophistication of T-Trees through finer linkage disequilibrium modeling is shown to be beneficial. The distributions of SNPs’ scores generated by T-Trees and the hybrid approach are shown to be statistically different, which suggests the complementarity of the methods. In particular, for 12 of the 14 real datasets, the distribution tail of the highest SNPs’ scores shows larger values for the hybrid approach. The hybrid approach thus pinpoints more interesting SNPs than T-Trees does, to be provided as a short list of prioritized SNPs for further analysis by biologists. Finally, among the 211 top-100 SNPs jointly detected by the single-SNP method, T-Trees and the hybrid approach over the 14 datasets, we identified 72 and 38 SNPs respectively present in the top 25s and top 10s of each method.
Background
The etiology of genetic diseases may be elucidated by localizing genes conferring disease susceptibility and by the subsequent biological characterization of these genes. Searching the genome for small DNA variations that occur more frequently in subjects with a particular disease (cases) than in unaffected individuals is the key to association studies. These DNA variations are observed at characterized locations, or loci, of the genome, also called genetic markers. Nowadays, genotyping technologies allow the description of case and control cohorts (a few thousand to ten thousand individuals) at the genome scale (hundreds of thousands to a few million genetic markers such as Single Nucleotide Polymorphisms (SNPs)). The search for associations (i.e. statistical dependences) between one or several of the markers and the disease is called an association study. Genome-wide association studies (GWASs) are also expected to help identify DNA variations that affect a subject’s response to drugs or influence interactions between genotype and environment in a way that may contribute to the onset of a given disease. Thus, improvement in the prediction of diseases, patient care and the achievement of personalized medicine are three major aims of GWASs applied to biomedical research.
Exploiting the existence of statistical dependences between neighboring SNPs is the key to association studies [1, 2]. Statistical dependences within genetic data define linkage disequilibrium (LD). To perform GWASs, geneticists rely on a set of genetic markers, say SNPs, that cover the whole genome and are observed for any genotyped individual of a studied population. However, it is highly unlikely that a causal variant (i.e. a genetic factor) coincides with a SNP. Nevertheless, due to LD, a statistical dependence is expected between any SNP that flanks the unobserved genetic factor and the latter. On the other hand, by definition, a statistical dependence exists between the genetic factor responsible for the disease and this disease. Thus, a statistical dependence is also expected between the flanking SNP and the studied disease.
A standard single-SNP GWAS considers each SNP on its own and tests it for association with the disease. GWASs considering binary affected/unaffected phenotypes rely on standard contingency table tests (chi-square test, likelihood ratio test, Fisher’s exact test). Linear regression is broadly used for quantitative phenotypes.
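As an illustration, the chi-square contingency test mentioned above can be sketched in a few lines of Python (a minimal, self-contained version for didactic purposes; production GWAS toolkits implement this far more efficiently). For a 2×3 case/control × genotype table, the degrees of freedom equal (2−1)×(3−1) = 2, for which the chi-square survival function reduces to exp(−x/2), so no statistics library is needed:

```python
import math

def chi_square_genotype_test(cases, controls):
    """Pearson chi-square test on a 2x3 case/control x genotype table.

    cases, controls: genotype counts (AA, Aa, aa) in each cohort.
    Returns (statistic, p_value); with df = 2 the chi-square survival
    function is exactly exp(-x/2).
    """
    table = [cases, controls]
    row_totals = [sum(r) for r in table]
    col_totals = [sum(col) for col in zip(*table)]
    n = sum(row_totals)
    stat = 0.0
    for i, row in enumerate(table):
        for j, obs in enumerate(row):
            expected = row_totals[i] * col_totals[j] / n
            stat += (obs - expected) ** 2 / expected
    return stat, math.exp(-stat / 2)

# A SNP whose minor allele is enriched in cases yields a small p-value.
stat, p = chi_square_genotype_test(cases=[200, 500, 300], controls=[350, 480, 170])
```

A table with identical genotype distributions in cases and controls yields a statistic of 0 and a p-value of 1.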
The lack of statistical power is one of the limitations of single-SNP GWASs. Thus, multilocus strategies were designed to enhance the identification of a region of the genome where a genetic factor might be present. In the scope of this article, a “multilocus” strategy has to be distinguished from strategies aiming at epistasis detection. Epistatic interactions exist within a given set of SNPs when a dependence is observed between this combination of SNPs and the studied phenotype, whereas no marginal dependence may be evidenced between the phenotype and any SNP within this combination. Underlying epistasis is the concept of biological interactions between loci acting in concert as an organic group. In this article, a multilocus GWAS approach aims at focusing on interesting regions of the genome, through a more thorough exploitation of LD than in single-SNP GWASs.
When inheriting genetic material from its parents, an individual is likely to receive entire short segments identical to its parents’, called haplotypes. Thus, as a manifestation of linkage disequilibrium, namely dependences of loci along the genome, in a short chromosome segment only a few distinct haplotypes may be observed over an entire population (see Fig. 1). Chromosomes are mosaics where the extent and conservation of mosaic pieces mostly depend on recombination and mutation rates, as well as natural selection. Thus, the human genome is highly structured into the so-called “haplotype block structure” [3].
The most basic approach in the field of multilocus strategies, haplotype testing, relies on contingency tables to study haplotype distributions in the case and control groups. The traditional haplotype-based tests used in case-control studies are goodness-of-fit tests to detect a contrast between the case and control haplotype distributions [4]. Theoretical studies have shown that multiallelic haplotype-based approaches can provide superior power to discriminate between cases and controls, compared to single-SNP GWASs, in mapping disease loci [5]. Besides, the use of haplotypes in disease association studies achieves data dimension reduction as it decreases the number of tests to be carried out.
However, one limitation is that haplotype testing requires the inference of haplotypes, or phasing, a challenging computational task at the genome scale [6, 7]. Another limitation is that when there are many haplotypes, there are many degrees of freedom and thus the power to detect association can be weak. Besides, the estimates for rare haplotypes can be prone to errors as the null distribution may not follow a chi-square distribution. To cope with these issues, some works have considered haplotype similarity to group haplotypes into clusters. Using a small number of haplotype clusters reduces the number of degrees of freedom and alleviates the inconvenience related to rare haplotypes. In this line, a variable-length Markov chain model was designed by Browning and Browning to infer localized haplotype clustering and subsequently carry out a haplotype-based GWAS [8].
To accelerate haplotype-based GWASs, some authors rely on phase-known references [9]. Allele prediction is achieved using a reference population with available haplotype information. To boost haplotype inference, Wan and co-authors only estimate haplotypes in relevant regions [10]. For this purpose, a sliding-window strategy partitions the whole genome into overlapping short windows. The relevance of each such window is analyzed through a two-locus haplotype-based test. Hardware accelerators are also used in the works reported in [11], to speed up the broadly used PHASE haplotype inference method [12].
The formidable challenge of GWASs demands algorithms that are able to cope with the size and complexity of genetic data. Machine learning approaches have been shown to be promising complements to standard single-SNP and multilocus GWASs [13, 14]. Machine learning techniques applied to GWASs encompass but are not limited to penalized regression (e.g. LASSO [15], ridge regression [16]), support vector machines [17], ensemble methods (e.g. random forests), artificial neural networks [18] and Bayesian network-based analyses [19, 20]. In particular, random forest-based methods were shown to be very attractive in the context of genetic association studies [21]. Random forest classification models can provide information on the importance of variables for classification, in our case for the classification between affected and unaffected subjects.
In this paper, we compare a variant of the random forest technique specifically designed for GWASs, T-Trees, and a novel approach combining T-Trees with the modeling of linkage disequilibrium through latent variables. The modeling relies on a probabilistic graphical framework, using the FLTM (Forest of Latent Tree Models) model. The purpose of the present work is to examine how the already high performances of T-Trees are affected when combining T-Trees with a more refined modeling of linkage disequilibrium than through blocks of contiguous SNPs, as is done in T-Trees. In our innovative proposal, linkage disequilibrium is modeled as a collection of tree-shaped Bayesian networks, each rooted in a latent variable. In this framework, these latent variables roughly play the role of haplotypes. In the remainder of this paper, we focus on binary phenotypes (i.e. affected/unaffected status).
The random forest technique lays the groundwork for an ensemble method relying on the decision tree concept. In machine learning, a decision tree is a model used for classification purposes. However, building a decision tree often entails model overfitting, with detrimental consequences on the subsequent use of this model. Breiman thus introduced the random forest concept, an ensemble method that averages predictions over a set of decision trees [22]: a random forest is thus a collection of decision trees built from variables that best discriminate between two classes. In the GWAS field, the two classes correspond to affected and unaffected statuses, and the variables involved in the trees are good candidates to explain the disease. Random forests have proven useful to analyze GWAS data [23].
However, the necessity to handle high-dimensional data has led to the proposal of variants. In [24], a two-stage procedure only allows pre-filtered SNPs as explanatory variables in the forest’s trees. Filtering separates informative and irrelevant SNPs into two groups, based on their p-values. In [25], the entire genome is randomly divided into subsets. A random forest is fit for each subset, to compute subranks for the SNPs. The final ranks of the SNPs are defined based on these subranks and are then iteratively improved.
Among the GWAS strategies focused on random forests, the works of Botta and collaborators are specific in that they attempt to acknowledge linkage disequilibrium [26]. These works have resulted in the T-Trees model, an embedded model where the nodes in the trees of a random forest are themselves trees. From now on, we will refer to meta-trees having meta-nodes, together with embedded trees and nodes. Basic biological information is integrated in these internal trees, for which the variables (SNPs) to be chosen are selected from adjacent windows of the same width covering the whole genome. However, a more refined multilocus approach can be designed, one that drops the principle of windows, to better model linkage disequilibrium. Our proposal is to combine the T-Trees approach with another machine learning model, able to infer a map of SNP clusters. Such clusters of SNPs are meant to extend the notion of haplotype blocks to genotype clusters.
Many efforts have been devoted to modeling linkage disequilibrium. To achieve this aim at the genome scale, machine learning techniques involving probabilistic graphical models have been proposed in particular (see [27] and [28] for surveys). In this line, decomposable Markov random fields have been investigated through the works on interval graph sampling and junction tree sampling of Thomas and coworkers ([29] and [30], respectively), those of Verzilli and coworkers [20] and Edwards’ works [31]. Investigations focused on Bayesian networks with latent variables have resulted in two models: the hidden Markov model of Scheet and Stephens [12] underlying the PHASE method on the one hand, and the forest of latent tree models (FLTM) developed by Mourad and coworkers [32], on the other hand.
The aim of this methodological paper is to compare the original T-Trees method proposed by Botta and collaborators to the same method augmented with more refined biological knowledge. The blocks of SNPs are replaced with clusters of SNPs resulting from the modeling of linkage disequilibrium in the first layer of the FLTM model of Mourad and coworkers. This study is necessary to assess whether the T-Trees approach with LD integration provides similar or complementary results with respect to the original T-Trees strategy. In addition, these two multilocus strategies are compared to a standard single-SNP GWAS. The comparison is performed on fourteen real GWAS datasets made available by the WTCCC (Wellcome Trust Case Control Consortium) organization (https://www.wtccc.org.uk/).
Methods
The first subsection provides a gentle introduction to the standard random forest framework. The objective is to pave the way for further explaining the workings of the more advanced T-Trees and hybrid FLTM / T-Trees methods. The second subsection presents T-Trees in a progressive way. It leads the reader through the two embedded levels (and the corresponding learning algorithms) of the T-Trees model. The FLTM model is presented in the third subsection, together with a sketch of its learning algorithm. The fourth subsection depicts the hybrid FLTM / T-Trees approach. Strong didactical concerns have motivated the unified presentation of all learning algorithms, to allow full understanding for both non-specialists and specialists. A final subsection focuses on the design of the comparative study reported in this paper.
A random forest framework to run genomewide association studies
Growing a decision tree is a supervised task involving a learning set. It is a recursive process where tree node creation is governed by cut-point identification. A cut-point is a pair involving one of the available variables, v^{∗}, and a threshold value θ. Over all available variables, this cut-point best discriminates the observations of the current learning set with respect to the categories c_{1} and c_{2} of some binary categorical variable of interest c (the affected/unaffected status in GWASs). At the tree root, the first cut-point splits the initial learning set into two complementary subsets, respectively satisfying v^{∗}≤θ and v^{∗}>θ, for some identified pair (v^{∗}, θ). If the discrimination power of cut-point (v^{∗},θ) is high enough, the two subsets should respectively contain a majority of observations belonging to category c_{1} and to category c_{2} (or symmetrically). However, at some node, if all observations in the current learning set belong to the same category, the node needs no further splitting and recursion locally ends in this leaf. Otherwise, recursion continues on both novel learning subsets resulting from the split. This provides two subtrees, to be grafted to the current node under creation.
The generic scheme of the standard learning algorithm for decision trees is provided in Algorithm 1. Its ingredients are: a test to terminate recursion (line 1), recursion termination (line 2), and recursion preceded by cut-point identification (lines 4 to 7). We will rely on this reference scheme to highlight the differences with the variants further considered in this paper. Recursion termination is common to this learning algorithm and the aforementioned variants. Algorithm 2 shows the instantiation of the former general scheme in the case of standard decision tree growing. The conditions for recursion termination are briefly described in Algorithm 2 (see caption).
In the learning algorithm of a decision tree, exact optimization is performed (Algorithm 2, lines 6 to 9): for each variable v in V and for each of the i_{ v } values in its value domain \(Dom(v) = \{ \theta _{v1}, \theta _{v2}, \cdots \theta _{v{i_{v}}} \}\), the discrimination power of cut-point (v, θ_{ vi }) is evaluated. If the cut-point splits the current learning set D into D_{ ℓ } and D_{ r }, the quality of this candidate cut-point is commonly assessed based on the conditional entropy measure: discriminatingScore(cut-point, D, c)=H(D/c)−w_{ ℓ }×H(D_{ ℓ }/c)−w_{ r }×H(D_{ r }/c), where H(X/Y) denotes the conditional entropy (\(H(X/Y) = \sum _{{x\in Dom(X),y\in Dom(Y)}}p(x,y)\log {\frac {p(y)}{p(x,y)}}\)), c is the binary categorical variable, and w_{ ℓ } and w_{ r } denote relative sample set sizes. Thus, an optimal cut-point is provided for each variable v in V, through the maximization of discriminatingScore (Algorithm 2, line 7). Finally, the best optimal cut-point over all variables in V is identified (Algorithm 2, line 9).
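The exhaustive cut-point search just described can be sketched as follows (a minimal Python illustration of the scoring scheme, not the authors' implementation; `best_cut_point` scans every threshold of a single variable and returns the one maximizing the entropy-based score):

```python
import math
from collections import Counter

def class_entropy(labels):
    """Shannon entropy (base 2) of the class labels of a sample set."""
    n = len(labels)
    return -sum((k / n) * math.log2(k / n) for k in Counter(labels).values())

def best_cut_point(values, labels):
    """Exhaustive search over the thresholds of one variable:
    score = H(D/c) - w_l * H(D_l/c) - w_r * H(D_r/c)."""
    parent = class_entropy(labels)
    best = (None, -1.0)
    for theta in sorted(set(values))[:-1]:   # the max value would leave D_r empty
        left = [c for v, c in zip(values, labels) if v <= theta]
        right = [c for v, c in zip(values, labels) if v > theta]
        score = (parent
                 - len(left) / len(labels) * class_entropy(left)
                 - len(right) / len(labels) * class_entropy(right))
        if score > best[1]:
            best = (theta, score)
    return best

# Genotypes coded 0/1/2; theta = 0 separates the two classes perfectly,
# so the score attains the parent entropy H(D/c).
theta, score = best_cut_point([0, 0, 1, 1, 2, 2], [0, 0, 1, 1, 1, 1])
```

A perfect split leaves both children pure (zero entropy), so the score equals the entropy of the class labels at the parent node.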
Single decision trees are subject to several limitations, in particular a (very) high variance which often makes them suboptimal predictors in practice. A technique called bagging was proposed by Breiman to bring robustness in this regard to machine learning algorithms [33]. Bagging conjugates bootstrapping and aggregating. The reader is reminded that bootstrapping is a resampling technique consisting in sampling with replacement from the original sample set. Bootstrapping makes it possible to generate an ensemble of predictors learned from slightly different versions of the original learning set. Thus, in a prediction framework, robustness is brought through the aggregation of predictions across the predictors in the ensemble. Bagging was one of the motivations behind the design of the ensemble technique yielding a random forest.
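A minimal sketch of bagging, with a hypothetical 1-nearest-neighbour learner standing in for the decision trees (any function taking a training sample and returning a predictor would do):

```python
import random

def bootstrap(sample, rng):
    """One bootstrap copy: draw len(sample) items with replacement."""
    return [rng.choice(sample) for _ in range(len(sample))]

def bagged_predict(learn, sample, x, T=25, seed=0):
    """Learn T predictors, one per bootstrap copy, and aggregate their
    predictions on x by majority vote."""
    rng = random.Random(seed)
    votes = [learn(bootstrap(sample, rng))(x) for _ in range(T)]
    return max(set(votes), key=votes.count)

def learn_1nn(train):
    """Hypothetical weak learner: 1-nearest-neighbour on (value, label) pairs."""
    return lambda x: min(train, key=lambda vc: abs(vc[0] - x))[1]

label = bagged_predict(learn_1nn, [(0, "a"), (1, "a"), (2, "b"), (3, "b")], x=0.4)
```

Each of the T predictors sees a slightly different bootstrap copy of the sample, and the majority vote smooths out their individual errors.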
On the other hand, other researchers investigated the idea of building tree-based models through a stochastic tree-growing algorithm instead of a deterministic one, as in decision trees. The idea of combining bagging with randomization led to the random forest model [22]. In a random forest model consisting of T trees, two kinds of randomization are introduced [34, 35]: (i) global, through the generation of T bootstrap copies; (ii) local, at the node level, for which the computation of the optimal cut-point is no longer performed exactly, namely over all variables in V, but instead over K variables selected at random in V. The second randomization source aims both at decreasing complexity for large datasets and at diminishing the variance.
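The local randomization at a node can be sketched as follows (an illustrative Python version; the use of Gini impurity as the splitting score is our choice for brevity, not prescribed by the text):

```python
import random
from collections import Counter

def gini(labels):
    """Gini impurity of a set of class labels."""
    n = len(labels)
    return 1.0 - sum((k / n) ** 2 for k in Counter(labels).values())

def random_forest_split(X, y, K, rng):
    """One node split with local randomization: only K columns of X,
    drawn at random, are scanned for the best (variable, threshold) pair."""
    best = (None, None, -1.0)          # (column, threshold, impurity decrease)
    for j in rng.sample(range(len(X[0])), K):
        col = [row[j] for row in X]
        for theta in sorted(set(col))[:-1]:
            left = [c for v, c in zip(col, y) if v <= theta]
            right = [c for v, c in zip(col, y) if v > theta]
            drop = gini(y) - (len(left) * gini(left) + len(right) * gini(right)) / len(y)
            if drop > best[2]:
                best = (j, theta, drop)
    return best

# Column 0 perfectly separates the two classes; column 1 is noise.
X = [[0, 1], [0, 0], [1, 1], [1, 0]]
y = [0, 0, 1, 1]
j, theta, drop = random_forest_split(X, y, K=2, rng=random.Random(0))
```

With K much smaller than |V|, each node only pays for K threshold scans instead of |V|, which is the complexity argument made above.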
Two of the three methods compared in the present study, T-Trees and the hybrid FLTM / T-Trees approach, are variants of random forests. For further reference, Algorithm 3 outlines the simple generic sketch that governs the growing of an ensemble of tree-based models, in the random forest context. It has to be noted that a novel set of K variables is sampled at each node, to compute the cut-point at this node. It follows that the instantiations of generic Algorithm 1 (growTree) into growDecisionTree (Algorithm 2), and into growRFTree adapted to the random forest framework (Appendix, Algorithm 7), only differ in their cut-point identifications. Table 1(A) and 1(B) show the difference between growDecisionTree and growRFTree. For the record, the full learning procedure growRFTree is depicted in Algorithm 7 in the Appendix.
For a gradual introduction to the hybrid FLTM / T-Trees approach, we will refer to various algorithms in the remainder of the paper. The relationships between these algorithms are described in Fig. 2.
The T-Trees approach
The novelty of the T-Trees approach is that it treats more than one variable at each node, in the context of association studies [36]. In the GWAS context, the reason to modify the splitting process lies in the presence of dependences within the SNPs (i.e. within the variables in V), called linkage disequilibrium. This peculiar structure of the data entails an expectation of limited haplotype diversity, locally on the genome. Based on the physical order of the SNPs along the genome, the principle of the T-Trees approach is to partition the set of variables V into blocks of B contiguous and (potentially highly) correlated variables. Each split will then be made on a block of SNPs instead of a single SNP, taking advantage of the local information potentially carried by the region covered by the corresponding block. However, addressing node splitting based on several variables was quite a challenge. For this purpose, Botta and collaborators customized a random forest model where each node in any tree itself embeds a tree. This “trees inside trees” model is abbreviated as T-Trees. Figure 3 describes the structure of a T-Trees model. Basically, the splitting process used in any node (or rather meta-node) of the random forest is now modified as follows: it involves a block of B variables, selected from K candidate blocks, instead of a single variable selected from K candidate variables as in random forests. In the case of GWASs, each block consists of B consecutive SNPs. For each meta-node, an embedded tree is then learned from a subset of k variables selected at random from the former block of B variables. Thus, it has to be noted that an additional source of randomization is brought to the overall learning algorithm: k plays the same role in embedded tree learning as the aforementioned parameter K plays in learning the trees at the random forest level. Only, to lower the complexity, k is much smaller than K (e.g. K is in the order of magnitude of 10^{3}, k is less than a few tens). Above all, overall T-Trees learning tractability is achieved through the embedding of trees that are weak learners. Aggregating multiple weak learners is often the key to ensemble strategies’ efficiency and tractability [37]. The weak embedded learner used by Botta and coworkers is inspired from the one used in the ensemble Extremely randomized tree framework proposed by Geurts and coworkers [38]. Following these authors, the abbreviation for Extremely randomized tree is Extra-tree.
In the Extra-tree framework, a key to diminishing the variance is the combination of explicit randomization of cut-points with ensemble aggregation. Just as importantly, explicit randomization of cut-points also intends to diminish the learning complexity of the whole ensemble model, as compared to the standard random forest model. We now focus on the basic brick, the (single) Extra-tree model, when embedded in the T-Trees context. The Extra-tree model drops the idea of identifying an optimal cut-point for each of the k variables selected at random among the B variables in a block. Instead, this method generates the k candidate cut-points at random and then identifies the best one. Table 1(C) highlights the differences with the cut-point identifications in growDecisionTree and growRFTree (Table 1(A) and 1(B)). However, embedding trees presents a challenge for the identification of the cut-point at a meta-node (for each meta-node of the random forest, in the T-Trees context). So far, we know that, at a meta-node n with current learning set D_{ n }, the solution developed in the T-Trees framework selects at random K blocks \(\mathcal {B}_{1} \cdots \mathcal {B}_{K}\) of B variables each, and accordingly learns K Extra-trees ET_{1}⋯ET_{ K }. In turn, each Extra-tree ET_{ b } is learned based on k variables selected from block \(\mathcal {B}_{b}\). Now the challenge consists in being able to split the current learning set D_{ n }, based on some cut-point involving a meta-variable to be inferred. This novel numerical feature has to reflect the variables exhibited in Extra-tree ET_{ b }. Botta and coworkers define this novel numerical feature ν as follows: for Extra-tree ET_{ b }, the whole current learning set D_{ n } (of observations) has been distributed into ET_{ b }’s leaves; each leaf is then labeled with the probability of belonging to, say, category c_{1} (e.g. 0.3); therefore, for each observation o in D_{ n } reaching leaf \(\mathcal {L}\), this meta-variable is assigned \(\mathcal {L}\)’s label (e.g. ν(o)=0.3); consequently, the domain of the meta-variable can be defined (Dom(ν)={ν(o), o∈observations(D_{ n })}); finally, it is straightforward to identify a threshold θ_{ b } that optimally discriminates D_{ n } over the value domain of the meta-variable. The process just described to identify the threshold θ_{ b } for a meta-variable plays the role of the function OptimalCutPoint in the generic scheme of random forest learning (line 8 of Algorithm 7, Appendix). We wish to emphasize here that the extensive performance assessment study of the T-Trees method conducted by Botta [36] evidenced high predictive powers (i.e. AUCs over 0.9; the concept of AUC will be recalled in Section Methods / Study design / Road map). Since the T-Trees method was empirically shown to be efficient, the explanation for such high performances lies in the core principles underlying the T-Trees design: (i) transformation of the original input space into blocks of variables corresponding to contiguous SNPs potentially highly correlated, due to linkage disequilibrium, and (ii) replacement of the classical univariate linear splitting process with a multivariate nonlinear splitting scheme over several variables belonging to the same block.
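The construction of the meta-variable ν can be sketched as follows (an illustrative Python version; the embedded tree is abstracted as a hypothetical `leaf_of` function mapping an observation to a leaf identifier, and the threshold score used here, split accuracy, is a simplification of the actual discrimination measure):

```python
from collections import defaultdict

def meta_variable(leaf_of, observations, labels, c1=1):
    """Build the numerical meta-feature nu from an embedded tree:
    each leaf is labeled with the proportion of category c1 among the
    observations it receives, and nu(o) is the label of o's leaf."""
    leaves = defaultdict(list)
    for o, c in zip(observations, labels):
        leaves[leaf_of(o)].append(c)
    label_of = {leaf: sum(1 for c in cs if c == c1) / len(cs)
                for leaf, cs in leaves.items()}
    return [label_of[leaf_of(o)] for o in observations]

def best_threshold(nu, labels, c1=1):
    """Scan Dom(nu) for the threshold theta_b that best separates the
    two categories (scored here by the accuracy of the induced split)."""
    best = (None, -1.0)
    for theta in sorted(set(nu))[:-1]:     # the max value leaves one side empty
        correct = sum(1 for v, c in zip(nu, labels)
                      if (v <= theta) == (c != c1))
        accuracy = correct / len(labels)
        if accuracy > best[1]:
            best = (theta, accuracy)
    return best

# Toy embedded tree: routes each observation to a leaf named after its first SNP.
obs = [(0,), (0,), (1,), (1,), (2,), (2,)]
labels = [0, 0, 0, 1, 1, 1]
nu = meta_variable(lambda o: o[0], obs, labels)
theta_b, accuracy = best_threshold(nu, labels)
```

In this toy run the three leaves are labeled 0.0, 0.5 and 1.0, so Dom(ν) = {0.0, 0.5, 1.0} and the threshold search operates on this small numerical domain rather than on the original SNPs.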
The FLTM approach
In contrast with the “ensemble method” meaning of “forest” in the two previous subsections, the Forest of Latent Tree Models (FLTM) we now focus on is a tree-shaped Bayesian network with discrete observed and latent variables.
A Bayesian network is a graphical model that encodes probabilistic relationships among n variables, each described for p observations. The nodes of the Bayesian network represent the variables, and the directed edges in the graph represent direct dependences between variables. A probability distribution is associated with each node. If the node corresponding to variable v has parents Pa_{ v }, this distribution is conditional (P(v/Pa_{ v })). Otherwise, this distribution is marginal (P(v)). The collection of probability distributions over all nodes is called the parameters.
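As a toy illustration of the factorization encoded by such a network, consider the smallest FLTM-like tree: a latent variable L with two observed children v1 and v2. The joint distribution factorizes as P(L)·P(v1/L)·P(v2/L), and observed marginals are obtained by summing over L (the probability tables below are made-up numbers, for illustration only):

```python
def observed_marginal(p_l, p_v1_given_l, p_v2_given_l, v1, v2):
    """P(v1, v2) in the tree L -> v1, L -> v2: sum over the latent
    variable L of P(L) * P(v1/L) * P(v2/L)."""
    return sum(p_l[l] * p_v1_given_l[l][v1] * p_v2_given_l[l][v2]
               for l in range(len(p_l)))

# Made-up parameters: binary latent variable, binary observed variables.
p_l = [0.6, 0.4]                     # P(L)
p_v1 = [[0.9, 0.1], [0.2, 0.8]]      # P(v1/L), one row per value of L
p_v2 = [[0.8, 0.2], [0.3, 0.7]]      # P(v2/L)

total = sum(observed_marginal(p_l, p_v1, p_v2, a, b) for a in (0, 1) for b in (0, 1))
```

Although v1 and v2 are not directly connected, summing over their common latent parent induces a dependence between them, which is exactly how FLTM latent variables capture the joint behavior of the SNPs below them.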
The FLTM model was designed by Mourad and collaborators for the purpose of modeling linkage disequilibrium (LD) at the genome scale. Indeed, the frontiers between regions of LD are fuzzy and a hierarchical model allows such fuzziness to be accounted for. LD is learned from an n×p matrix (i.e. n SNPs × p individuals). FLTM-based LD modeling consists in building a specific kind of Bayesian network with the n observed variables as tree leaves and latent variables as internal nodes in the trees. The structure of an FLTM model is depicted in Fig. 4.
Learning a latent tree is challenging in the high-dimensional case. There exist \(O\left (2^{3n^{2}}\right)\) candidate structures for a latent tree derived from n observed variables [39]. Learning the tree structure can only be efficiently addressed through iterative ascending clustering of the variables [40]. A similarity measure based on mutual information is usually used to cluster discrete variables. On the other hand, parameter learning requires time-consuming procedures such as the Expectation-Maximization (EM) algorithm in the case of missing data. Dealing with latent variables represents a subcase of the missing data case. The FLTM learning algorithm is sketched and commented on in Fig. 5.
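The mutual-information similarity used for clustering can be sketched as follows (a plug-in estimate from paired observations; a minimal illustration, not the FLTM implementation):

```python
import math
from collections import Counter

def mutual_information(x, y):
    """Mutual information (in bits) between two discrete variables,
    estimated from paired observations: sum over joint values of
    p(a, b) * log2(p(a, b) / (p(a) * p(b)))."""
    n = len(x)
    px, py, pxy = Counter(x), Counter(y), Counter(zip(x, y))
    return sum((k / n) * math.log2(k * n / (px[a] * py[b]))
               for (a, b), k in pxy.items())
```

Two identical binary variables reach the maximum similarity (1 bit, their entropy), while two independent-looking variables score 0; SNP pairs in strong LD therefore cluster together under this measure.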
To allow a faithful representation of linkage disequilibrium, great flexibility of the FLTM modeling was an objective of Mourad and collaborators’ works: (i) no fixed cluster size is required; (ii) the SNPs allowed in the same cluster are not necessarily contiguous on the genome, which allows long-range disequilibrium modeling; (iii) in the FLTM model, no two latent variables are constrained to share some user-specified cardinality. The tractability of the FLTM learning algorithm has four reasons: (i) variables are allowed in the same cluster provided that they are located within a specified physical distance on the genome; handling a sparse similarity matrix is affordable whereas using a pairwise matrix would not be; (ii) local learning of a latent class model (LCM) has a complexity linear in the number of the LCM’s child nodes; (iii) a constant-time heuristic provides the cardinality required by EM for the latent variable of each LCM; (iv) there are at most 3n latent variables in a latent tree built from n observed variables.
The hybrid FLTM / T-Trees approach
Now the ingredients to depict the hybrid approach developed in this paper are in place. In T-Trees, the blocks of B contiguous SNPs are a rough approximation of linkage disequilibrium. In contrast, each latent variable in layer 1 of the FLTM model pinpoints a region of LD. The connection between the FLTM and T-Trees models is achieved through LD mapping. The block map required by T-Trees in the original proposal is replaced with the cluster map associated with the latent variables in layer 1. It has to be emphasized that this map consisting of clusters of SNPs is not the output of a mere clustering process: in Fig. 5e, a latent variable and thus its corresponding cluster are validated following a procedure involving EM learning of Bayesian network parameters.
The hybrid approach is fully depicted and commented on in Algorithms 4, 5 and 6. Hereinafter, we provide a broad-brush description. In Algorithm 4, the generic random forest scheme of Algorithm 3 achieving global randomization is enriched with the generation of the LD map through FLTM modeling (lines 1 and 2). This map is one of the parameters of the function growMetaTree (Algorithm 4, line 6). The other parameters of growMetaTree respectively contribute to shape the meta-trees in the random forest (S_{ n }, S_{ t }, K) and the embedded trees (s_{ n }, s_{ t }, k) associated with the meta-nodes. Both parameters K and k achieve local randomization. In addition, function growMetaTree differs from growRFTree (Appendix, Algorithm 7) on two points: it must expand an embedded tree through function growExtraTree (Algorithm 5, line 8) for each of the K clusters drawn from the LD map; it must then infer data for the meta-variable defined by each of the K Extra-trees, to compute the optimal cut-point for each such meta-variable (optimalCutPointTTrees, Algorithm 5, line 9). Algorithm 6 fully details function growExtraTree, in which the identification of cut-points achieves a further step of randomization (line 8).
In a random forest-based approach, the notion of variable importance used for decision trees is modified to include in Nodes(v) the set of all nodes, over all T trees, where v is used to split. As such, this measure is however dependent on the number T of trees in the forest. Normalization consists in dividing the previous measure (over the T trees) by the sum of importances over all variables. Alternatively, dividing by the maximum importance over all variables may be used.
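Both normalization variants amount to a one-line rescaling (a sketch with hypothetical raw importance values; the SNP identifiers are made up):

```python
def normalized_importances(raw, by="sum"):
    """Rescale raw per-variable importances so that forests with
    different numbers of trees T become comparable: divide either by
    the total importance ('sum') or by the largest one ('max')."""
    denom = sum(raw.values()) if by == "sum" else max(raw.values())
    return {v: imp / denom for v, imp in raw.items()}

# Hypothetical raw importances accumulated over the T trees of a forest.
raw = {"rs123": 12.0, "rs456": 6.0, "rs789": 2.0}
by_sum = normalized_importances(raw, by="sum")   # values sum to 1
by_max = normalized_importances(raw, by="max")   # best variable scores 1
```

Under the "sum" convention importances read as shares of the total, while under the "max" convention they read as fractions of the best-ranked variable.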
In the GWAS context, the differences between the standard single-SNP GWAS, the TTrees approach and the hybrid FLTM / TTrees approach are schematized in Fig. 6.
Study design
In this last subsection, we first present the data used in our comparative analysis. Then, details are provided regarding software implementation, including considerations about the validation of the software parallelization. We next describe the parameter setting for the methods involved in the comparative study. Finally, we provide the road map of our methodological analysis.
Simulated data
To simulate realistic genotypic data and an association between one of these SNPs and the disease status, we relied on one of the most widely used software programs, HAPGEN (http://mathgen.stats.ox.ac.uk/genetics_software/hapgen/hapgen2.html) [41]. To control the effect size of the causal SNPs, three ingredients were combined: the severity of the disease, expressed as a genotype relative risk (GRR); the genetic model (GM); and the minor allele frequency (MAF) of the causal SNP. The genetic model was specified among additive, dominant or recessive. Three genotype relative risks were considered (1.2, 1.5 or 1.8). The range of the MAF at the causal SNP was specified within one of the three intervals [0.05-0.15], [0.15-0.25] or [0.25-0.35]. The disease prevalence (percentage of cases observed in a population) specified to HAPGEN was set to 0.01. These choices are justified as standards used for simulations in association genetics.
HAPGEN was run on a reference haplotype set of HapMap phase II coming from U.S. residents of northern and western European ancestry (CEU). Datasets of 20,000 SNPs were generated for 2,000 cases and 2,000 controls. Each condition (GM, GRR, MAF) was replicated 30 times. For each replicate, we simulated 10 causal SNPs. Standard quality control for genotypic data was carried out: we removed SNPs with a MAF less than 0.05 and SNPs deviating from Hardy-Weinberg equilibrium with a p-value below 0.001.
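This quality control step can be sketched as follows. The chi-square approximation below stands in for whatever Hardy-Weinberg test a given pipeline actually uses, and all function names are ours; for a chi-square statistic with 1 degree of freedom, the upper-tail p-value is erfc(sqrt(x/2)), so the standard library suffices.

```python
from math import erfc, sqrt

def maf(n_aa, n_ab, n_bb):
    """Minor allele frequency from genotype counts."""
    n_alleles = 2 * (n_aa + n_ab + n_bb)
    p = (2 * n_aa + n_ab) / n_alleles
    return min(p, 1 - p)

def hwe_pvalue(n_aa, n_ab, n_bb):
    """Chi-square (1 df) test of Hardy-Weinberg equilibrium."""
    n = n_aa + n_ab + n_bb
    p = (2 * n_aa + n_ab) / (2 * n)
    q = 1 - p
    expected = (n * p * p, 2 * n * p * q, n * q * q)
    chi2 = sum((o - e) ** 2 / e
               for o, e in zip((n_aa, n_ab, n_bb), expected))
    return erfc(sqrt(chi2 / 2))        # upper tail, chi-square with 1 df

def passes_qc(n_aa, n_ab, n_bb, maf_min=0.05, hwe_min=0.001):
    """Keep a SNP only if MAF >= 0.05 and HWE p-value >= 0.001."""
    return (maf(n_aa, n_ab, n_bb) >= maf_min
            and hwe_pvalue(n_aa, n_ab, n_bb) >= hwe_min)
```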
Real data
The GWAS data we used was made available by the WTCCC (Wellcome Trust Case Control Consortium, https://www.wtccc.org.uk/). The WTCCC provides GWAS data for seven pathologies: bipolar disorder (BD), coronary artery disease (CAD), Crohn’s disease (CD), hypertension (HT), rheumatoid arthritis (RA), Type 1 diabetes (T1D) and Type 2 diabetes (T2D). The data from the two control cohorts provided by the WTCCC was also included. For each pathology, we carried out a comparative study on two datasets corresponding to two chromosomes.
The NHGRI-EBI Catalog of published genome-wide association studies (https://www.ebi.ac.uk/gwas/) allowed us to select these two chromosomes: for each pathology, we retained the chromosomes respectively showing the highest and lowest numbers of published associated SNPs so far. Table 2 recapitulates the description of the 14 WTCCC datasets selected. A quality control phase based on specifications provided by the WTCCC Consortium was performed [42]. In particular, SNPs were dismissed based on the following rules: missing data percentage greater than 5%; missing data percentage greater than 1% together with minor allele frequency (MAF) less than 5%; p-value for the exact Hardy-Weinberg equilibrium test less than 5.7×10^{−7}; p-value for the trend test (1 df) less than 5.7×10^{−7} or p-value for the general test (2 df) less than 5.7×10^{−7}.
Implementation
The TTrees (sequential) software, written in C++, was provided by Botta. A single run is highly time-consuming for GWASs of the orders of magnitude we have to deal with. For example, on an Intel Xeon 3.3 GHz processor, running TTrees on chromosome 1 for Crohn’s disease requires around 3 days. Under these conditions, a 10-fold cross-validation (to be further described) would roughly require a month. On the other hand, around 5 GB of memory are necessary to run TTrees with the parameter values recommended by Botta [36], which restrains the number of executions in parallel. The only lever of action left was to speed up the TTrees software through parallelization. We parallelized Botta’s code using the OpenMP application programming interface for parallel programming (http://www.openmp.org/).
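Although the actual parallelization was done in C++ with OpenMP, the underlying principle is simple: forest growth is embarrassingly parallel over trees, and per-tree seeds keep the result reproducible. The sketch below illustrates this in Python; the dummy grow_tree stands in for the real tree-growing routine, and all names are our own.

```python
import random
from concurrent.futures import ThreadPoolExecutor

def grow_tree(seed):
    """Stand-in for growing one randomized tree: returns the per-SNP
    importance contributions of that tree, driven only by its seed."""
    rng = random.Random(seed)
    return [rng.random() for _ in range(5)]   # 5 dummy SNPs

def grow_forest(n_trees, base_seed, workers=1):
    """Grow n_trees trees (sequentially or in parallel) and aggregate
    per-SNP importances. Because each tree depends only on its own seed,
    the parallel result matches the sequential one exactly."""
    seeds = [base_seed + t for t in range(n_trees)]
    if workers == 1:
        per_tree = [grow_tree(s) for s in seeds]
    else:
        with ThreadPoolExecutor(max_workers=workers) as pool:
            per_tree = list(pool.map(grow_tree, seeds))   # order preserved
    return [sum(col) for col in zip(*per_tree)]
```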
For the purpose of this study, we also developed the third version of our FLTM learning algorithm [43]. It was written in C++ and relies on the ProBT library dedicated to Bayesian networks (http://www.probayes.com/fr/, http://www.probayes.com/fr/recherche/probt/, [44, 45]). In this version, the user is offered a choice among three clustering methods. The one used in the present work is DBSCAN (Density-Based Spatial Clustering of Applications with Noise) [46]. In density-based clustering methods, a cluster is defined as a dense component that grows in any direction density leads to. In this category, DBSCAN was chosen as it meets two essential criteria: non-specification of the number of clusters and the ability to scale well. The theoretical runtime complexity of DBSCAN is O(n^{2}), where n denotes the number of items to be grouped into clusters. Nonetheless, the empirical complexity is known to be lower. DBSCAN requires two parameters: R, the maximum radius of the neighborhood to be considered to grow a cluster, and N_{ min }, the minimum number of neighbors required within a cluster. Details about the DBSCAN algorithm are available in [47] (page 526, http://wwwusers.cs.umn.edu/~kumar/dmbook/ch8.pdf).
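To make the roles of R and N_{ min } concrete, here is a minimal, illustrative reimplementation of DBSCAN (the study itself relies on an existing implementation; the 1-D distance below merely stands in for an LD-based dissimilarity between SNPs):

```python
def dbscan(points, R, N_min, dist=lambda a, b: abs(a - b)):
    """Minimal DBSCAN. Returns labels[i] = cluster id, or -1 for noise.
    A point is a core point if it has at least N_min neighbors (itself
    included) within radius R; clusters grow by expanding core points."""
    labels = [None] * len(points)
    cluster = -1

    def neighbors(i):
        return [j for j in range(len(points))
                if dist(points[i], points[j]) <= R]

    for i in range(len(points)):
        if labels[i] is not None:
            continue
        nbrs = neighbors(i)
        if len(nbrs) < N_min:
            labels[i] = -1          # noise (may be claimed as border later)
            continue
        cluster += 1
        labels[i] = cluster
        queue = list(nbrs)
        while queue:
            j = queue.pop()
            if labels[j] == -1:
                labels[j] = cluster  # former noise becomes a border point
            if labels[j] is not None:
                continue
            labels[j] = cluster
            j_nbrs = neighbors(j)
            if len(j_nbrs) >= N_min:  # j is core: keep expanding
                queue.extend(j_nbrs)
    return labels
```

With R = 1.5 and N_{ min } = 2, the positions {0, 1, 2} and {10, 11, 12} form two clusters, while an isolated point is labeled noise.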
Finally, we wrote Python scripts to automatize the comparison of the results provided by the three GWAS strategies: single-SNP GWAS, TTrees, and the hybrid FLTM / TTrees approach.
Validation of the software parallelization
Parallelization is known to potentially introduce biases in results. We therefore took great care to check whether this was the case for the parallelization of the TTrees software. We recall that TTrees is stochastic, with four sources of randomization. Thus, we controlled randomization by fixing random seeds, and we compared the sequential and parallelized versions of TTrees on five datasets. The comparison shows that the importance measures are practically identical, with some rare minor discrepancies (results not shown). In any case, the SNP ranking is conserved from the sequential to the parallelized version.
Parameter setting
TTrees and the hybrid FLTM / TTrees approach share 7 parameters (T, S_{ n }, S_{ t }, K, s_{ n }, s_{ t }, k). We tuned them following the experiments and remarks reported in [36]. In TTrees, the size of the blocks of SNPs was set to 20. The FLTM learning algorithm requires 6 parameters (α, β, card_{ max }, τ, nbEMrestarts, δ). All 13 parameters are depicted in Table 3. The values of α, β, τ and nbEMrestarts were set following the indications in [32]. The value of card_{ max } was set according to our own experimental feedback. The value for δ was chosen to control the running time in the case of high-dimensional data. In addition, the clustering procedure DBSCAN plugged into FLTM learning requires 2 parameters (R, N_{ min }). To avoid a questionable empirical setting of the R parameter, for each of the 14 datasets analyzed, we ran the FLTM learning algorithm for a wide range of possible values of R. For each dataset, we retained the FLTM model with the R parameter that optimized an entropy-based criterion. Table 3 recapitulates these 15 parameters and indicates the parameter value or interval chosen for the present study.
Road map
Our methodological analysis consisted of four main tasks.
Comparison of the performances of TTrees and the hybrid approach on simulated data
We computed the percentage of the causal SNPs found among the top results, over all replicates related to the same condition (GM, GRR, MAF). We successively computed this percentage for the top 25, top 50, top 100, top 200 and top 1000 results.
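This evaluation can be sketched as follows (illustrative code; `scores` maps SNP identifiers to importance values, `causal` lists the simulated causal SNPs):

```python
def causal_retrieval(scores, causal, top_sizes=(25, 50, 100, 200, 1000)):
    """Percentage of causal SNPs found among the top-k scored SNPs,
    for each requested top-list size k."""
    ranked = sorted(scores, key=scores.get, reverse=True)
    causal = set(causal)
    return {k: 100.0 * len(causal & set(ranked[:k])) / len(causal)
            for k in top_sizes}
```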
Comparison of the predictive powers of TTrees and the hybrid approach on real data
ROC (receiver operating characteristic) curves help quantify and compare the powers of classifiers [48]. In the context of disease prediction, positives (P) and negatives (N) respectively stand for affected and unaffected subjects. The ROC curve involves four counts: TPs (true positives), FNs (false negatives), TNs (true negatives) and FPs (false positives). The ROC curve plots sensitivity (TP/P = TP/(TP+FN)) against 1 − specificity, where specificity = TN/N = TN/(TN+FP). Assuming positive statuses rank higher than negative statuses, the area under the curve (AUC) is equal to the probability that the prediction tool under analysis will rank a randomly chosen affected subject higher than a randomly chosen unaffected one. A high AUC is expected for a good classifier. We compared the AUCs obtained for TTrees and the hybrid FLTM / TTrees approach on 14 datasets, following a 10-fold cross-validation scheme. In this scheme, the initial dataset D is split into 10 smaller datasets of equal size {D_{ i }, 1≤i≤10}. The model is trained on nine tenths of the data (training set) and tested on the remaining tenth (testing set). This process is iterated ten times; the i^{th} iteration involves testing set D_{ i }. It is important to note that in the hybrid FLTM / TTrees approach, the LD map provided by FLTM is the same for all 10 iterations.
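The probabilistic interpretation of the AUC given above lends itself to a direct computation (the Mann-Whitney formulation; a sketch for illustration, not the evaluation code used in the study):

```python
def auc_from_scores(case_scores, control_scores):
    """AUC computed directly as the probability that a randomly chosen
    affected subject is scored above a randomly chosen unaffected one
    (ties counted as one half)."""
    wins = 0.0
    for s in case_scores:
        for t in control_scores:
            if s > t:
                wins += 1.0
            elif s == t:
                wins += 0.5
    return wins / (len(case_scores) * len(control_scores))
```

A perfect classifier yields an AUC of 1, and a score identical for cases and controls yields the chance level of 0.5.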
Comparison of the distributions of variable importances obtained from TTrees and the hybrid approach on real data
A third task consisted in comparing the distributions of variable importances for the two random forest-based methods. Besides descriptive statistics, a finer analysis relying on the Wilcoxon rank-sum test and the Pearson correlation was performed.
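For illustration, both statistics can be computed from first principles; the rank-sum p-value below uses the normal approximation and assumes no ties, whereas in practice a statistical library would be used.

```python
from math import erfc, sqrt

def pearson(x, y):
    """Pearson correlation coefficient of two equal-length sequences."""
    n = len(x)
    mx, my = sum(x) / n, sum(y) / n
    cov = sum((a - mx) * (b - my) for a, b in zip(x, y))
    vx = sum((a - mx) ** 2 for a in x)
    vy = sum((b - my) ** 2 for b in y)
    return cov / sqrt(vx * vy)

def ranksum_pvalue(x, y):
    """Two-sided Wilcoxon rank-sum test, normal approximation, no ties."""
    pooled = sorted(x + y)
    ranks = {v: i + 1 for i, v in enumerate(pooled)}  # distinct values only
    w = sum(ranks[v] for v in x)                      # rank sum of sample x
    n1, n2 = len(x), len(y)
    mean = n1 * (n1 + n2 + 1) / 2
    sd = sqrt(n1 * n2 * (n1 + n2 + 1) / 12)
    z = (w - mean) / sd
    return erfc(abs(z) / sqrt(2))                     # two-sided p-value
```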
Analysis of the SNP sets jointly identified by any two, or all three, of the methods compared on real data
In contrast to single-SNP GWASs, random forest-based GWASs entail a heavy computational burden. Therefore, it is not affordable to assess the statistical significance of the importance measure against an empirical null distribution (H_{0}), so as to provide a p-value. Dealing with p-values on the one hand, and importance values on the other, our comparative study focused on the 100 top-ranked SNPs for each of the single-SNP, TTrees, and hybrid FLTM / TTrees approaches.
Results and discussion
This section presents and discusses the results obtained following the road map depicted above.
Comparison of the performances of TTrees and the hybrid approach on simulated data
Figure 7 compares the performances of TTrees and the hybrid approach under each of the 27 conditions simulated. For the additive and dominant genetic models, we find that the hybrid approach almost always outperforms TTrees when small sets of top results are examined. Then, sooner or later, the discrepancy between the methods diminishes. For a first illustration, we examine condition (GM: add, GRR: 1.5, MAF: 0.25-0.35). Regarding the top 25 set, the hybrid approach slightly outperforms TTrees, with a percentage of simulated causal SNPs retrieved equal to 71.7% (versus 70.8% for TTrees). The discrepancy is higher for the top 50 set, for which the hybrid method is able to retrieve 89.4% of the causal SNPs, in contrast to the relatively low percentage of 79.3% for TTrees. From the top 100 set onwards, both methods show quasi-similar performances. Under this condition, the top 100 set contains around 94% of the causal SNPs. For a second illustration, we now focus on condition (GM: add, GRR: 1.5, MAF: 0.15-0.25). The percentage of simulated causal SNPs retrieved in the top 25 set is 62.2% for the hybrid method, versus only 52.7% for TTrees. Both methods present a top 1000 percentage around 90% (91.0% and 90.2% for the hybrid method and TTrees, respectively). Under this condition, a difference between the two methods persists up to the top 100 set (90.3% versus 83.7%).
For the additive and dominant models, we observe a more or less smooth degradation of the performances as the MAF and GRR decrease. However, the hardest case (GRR: 1.2, MAF: 0.05-0.15) is an exception, for which even the top 100 to top 1000 percentages are low (below 30%). In contrast, as expected, the performances are poor for the recessive model in all conditions.
Regarding the recessive model, the trend is mixed, with TTrees performing better than the hybrid approach or vice versa. Over the 9 (GRR, MAF) conditions, there are only two cases for which the hybrid approach slightly outperforms TTrees. In 2 of the 3 most difficult conditions, (GRR: 1.2, MAF: 0.15-0.25) and (GRR: 1.5, MAF: 0.05-0.15), the hybrid approach outperforms TTrees up to and including the top 100 set (except the top 25 set for the first of these two conditions). Table 4 recapitulates the main trends observed.
To conclude, these first experiments show that combining TTrees with the modeling of linkage disequilibrium improves the original TTrees method.
Comparison of the predictive powers of TTrees and the hybrid approach on real data
The parallelization of TTrees allowed us to decrease the running time from around 3 days to around 3 hours. Thus, the 10-fold cross-validation time was reduced from around one month to less than 40 hours. The AUCs for TTrees and the hybrid FLTM / TTrees approach are displayed in Table 5. The first conclusion to draw is that both methods perform similarly, with, on average, a slight improvement of the hybrid method over TTrees in 10 of the 14 cases. Globally, the absolute difference in AUCs between the two methods ranges from 0.6 to 3.0%. For the 10 situations showing an improvement, the average improvement amounts to 1.74% and the standard deviation is 0.85%. It was not a foregone conclusion that we could improve the performance of TTrees, as it was already high. We conclude that the sophistication of TTrees through finer linkage disequilibrium modeling is beneficial for the majority of the 14 datasets analyzed.
Comparison of the distributions of variable importances obtained from TTrees and the hybrid approach on real data
For a complementary analysis, we compared the distributions of variable importances across the SNPs, obtained from TTrees and the hybrid FLTM / TTrees approach. Additional file 1 in the Supplementary data provides the results of this thorough analysis. Table 1 in Additional file 1 displays the minimum, maximum, average and standard deviation values obtained for both the TTrees and hybrid approaches. The extract shown in Table 6 is representative of the trend observed over the 14 datasets. For TTrees, importance measures vary from nondetectable to orders of magnitude of 10^{−2} for 7 datasets and 10^{−1} for the 7 other datasets. As regards the hybrid approach, importance measures vary from nondetectable to orders of magnitude 10^{−2} and 10^{−1} for 6 and 8 datasets, respectively. Except for one dataset, (CD, Chr20), the maximal importance values are of the same order of magnitude for both methods. Besides, for TTrees and the hybrid approach, averages across SNPs are always of the same order of magnitude: 10^{−5}, exceptionally 10^{−4} for 3 datasets ((BD, Chr21), (RA, Chr19) and (T2D, Chr21)). The orders of magnitude for the standard deviations are in the range [10^{−4}, 10^{−3}]. Again, the trend observed is similarity in the orders of magnitude for TTrees and the hybrid approach, except for 3 datasets ((BD, Chr03), (BD, Chr21) and (CD, Chr20)).
However, complementary statistics shed more light on the distributions compared. From Table 2 in Additional file 1 (Supplementary data), we observe that the correlation coefficient varies from 0.13 to 0.98. Moreover, the Wilcoxon test rejects the null hypothesis of identical distributions in all 14 cases. An extract of this table is shown in Table 7. Since, according to the Wilcoxon test, the mean ranks of the two distributions differ, the conclusion to draw is that, in particular, the top SNPs are likely to differ between the two methods compared.
Tables 3 to 16 in Additional file 1 (Supplementary data) help decipher the origin of the discrepancies between the TTrees and hybrid approaches’ distributions. For each dataset (pathology, chromosome), the corresponding table compares the 25%, 50%, 75% and 100% quantiles obtained for TTrees and the hybrid approach. Importantly, a focus is also set on the 6 quantiles that correspond to the top 300, top 200, top 100, top 50, top 20 and top 10 SNPs output by each method. A first observation is that up to the 50% quantile, for both methods, the importance measures vary from nondetectable (i.e., below 10^{−8}) to the order of magnitude 10^{−5} at most (exceptionally 10^{−4} at most for TTrees and datasets (BD, Chr21) and (RA, Chr19)). We also observe a constant trend: up to and including the 75% quantile, the hybrid method’s quantiles are always lower than TTrees’ quantiles. In contrast, for 12 datasets out of 14, the hybrid method’s quantiles are higher than TTrees’ quantiles from some threshold quantile onwards. To illustrate, we show in Table 8 the quantiles relative to dataset (RA, Chr19). In this case, more “important” SNPs are likely to be found in the hybrid approach’s top 300 SNPs than in TTrees’ top 300s (quantile_{ hybrid }=1.5×10^{−2} versus quantile_{T−Trees}=6.0×10^{−3}).
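The top-k quantiles discussed here are simply the k-th largest importance values; a minimal sketch:

```python
def top_quantiles(importances, top_sizes=(300, 200, 100, 50, 20, 10)):
    """Importance value at the threshold of each top-k set,
    i.e. the k-th largest importance (skipping k beyond the data size)."""
    ranked = sorted(importances, reverse=True)
    return {k: ranked[k - 1] for k in top_sizes if k <= len(ranked)}
```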
As biologists expect a short list of prioritized SNPs to be further analyzed, it is important to study whether one of the two methods yields higher importance values than the other for the top SNPs. Table 9 summarizes the tendencies observed from Tables 3 to 16 in Additional file 1 (Supplementary data).
Hitherto, we knew that the two distributions of variable importances differed between TTrees and the hybrid approach. The conclusion to draw now is that, except for 2 datasets out of 14, the hybrid method outputs top-ranked SNPs with relatively higher importances than TTrees does, which is our focus in GWASs. Therefore, the search space seems to be more efficiently explored. Potentially, the hybrid approach groups the SNPs into clusters in a manner more efficient for targeting the putative associated SNPs. In TTrees, artificial blocks of 20 contiguous SNPs attempt to approximate linkage disequilibrium. In the hybrid FLTM / TTrees approach, clustering based on linkage disequilibrium produces singletons and clusters of pairwise highly correlated SNPs. First, it has to be emphasized that, regarding the draws of SNPs contributing to Extratrees, TTrees and the hybrid approach are put on an equal footing: k was either set to the block’s size (TTrees) or dynamically adapted to each cluster’s size (hybrid approach). Thus, in this GWAS framework, any SNP in a block (respectively, cluster) will be considered to contribute to the Extratree built for this block (respectively, cluster). However, in TTrees, the artificial delimitation into blocks may lessen the chance of testing SNPs from the same real haplotype block within the scope of one Extratree. It is likely that the multivariate split produced from a cluster of SNPs tagging the putative influential SNP, or best capturing the association with the phenotype, will be the optimal split (Algorithm 5, line 11), for any such cluster draw (at any metanode of the T metatrees). In TTrees, a block is also likely to be drawn at any metanode of the T metatrees. However, if the SNPs that best tag the influential SNP, or best capture the association with the phenotype, are spread over two or more blocks, none of these blocks is likely to produce an optimal multivariate split.
Prediction performance measured through the AUC of a ROC curve relies on data describing both the true and predicted statuses of the individuals. We now examine whether, for a given dataset, the method with the higher AUC (Table 5) is also the approach showing inflated top SNPs’ importance quantiles (Table 9). Indeed, in 10 cases out of 14, we observe consistency between Tables 5 and 9. Inconsistencies are pinpointed for datasets (CAD, Chr05), (CAD, Chr06), (RA, Chr06) and (T2D, Chr21). First, for 6 of the 10 cases for which the AUC is highest for the hybrid approach, we observe that the tail of highest importances for the hybrid approach lies above that of TTrees from the top 200 or top 300 quantile onwards. In contrast, when the hybrid approach’s distribution lies above TTrees’ only from the top 10 quantile onwards, we observe a higher AUC for TTrees (inconsistencies for (CAD, Chr05) and (RA, Chr06)). For the intermediary top quantiles (top 100), the situation is mixed, with dataset (CAD, Chr06) showing the lowest AUC for the hybrid approach (inconsistency) whereas the contrary holds for dataset (T1D, Chr13). To attempt to explain the fourth inconsistency, (T2D, Chr21), we compare this case with that of (T1D, Chr02). In the latter case, TTrees’ importance values are inflated from the top 25% quantile onwards (except for the top 300 quantile) and TTrees’ AUC is the highest. The inconsistency of dataset (T2D, Chr21) might be explained by the fact that, even though TTrees’ importance values are also inflated from the top 25% quantile onwards, the top 300 and top 200 quantiles are missing. However, the explanation is not so simple, as it would explain the inconsistency for (CAD, Chr06) (both an inflated distribution from the top 100 quantile onwards and the lowest AUC for the hybrid approach) but not the consistency of (T1D, Chr13) (an inflated distribution from the top 100 quantile onwards and the highest AUC, both for the hybrid approach).
With this single restriction, for the datasets analyzed, we are inclined to draw two conclusions: the method with the highest importance distribution tail also tends to show the highest predictive power; the importances of the variables in the rank intervals top 300 and top 200 might be crucial for a method’s predictive power.
An additional conclusion is that both methods can be used to pinpoint various top SNPs, in a complementary way. However, complementarity can also be used to reinforce confidence in top SNPs. In the sequel, we therefore focus on the SNPs identified in common by several of the compared methods, including the single-SNP GWAS.
Analysis of the SNP sets jointly identified by any two, or all three, of the methods compared on real data
We reported the SNPs jointly identified in the top 100s by any two or all three of the methods studied: single-SNP GWAS versus TTrees, single-SNP GWAS versus the hybrid approach, TTrees versus the hybrid approach, and the three methods together. Additional file 2 (Supplementary data) describes these results for each of the 14 pairs (pathology, chromosome) analyzed. Figure 8 plots the cardinalities of the 3 pairwise intersections and of the three-way intersection for the 14 datasets analyzed. The Venn diagrams in Additional file 3 (Supplementary data) provide an intuitive insight into the trends observed across the 14 datasets. For an illustration, Fig. 9 focuses on the Venn diagrams of (BD, Chr03), (CD, Chr01) and (T2D, Chr10). (T2D, Chr10) is the dataset with the highest number of top 100 SNPs common to the three methods (23) over the 14 datasets. Together with datasets (CD, Chr20) and (T2D, Chr10), dataset (BD, Chr03) shows the largest number of top 100 SNPs common to the single-SNP GWAS and TTrees (31). The largest number of top 100 SNPs common to the single-SNP GWAS and the hybrid approach (30) is observed for (CD, Chr01). Finally, (T2D, Chr10) is again the dataset for which the number of top 100 SNPs common to TTrees and the hybrid approach is the highest (60). The reader interested in a more thorough analysis of the number of top 100 SNPs common to any two or all three methods is referred to the end of Additional file 3.
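The pairwise and three-way intersection cardinalities can be computed as follows (a sketch; the inputs are the ranked SNP lists of the three methods, and the dictionary keys are our own labels):

```python
def top_overlaps(single, ttrees, hybrid, n=100):
    """Cardinalities of the pairwise and three-way intersections of the
    top-n SNP lists of the three GWAS strategies."""
    s, t, h = set(single[:n]), set(ttrees[:n]), set(hybrid[:n])
    return {
        "single&ttrees": len(s & t),
        "single&hybrid": len(s & h),
        "ttrees&hybrid": len(t & h),
        "all_three": len(s & t & h),
    }
```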
It was expected that the number of top 100 SNPs common to each TTrees-based strategy and the single-SNP GWAS would be smaller than the number of SNPs shared with the other TTrees-based strategy. Indeed, the two categories of approaches, TTrees-based and single-SNP, rely on quite different detection mechanisms. Random forest-based methods are certainly complementary to the single-SNP approach. On the other hand, as the powers are quite similar and relatively high for TTrees and the hybrid approach, as shown in the section devoted to the AUC comparison, we can also conclude that both methods are complementary. Any top 100 SNP detected by one of the two TTrees-based methods is in any case amongst the most relevant to discriminate between cases and controls. Furthermore, a SNP jointly pinpointed by TTrees and the hybrid approach should be prioritized for further biological analysis. Interestingly, among the latter SNPs, the top 100s jointly identified by the three methods must be paid attention to. The number of such SNPs varies in the range [8, 23] over the 14 datasets analyzed.
Table 1 in Additional file 4 (Supplementary data) describes the characteristics of the SNPs jointly identified in the top 100s by each of the three methods. Additional file 4 then discusses these results in detail. In the sequel, a SNP jointly identified in the top n SNPs by each of the three GWAS strategies is called a top*n SNP. For instance, a SNP identified in the top 20 SNPs by each method is called a top*20 SNP. For each dataset, we are interested in identifying the maximal integer m (m≤100) such that the top*100 SNPs are also top*m SNPs.
A summary of Additional file 4 is provided in Table 10. We observe that each time a top*1 SNP is detected (that is, for datasets (BD, Chr03), (CAD, Chr05), (CD, Chr01), (RA, Chr06), (RA, Chr19) and (T2D, Chr21)), it is significantly associated with the disease according to the single-SNP GWAS strategy. Besides, 62 other top*m SNPs are also characterized by a significant p-value. In addition, Table 10 allows us to prioritize top*100 SNPs for further biological investigation: among the 211 top*100 SNPs detected over the 14 datasets, we identified 72 top*25 SNPs, including 38 top*10 SNPs. Thus, an additional insight of our study is the value of selecting top*m SNPs to prioritize a list of SNPs.
Finally, Table 11 displays the number of top 100 SNPs common to TTrees and the hybrid approach. For 8 of the 14 datasets, the correlation coefficient of the corresponding variable importances is greater than 0.93. For the 6 other datasets, the correlation coefficient varies between 0.58 and 0.87.
Current limitation to the applicability of the hybrid approach on a genome-wide scale
The bottleneck to extending the hybrid approach to the genome scale is the FLTM algorithm. In the machine learning domain, learning a latent tree is challenging in the high-dimensional case. There exist \(O\big (2^{3n^{2}}\big)\) candidate structures for a latent tree derived from n observed variables [39]. Learning the tree structure can only be efficiently addressed through iterative ascending clustering of the variables. Mourad and coworkers examined various such clustering-based approaches and their limitations [40]. In the latter work, 15 methods were compared, including FLTM (named CFHLC in the paper cited) ([40], page 183). FLTM was the method with the highest scalability. On the other hand, it has to be noted that in its very first version (i.e., a window-based version), FLTM was also tested on still larger datasets than in [40] (e.g., describing 100,000 variables [32]). For information, the WTCCC dataset (Crohn’s disease, chromosome 1) describes 37,267 SNPs. It is important to emphasize that the FLTM algorithm used in the present paper does not coerce the latent trees’ structure to a binary structure, does not impose a user-defined shared cardinality for all latent variables, and does not require contiguity (on the genome) for the variables to be clustered. A flexible and thus faithful modeling of linkage disequilibrium comes at a cost in the present (yet optimized) version of FLTM: high memory consumption and a high running time, which prevent scalability to the genome scale.
Conclusions and perspectives
In this paper, we put forth a hybrid approach combining two machine learning models, TTrees and FLTM, to enhance genome-wide association studies. We compared the performances of the integrated approach and of TTrees on simulated realistic genetic data. The integrated approach was shown to perform slightly better than TTrees for the additive and dominant genetic models. No substantial advantage is shown for the recessive model, except in a few cases. We then performed a comparative analysis of the predictive powers and SNPs’ score distributions for TTrees and the hybrid FLTM / TTrees approach on real datasets. The sophistication of TTrees through finer linkage disequilibrium modeling derived from FLTM is shown to be beneficial: on the datasets analyzed, the already high predictive power observed for TTrees is increased in the majority of cases. The distributions of SNPs’ scores generated by TTrees and the hybrid approach are shown to be statistically different. In particular, in a vast majority of cases, the hybrid method outputs top-ranked SNPs with relatively higher importances than TTrees does. More interesting SNPs are thus pinpointed than by TTrees, to be provided as a short list of prioritized SNPs for further analysis by biologists. Not only did we show that both methods can be used to pinpoint various top SNPs in a complementary way; complementarity can also be used to reinforce confidence in top SNPs. Therefore, we analyzed the pairwise and three-way intersections of SNPs ranked in the top 100s for the standard single-SNP GWAS, TTrees and the hybrid method. In particular, among the 211 top 100 SNPs jointly detected by the three methods over the 14 datasets analyzed, we identified 72 and 38 SNPs respectively present in the top 25s and top 10s of each method.
In future work, we will extend the comparative study of TTrees and the hybrid approach to more GWAS datasets. In particular, we will examine whether a method with a higher predictive power than another also tends to show inflated top SNPs’ importance quantiles. This paper was fully dedicated to the thorough comparison of TTrees and the FLTM / TTrees approach. In the future, we plan to design a broad study comparing the hybrid approach to a panel of other approaches derived from machine learning, such as logistic and penalized regressions, gradient boosting machines, ensemble methods, artificial neural networks, support vector machines and Bayesian network-based analysis. Besides, since the FLTM model is the key to the improvement brought by the hybrid approach over TTrees, several directions focused on FLTM learning need to be explored in future work. At a methodological level, we first plan to study (at a larger scale) whether the choice of the clustering algorithm used in FLTM learning impacts the conclusions of the present study. Second, a challenging perspective for future methodological work is to integrate consensus clustering into FLTM learning, especially as the downstream analysis at stake is a GWAS. Again, we plan to assess whether consensus clustering impacts the conclusions of the present study. Finally, at a technical level, the bottleneck to applying the hybrid approach on a genome-wide scale is the scalability of the FLTM learning algorithm. Efforts will be deployed to break this technological limitation.
Appendix
Abbreviations
 AUC:

Area under the curve
 DNA:

Deoxyribonucleic acid
 EM:

Expectationmaximization
 FLTM:

Forest of latent tree models
 GM:

Genetic model
 GRR:

Genotype relative risk
 GWAS:

Genomewide association study
 LCM:

Latent class model
 MAF:

Minor allele frequency
 ROC:

Receiver operating characteristic
 SNP:

Single nucleotide polymorphism
 WTCCC:

Wellcome Trust Case Control Consortium
References
Balding DJ. A tutorial on statistical methods for population association studies. Nat Rev Genet. 2006; 7:781–90. https://doi.org/10.1038/nrg1916.
DeWan A, Klein RJ, Hoh J. Linkage disequilibrium mapping for complex disease genes. Methods Mol Biol. 2007; 376:85–107.
Patil N, Berno AJ, Hinds DA, Barrett WA, Doshi JM, Hacker CR, et al. Blocks of limited haplotype diversity revealed by high-resolution scanning of human chromosome 21. Science. 2001; 294(5547):1719–23.
Liu N, Zhang K, Zhao H. HaplotypeAssociation Analysis In: Rao DC, Charles Gu C, editors. Genetic Dissection of Complex Traits: 2008. p. 335–405.
Akey J, Jin L, Xiong M. Haplotypes vs single marker linkage disequilibrium tests: what do we gain?Eur J Hum Genet. 2001; 9:291–300.
Bonizzoni P, Della Vedova G, Dondi R, Li J. The haplotyping problem: an overview of computational models and solutions. J Comput Sci Technol. 2003; 18(6):675–88.
Browning SR, Browning BL. Haplotype phasing: existing methods and new developments. Nat Rev Genet. 2011; 12:703–14.
Browning BL, Browning SR. Efficient multilocus association testing for whole genome association studies using localized haplotype clustering. Genet Epidemiol. 2007; 31:365–75.
He Y, Li C, Amos CI, Xiong M, Ling H, Jin L. Accelerating haplotype-based genome-wide association study using perfect phylogeny and phase-known reference data. PLoS ONE. 2011; 6(7):e22097. https://doi.org/10.1371/journal.pone.0022097.
Yang X, Yang Q, Zhao H, Yu W. HapBoost: a fast approach to boosting haplotype association analyses in genome-wide association studies. IEEE/ACM Trans Comput Biol Bioinform. 2013; 10(1):207–12.
Harb N, Saghir MAR, Dawy Z, Valderrama C. FPGA-based accelerator for bioinformatics haplotype inference application. In: Proc. International Conference on Bioinformatics & Computational Biology BIOCOMP (WORLDCOMP). Las Vegas, Nevada, USA: 2012;212:166.
Scheet P, Stephens M. A fast and flexible method for large-scale population genotype data: applications to inferring missing genotypes and haplotypic phase. Am J Hum Genet. 2006; 78(4):629–44.
Szymczak S, Biernacka JM, Cordell HJ, González-Recio O, König IR, Zhang H, et al. Machine learning in genome-wide association studies. Genet Epidemiol. 2009; 33 Suppl 1:S51–7.
Okser S, Pahikkala T, Airola A, Salakoski T, Ripatti S, Aittokallio T. Regularized machine learning in the genetic prediction of complex traits. PLoS Genet. 2014; 10(11):e1004754. https://doi.org/10.1371/journal.pgen.1004754.
Waldmann P, Mészáros G, Gredler B, Fuerst C, Sölkner J. Evaluation of the lasso and the elastic net in genome-wide association studies. Front Genet. 2013; 4:270. https://doi.org/10.3389/fgene.2013.00270.
Austin E, Pan W, Shen X. Penalized regression and risk prediction in genome-wide association studies. Stat Anal Data Min. 2013; 6(4). https://doi.org/10.1002/sam.11183.
Roshan U, Chikkagoudar S, Wei Z, Wang K, Hakonarson H. Ranking causal variants and associated regions in genome-wide association studies by the support vector machine and random forest. Nucleic Acids Res. 2011:1–8. https://doi.org/10.1093/nar/gkr064.
Motsinger-Reif AA, Ritchie MD. Neural networks for genetic epidemiology: past, present, and future. BioData Min. 2008; 1:3. https://doi.org/10.1186/1756-0381-1-3.
Thomas A, Camp NJ. Graphical modeling of the joint distribution of alleles at associated loci. Am J Hum Genet. 2004; 74(6):1088–101.
Verzilli CJ, Stallard N, Whittaker JC. Bayesian graphical models for genome-wide association studies. Am J Hum Genet. 2006; 79(1):100–12.
Touw WG, Bayjanov JR, Overmars L, Backus L, Boekhorst J, Wels M, et al. Data mining in the Life Sciences with Random Forest: a walk in the park or lost in the jungle? Brief Bioinform. 2013; 14(3):315–26. https://doi.org/10.1093/bib/bbs034.
Breiman L. Random forests. Mach Learn. 2001; 45(1):5–32.
Goldstein BA, Hubbard AE, Cutler A, Barcellos LF. An application of Random Forests to a genome-wide association dataset: methodological considerations & new findings. BMC Genet. 2010; 11:49. https://doi.org/10.1186/1471-2156-11-49.
Nguyen TT, Huang JZ, Wu Q, Nguyen TT, Li MJ. Genome-wide association data classification and SNPs selection using two-stage quality-based Random Forests. BMC Genomics. 2015; 16 Suppl 2:S5.
Kawaguchi A. Variable ranking by random forests model for genome-wide association study. In: Proc. International Multi-Conference of Engineers and Computer Scientists. Hong Kong: 2012. p. 16.
Botta V, Louppe G, Geurts P, Wehenkel L. Exploiting SNP correlations within random forest for genome-wide association studies. PLoS ONE. 2014; 9(4):e93379. https://doi.org/10.1371/journal.pone.0093379.
Sinoquet C, Mourad R. Modeling linkage disequilibrium and performing association studies through probabilistic graphical models: a visiting tour of recent advances. In: Sinoquet C, Mourad R, editors. Probabilistic graphical models for genetics, genomics, and post-genomics. Oxford University Press; 2014. p. 217–46.
Mourad R, Sinoquet C, Leray P. Probabilistic graphical models for genetic association studies. Brief Bioinform. 2012; 13(1):20–33. https://doi.org/10.1093/bib/bbr015.
Thomas A. A method and program for estimating graphical models for linkage disequilibrium that scale linearly with the number of loci, and their application to gene drop simulation. Bioinformatics. 2009; 25(10):1287–92. https://doi.org/10.1093/bioinformatics/btp146.
Abel HJ, Thomas A. Accuracy and computational efficiency of a graphical modeling approach to linkage disequilibrium estimation. Stat Appl Genet Mol Biol. 2011; 10(1):5. https://doi.org/10.2202/1544-6115.1615.
Edwards D. Modelling and visualizing fine-scale linkage disequilibrium structure. BMC Bioinformatics. 2013; 14:179. https://doi.org/10.1186/1471-2105-14-179.
Mourad R, Sinoquet C, Leray P. A hierarchical Bayesian network approach for linkage disequilibrium modeling and data-dimensionality reduction prior to genome-wide association studies. BMC Bioinformatics. 2011; 12(1):16. https://doi.org/10.1186/1471-2105-12-16.
Breiman L. Bagging predictors. Machine Learning. 1996; 24(2):123–40.
Ho T. The random subspace method for constructing decision forests. IEEE Trans Pattern Anal Mach Intell. 1998; 20(8):832–44.
Dietterich TG. An experimental comparison of three methods for constructing ensembles of decision trees: Bagging, Boosting, and Randomization. Mach Learn. 2000; 40(2):139–57.
Botta V. A walk into random forests. Adaptation and application to genome-wide association studies. PhD Thesis, University of Liège, Belgium; 2013.
Zhou ZH. Ensemble Methods: Foundations and Algorithms. Chapman and Hall/CRC; 2012.
Geurts P, Ernst D, Wehenkel L. Extremely randomized trees. Mach Learn. 2006; 63:3–42.
Zhang NL. Hierarchical latent class models for cluster analysis. J Mach Learn Res. 2004; 5:697–723.
Mourad R, Sinoquet C, Zhang NL, Liu T, Leray P. A survey on latent tree models and applications. J Artif Intell Res. 2013; 47:157–203.
Spencer CC, Su Z, Donnelly P, Marchini J. Designing genome-wide association studies: sample size, power, imputation, and the choice of genotyping chip. PLoS Genet. 2009; 5:e1000477.
The Wellcome Trust Case Control Consortium. Genome-wide association study of 14,000 cases of seven common diseases and 3,000 shared controls. Nature. 2007; 447:661–78.
Phan DT, Leray P, Sinoquet C. Modeling genetical data with forests of latent trees for applications in association genetics at a large scale. Which clustering should be chosen? In: Proc. International Conference on Bioinformatics Models, Methods and Algorithms, Bioinformatics 2015. Lisbon, Portugal: 2015. p. 5–16.
Mekhnacha K, Ahuactzin JM, Bessière P, Mazer E, Smail L. Exact and approximate inference in ProBT. Revue d'Intelligence Artificielle. 2007; 21(3):295–332.
Bessière P, Mazer E, Ahuactzin JM, Mekhnacha K. Bayesian Programming. Chapman and Hall/CRC; 2013.
Ester M, Kriegel HP, Sander J, Xu X. A density-based algorithm for discovering clusters in large spatial databases with noise. In: Proc. International Conference on Knowledge Discovery and Data Mining; 1996. p. 226–31.
Tan PN, Steinbach M, Kumar V. Cluster analysis: basic concepts and algorithms. In: Tan PN, Steinbach M, Kumar V. Introduction to Data Mining; 2005. p. 125–46.
Hanley JA, McNeil BJ. The meaning and use of the area under a receiver operating characteristic (ROC) curve. Radiology. 1982; 143:29–36.
Acknowledgements
The author is grateful to the anonymous reviewers for constructive comments and help in improving the manuscript. The software development and the realization of experiments were performed in part at the CCIPL (Centre de Calcul Intensif des Pays de la Loire, Nantes, France).
Funding
This research was supported in part by the French National Research Agency (ANR SAMOGWAS project).
Availability of data and materials
The datasets supporting the conclusions of this article are the property of the WTCCC organization. The GWAS datasets of the WTCCC Consortium are delivered upon request and are issued subject to conditions by the WTCCC Consortium (https://www.wtccc.org.uk/).
Author information
Authors and Affiliations
Contributions
Not applicable because the paper was written by a single author (CS). The author read and approved the final manuscript.
Corresponding author
Ethics declarations
Ethics approval and consent to participate
Not applicable.
Consent for publication
Not applicable.
Competing interests
The author declares that she has no competing interests.
Publisher’s Note
Springer Nature remains neutral with regard to jurisdictional claims in published maps and institutional affiliations.
Additional files
Additional file 1
Comparison of the distributions of variable importances across the SNPs, obtained from T-Trees and the hybrid FLTM / T-Trees approach. Table S1: Range of variation, average and standard deviation for the distributions of variable importances across the SNPs, obtained from T-Trees and the hybrid FLTM / T-Trees approach. Table S2: Wilcoxon rank sum test and Pearson correlation coefficient, to compare the distributions of variable importances across the SNPs, obtained from T-Trees and the hybrid FLTM / T-Trees approach. Tables S3 to S6: Comparison of the distributions of the variable importances, across the SNPs, for T-Trees and the hybrid FLTM / T-Trees approach, for the 14 datasets analyzed. (PDF 67.8 kb)
Additional file 2
SNPs jointly identified in the top 100s by two or three methods among the single-SNP, T-Trees and hybrid FLTM / T-Trees approaches. For each of the 14 datasets analyzed, 4 tables provide the ranks, variable importances and p-values of the SNPs in the top 100s jointly identified by: single-SNP and T-Trees approaches; single-SNP and hybrid approaches; T-Trees and hybrid approaches; single-SNP, T-Trees and hybrid approaches. (PDF 202 kb)
Additional file 3
SNPs jointly identified in the top 100s by two or three methods among the single-SNP, T-Trees and hybrid FLTM / T-Trees approaches: Venn diagrams. For each of the 14 datasets analyzed, a Venn diagram shows the number of top 100 SNPs jointly identified by any two of the three methods, or by all three methods considered. (PDF 122 kb)
Additional file 4
Characteristics of the SNPs jointly identified as top 100s by the single-SNP, T-Trees and hybrid FLTM / T-Trees approaches. Table 1: Characteristics of the SNPs jointly identified as top 100s by the single-SNP, T-Trees and hybrid FLTM / T-Trees approaches. (PDF 81.6 kb)
Rights and permissions
Open Access This article is distributed under the terms of the Creative Commons Attribution 4.0 International License (http://creativecommons.org/licenses/by/4.0/), which permits unrestricted use, distribution, and reproduction in any medium, provided you give appropriate credit to the original author(s) and the source, provide a link to the Creative Commons license, and indicate if changes were made. The Creative Commons Public Domain Dedication waiver (http://creativecommons.org/publicdomain/zero/1.0/) applies to the data made available in this article, unless otherwise stated.
About this article
Cite this article
Sinoquet, C. A method combining a random forest-based technique with the modeling of linkage disequilibrium through latent variables, to run multilocus genome-wide association studies. BMC Bioinformatics 19, 106 (2018). https://doi.org/10.1186/s12859-018-2054-0
DOI: https://doi.org/10.1186/s12859-018-2054-0